Jeanyoon Choi


Semantic Interaction: Dimensional Transformation

Updated: 2/8/2025

I have talked about the importance of semantic interaction and how we should design it. The main challenge: people are not eager to engage with semantic interaction from the first moment. When confronted with an empty text field, they don't know what to write; they just type a random word (which is the case for myself). The same holds for voice interaction, word interaction, etc. When talking about semantics, 'language'-based interaction naturally comes to mind first, but within an exhibition setting, the visitor is an alien to the work and unlikely to fulfil this linguistic requirement in any full manner. Audiences usually just murmur or skip through such interaction without any deep semantic meaning…
This is also why many interactions within new media art focus purely on non-semantic interaction, especially movement-based interaction, which does not require active, semantic involvement from the user's side: body tracking, face tracking, bodily movement, EEG, etc. BUT WE CAN DO MORE THAN THIS. These interactions are non-semantic, and as a result they usually end up as abstract audio-visual control without deeper meaning or connection to society. BUT we need a way to carefully design semantic interaction and thus open up a new field of expression. How? How do we make a non-engaged, passive audience involve themselves in semantic interaction?
One possible method: the Dimensional Transformation of non-semantic interaction into semantic output. Through this, audiences naturally create a semantic connection within the interaction by themselves.
We keep sticking with the old-school physical non-semantic interactions: body tracking, face tracking, bodily movement, EEG… all of which convert physical (and mostly non-semantic) movement into x, y (2D) or x, y, z (3D) coordinates. What we can do is map these x, y positional params onto different dimensions of the output world/system - a dimensional transformation. As a very naive example, the x and y params can be transformed into a t axis and a $ axis (a temporal-monetary plane), respectively.
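To make this concrete, here is a minimal sketch of such a mapping in a web context - the coordinate ranges, axis bounds, and function names are my own hypothetical choices, assuming a tracker that already delivers coordinates normalised to [0, 1]:

```typescript
// Hypothetical dimensional transformation: tracked 2D position -> (t, $) plane.
// Assumes the tracker delivers coordinates normalised to [0, 1].

interface TrackedPoint {
  x: number; // horizontal position, 0..1
  y: number; // vertical position, 0..1
}

interface SemanticPoint {
  t: Date;     // temporal axis
  usd: number; // monetary axis
}

// Example axis bounds (arbitrary): one century of time, up to one million dollars.
const T_START = new Date("1950-01-01").getTime();
const T_END = new Date("2050-01-01").getTime();
const USD_MAX = 1_000_000;

function toTemporalMonetaryPlane(p: TrackedPoint): SemanticPoint {
  return {
    t: new Date(T_START + p.x * (T_END - T_START)), // x -> time
    usd: p.y * USD_MAX,                             // y -> money
  };
}

// A hand moving left to right now scrubs through a century;
// raising the hand scales the monetary value.
console.log(toTemporalMonetaryPlane({ x: 0.5, y: 0.2 }));
```

The gesture on the input side stays untouched and non-semantic; only the output domain changes, and that is where the semantic reading emerges.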
Going one step further: traditional web-based interfaces are good at this. When we scroll through Instagram/YouTube/Google, we sometimes scroll just for the sake of scrolling - for the sake of the act of interaction itself (we are somewhat addicted to it) - and then we are confronted with the content that appears in response. We are performing a non-semantic interaction that results in semantic output. Hence my assumption: present a practical, user-friendly interface (i.e. one the user has seen a lot) → the user performs some interaction (scrolling down, clicking a button, etc.) → this becomes a good gateway to open up semantic interaction, more natural than presenting an empty text field and requiring the user to fill it out. From the input side, it is cognitively easier to click buttons or scroll through a familiar interface, and this creates semantic output within the surroundings - as sketched below. Such interaction still carries more semantics than requiring the user to dance, use their facial expressions, or connect to EEG sensors. All of this is possible within a multi-device web artwork context.
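A hedged sketch of this gateway idea (the element ID, the layer list, and the broadcast hook below are all hypothetical placeholders, not an existing API of any particular artwork):

```typescript
// Hypothetical example: a familiar scroll gesture (non-semantic input)
// selects semantic content layers for the surrounding displays.

const semanticLayers = ["personal", "communal", "societal", "planetary"];

const feed = document.getElementById("feed"); // assumed scrollable container
if (feed) {
  feed.addEventListener("scroll", () => {
    // Normalised scroll depth in [0, 1]: the act itself carries no meaning...
    const depth = feed.scrollTop / (feed.scrollHeight - feed.clientHeight);
    // ...but it indexes into semantic layers of content.
    const i = Math.min(
      semanticLayers.length - 1,
      Math.floor(depth * semanticLayers.length)
    );
    broadcastToOtherDevices({ layer: semanticLayers[i] });
  });
}

// Hypothetical stand-in for the artwork's device-sync channel
// (in practice this could be a WebSocket or a shared database).
function broadcastToOtherDevices(payload: { layer: string }) {
  console.log("broadcast:", payload);
}
```

The user only performs the habitual scroll; the semantic layer switching happens around them, which is exactly the inversion described above.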
WITHIN THESE PROCESSES, DOMAIN TRANSFORMATION (the unexpected transformation from one domain to another) might be the important part: this is entirely a matter of artistic imagination.


Text written by Jeanyoon Choi

Ⓒ Jeanyoon Choi, 2025