Semantic Interaction ↔ Phenomenological Interaction
It is always important to mix these two trade-off values: Semantic-yet-Phenomenological or Phenomenological-and-Semantic.
One different approach: there is an explicit commercial interface (which users are familiar with), and while the user interacts with it, the behavioural aspect of the interaction (the (x, y, t) coordinate, for instance: (x, y) the mobile screen coordinate, t the time of interaction) is collected and transformed into a different layer, acting as a parametric input that changes the characteristics of the system.
For instance, say there is a mimicked e-commerce shopping application. Very intuitive to use; users have seen many interfaces like this before. They interact, and there is a sort of multi-device semantic interaction going on here (let's call this Interaction Layer A). But this is a rather traditional interaction, possibly cheesy, not really a breakthrough. BUT at the same time, while the user is interacting with the commercial interface, the collectable data is not only the semantic button click (i.e. product click, product quantity, checkout, etc.) → there is behavioural data collected behind the scenes, for example:
Type 1: Regarding fingers. The (x, y, t) coordinates of finger-based interaction. The most conventional data that can be employed.
Type 2: Regarding hands. Device motion & acceleration tracking. Needs user consent in advance (accelerometer).
Type 3: Regarding the face. Facial recognition (another kind of phenomenological data). Needs user consent in advance (camera).
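A minimal sketch of how this behind-the-scenes collection could look in a web context, assuming standard browser APIs (touch events, DeviceMotionEvent.requestPermission on iOS, getUserMedia for the camera); the buffer names are just illustrative, not taken from any of my actual works:

```typescript
// Sketch: collecting the three phenomenological data types in a browser.
// Buffer names (fingerSamples, motionSamples) are illustrative assumptions.

type FingerSample = { x: number; y: number; t: number };            // Type 1: fingers
type MotionSample = { a: number; b: number; c: number; t: number }; // Type 2: hands (device acceleration)

const fingerSamples: FingerSample[] = [];
const motionSamples: MotionSample[] = [];

// Type 1: (x, y, t) of every touch move, collected silently behind the commercial UI.
window.addEventListener('touchmove', (e: TouchEvent) => {
  const touch = e.touches[0];
  fingerSamples.push({ x: touch.clientX, y: touch.clientY, t: performance.now() });
});

// Type 2: device motion needs explicit consent on iOS (DeviceMotionEvent.requestPermission).
async function startMotionTracking(): Promise<void> {
  const DME = DeviceMotionEvent as unknown as { requestPermission?: () => Promise<string> };
  if (typeof DME.requestPermission === 'function') {
    const state = await DME.requestPermission();
    if (state !== 'granted') return;
  }
  window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
    const acc = e.accelerationIncludingGravity;
    if (acc && acc.x != null && acc.y != null && acc.z != null) {
      motionSamples.push({ a: acc.x, b: acc.y, c: acc.z, t: performance.now() });
    }
  });
}

// Type 3: the camera stream for facial data also needs consent (getUserMedia prompts the user).
// Facial parameters would then be extracted from the video by a separate model.
async function startFaceTracking(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
}
```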
These are all 'phenomenological' data, which I name 'Type B' in the following categorisation:
Type A (Direct, Explicit Interaction): Buttons, Options, Sliders, Toggles, Checkboxes, Multi-touch, Number Input, List Selection…
Type B (Implicit, Sensor-based Interaction): Eye Tracking, Voice Recognition, Body Movement, Head Pose, EEG, …
Type C (Language-based Semantic Interaction): Text Input, Voice Recognition
Type D (Environmental/Contextual): External APIs, External Data
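One way to picture the categorisation is as a data structure that routes each incoming event to its layer; this is a sketch with names I made up for illustration, not part of any existing system:

```typescript
// Sketch: the four-way categorisation as a TypeScript discriminated union.
type InteractionEvent =
  | { kind: 'A'; source: 'button' | 'slider' | 'toggle' | 'checkbox' | 'multiTouch'; value: unknown }        // direct, explicit
  | { kind: 'B'; source: 'eyeTracking' | 'bodyMovement' | 'headPose' | 'accelerometer'; sample: number[]; t: number } // implicit, sensor-based
  | { kind: 'C'; source: 'textInput' | 'voiceRecognition'; utterance: string }                                // language-based semantic
  | { kind: 'D'; source: 'externalApi'; payload: unknown };                                                   // environmental / contextual

function route(event: InteractionEvent): void {
  switch (event.kind) {
    case 'A': /* feed the explicit commercial UI (Interaction Layer A) */ break;
    case 'B': /* feed the dimensional transformation (Interaction Layer B) */ break;
    case 'C': /* hand to a language model for contextual adjustment */ break;
    case 'D': /* merge in as environmental context */ break;
  }
}
```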
So: implicit, sensor-based interaction. These data might not carry a strong semantic sense (there is semantic content embedded, but not much, especially for fingers and hands), but we do have strong physical-movement-based positional data: (x, y, t), (a, b, c, t), ((facial params), t) → from 3D data to high-dimensional data → which can be used for a 'dimensional transformation', creating different kinds of effects in the system.
In short, Physical Data → Dimensional Transformation → Adjusted System Params → Affected System Behaviour.
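A minimal sketch of what one such dimensional transformation could look like for the finger stream; the derived features (mean speed, total turning) and the target params (distortion, hue) are hypothetical choices, just to make the pipeline concrete:

```typescript
// Sketch of the pipeline: Physical Data → Dimensional Transformation → Adjusted System Params.

type FingerSample = { x: number; y: number; t: number };
type SystemParams = { distortion: number; hue: number }; // hypothetical target params

function dimensionalTransform(samples: FingerSample[]): SystemParams {
  if (samples.length < 2) return { distortion: 0, hue: 0 };

  let totalSpeed = 0;
  let totalTurn = 0;
  let prevAngle: number | null = null;

  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    const dt = Math.max(samples[i].t - samples[i - 1].t, 1); // ms, avoid division by zero
    totalSpeed += Math.hypot(dx, dy) / dt;                   // px per ms

    const angle = Math.atan2(dy, dx);
    if (prevAngle !== null) totalTurn += Math.abs(angle - prevAngle);
    prevAngle = angle;
  }

  const meanSpeed = totalSpeed / (samples.length - 1);
  return {
    distortion: Math.min(meanSpeed * 10, 1), // faster strokes push the distortion towards 1
    hue: (totalTurn / Math.PI) % 1,          // more turning (jitter) shifts the colour palette
  };
}

// Usage: const params = dimensionalTransform(fingerSamples); // then drive the scene with params
```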
And although this is an implicit/embedded interaction (Interaction Layer B), my assumption is: once users have understood the mechanism behind physical data → affected system behaviour (and spotted the cognitive correlation/causality), they will start to interact with this new layer of physical data (which is transformed into semantic system params) and have fun playing with it, interacting phenomenologically and intuitively, yet affecting the system semantically (i.e. meaningfully).
Real-world examples (although I am not 100% satisfied with them):
Omega: my artwork. The user first goes through an MBTI-test-like setting. But upon an accidental 'shake', the interface starts to distort. Each shake triggers 'Omega', a sign of resistance, propagating to all four connected channels. From then on, shaking the phone also transforms the angle of the whole scene. In short: Accelerometer → Procedural Storytelling (triggering 'Omega' and distorting), Scene Rotation.
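The Omega-style trigger could be sketched roughly like this; the threshold value and the event plumbing are assumptions for illustration, not the actual Omega code (a multi-device version would broadcast over e.g. WebSockets instead of a local event):

```typescript
// Sketch: a sudden accelerometer spike fires an 'omega' event that the interface reacts to.
const SHAKE_THRESHOLD = 25; // m/s², a rough guess for a deliberate shake

window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const acc = e.acceleration;
  if (!acc || acc.x == null || acc.y == null || acc.z == null) return;
  const magnitude = Math.hypot(acc.x, acc.y, acc.z);
  if (magnitude > SHAKE_THRESHOLD) {
    // Propagate the 'omega' sign of resistance to whatever is listening.
    window.dispatchEvent(new CustomEvent('omega', { detail: { magnitude, t: performance.now() } }));
  }
});

// Elsewhere, the interface listens and starts to distort the scene / the storytelling.
window.addEventListener('omega', (e) => {
  const { magnitude } = (e as CustomEvent).detail;
  console.log('Omega triggered, distortion magnitude:', magnitude);
});
```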
SoTA: also an artwork that I led. The user first encounters an infinite-scrolling interface with 118 different Neural Networks (also an example of semantic → phenomenology: at first the user clicks on each option independently, then later just scrolls down for the sake of fun), which is an intuitive interaction. But the user soon realises that as they shake their phone, the whole 3D three.js scene is shaken accordingly. Here: Accelerometer → Scene Rotation.
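The accelerometer → scene rotation mapping in this kind of setup could look roughly like this, assuming an existing three.js scene; the smoothing factor and rotation gains are arbitrary illustrative values, not the actual SoTA implementation:

```typescript
import * as THREE from 'three';

// Sketch: device tilt (gravity vector) → rotation of the whole three.js scene.
const scene = new THREE.Scene();
const smoothing = 0.1;
let tiltX = 0;
let tiltY = 0;

window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const g = e.accelerationIncludingGravity;
  if (!g || g.x == null || g.y == null) return;
  // Low-pass filter the gravity vector so the rotation follows the tilt, not the noise.
  tiltX += smoothing * (g.x - tiltX);
  tiltY += smoothing * (g.y - tiltY);
});

function animate(): void {
  requestAnimationFrame(animate);
  // Map device tilt directly onto the scene's rotation (phenomenological → spatial).
  scene.rotation.y = tiltX * 0.05;
  scene.rotation.x = tiltY * 0.05;
  // renderer.render(scene, camera); // rendering omitted for brevity
}
animate();
```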
But I haven't used the other two (fingers, face) extensively yet. Nor have I used the data these provide, (a, b, c, t), extensively. Nor have I actually 'transformed' this data into a different dimension and applied it as system params. Other than Omega (where a sudden accelerometer spike triggers 'Omega'), there has been no example where phenomenological Type B interaction actually affected the system dynamics in the first place (the accelerometer was mainly used to rotate the 3D scene).
What would be a more speculative example of using this? Maybe start from small examples? (Like blinking? Gazing? But I also want to start from finger-based interaction, e.g. hammer.js.) How to design a dimensional transformation between phenomenological data → semantic data effectively, intuitively, surprisingly, and transformatively?
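For the finger-based starting point, one option could be Hammer.js gesture recognizers feeding Type B data instead of (or alongside) raw touch coordinates; the mapping onto 'turbulence' and 'zoom' params below is a hypothetical illustration, and the typings assume @types/hammerjs:

```typescript
import Hammer from 'hammerjs';

// Sketch: Hammer.js gestures on the commercial UI reinterpreted as Type B system params.
const systemParams = { turbulence: 0, zoom: 1 }; // hypothetical params

const mc = new Hammer(document.body);
mc.get('pinch').set({ enable: true }); // pinch recognition is disabled by default in Hammer.js

// Pan velocity (how hurriedly the finger moves) modulates a continuous parameter.
mc.on('pan', (ev: HammerInput) => {
  const speed = Math.hypot(ev.velocityX, ev.velocityY);
  systemParams.turbulence = Math.min(speed, 1);
});

// Pinch scale maps onto another parameter, still without any explicit button semantics.
mc.on('pinch', (ev: HammerInput) => {
  systemParams.zoom = ev.scale;
});
```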
This might also serve as a commentary on the homogeneous design trend / commercial UI trend → remember how, back in the old days when the smartphone first came out, we all used to play 'phenomenology'-based games: Temple Run, Angry Birds, rhythm games? All involving finger interaction only? The finger is the best instrument; we can do (and could have done) so many things with it. But now all we do for the whole day is scroll… going through Reels, YouTube Shorts, just looking at junk content, scrolling and swiping from one thing to another… How pathetic is this?
So this is the starting point of the interaction: a commercial interface, so easy to use, so well known, we just scroll, we just use it the way we are used to… But suddenly, the hidden embedded version of interaction arises, phenomenologically, because you notice that some deeply embedded interaction affects the system you encounter in an unexpected manner, and tada! You grasp this causal connection, and thereafter you interact and adjust your behaviour accordingly. Isn't that wonderful?
When talking about changing the characteristics of the system (through interaction), there are three possible approaches:
Approach 1: Parametric Adjustment (usually for numerical, continuous values)
Approach 2: Categorical Adjustment (the MBTI test, or selecting one of the 118 Neural Network models, for example; quite similar to parametric adjustment, except that parametric is continuous and categorical is discrete)
Approach 3: Contextual Adjustment (traditionally impossible, now possible through LLMs and AI: any semantic/language-based contextual information can be adjusted & applied to the system and affect the system dynamics, just as parametric numbers/categorical variables do)
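As function signatures, the three approaches could be sketched like this; the state fields and the '/api/contextualise' endpoint are hypothetical placeholders standing in for whatever backend wraps the LLM:

```typescript
// Sketch: the three adjustment approaches over a hypothetical system state.
type SystemState = { speed: number; palette: string; narrativeTone: string };

// Approach 1: parametric adjustment, a continuous numerical value clamped into range.
function adjustParametric(state: SystemState, value: number): SystemState {
  return { ...state, speed: Math.max(0, Math.min(1, value)) };
}

// Approach 2: categorical adjustment, one discrete option out of a fixed set
// (e.g. an MBTI result, or one of the 118 neural network models).
function adjustCategorical(state: SystemState, option: 'warm' | 'cold' | 'neon'): SystemState {
  return { ...state, palette: option };
}

// Approach 3: contextual adjustment, language-based context interpreted by an LLM
// and folded back into the system state.
async function adjustContextual(state: SystemState, utterance: string): Promise<SystemState> {
  const res = await fetch('/api/contextualise', { // hypothetical backend wrapping an LLM
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ utterance }),
  });
  const { tone } = await res.json();              // e.g. { tone: 'melancholic' }
  return { ...state, narrativeTone: tone };
}
```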
Text written by Jeanyoon Choi
Ⓒ Jeanyoon Choi, 2025