Jeanyoon Choi


Conducting Screens

Updated: 3/20/2024

Think of the case: One Mobile, Multiple Screens. The audience uses the mobile device to alter the contents spread across the screens.
Lots of options in the contemporary use of mobile devices… scroll, touch, pinch, lots of finger-based gestures on the touch interface. But what is lacking?
A more phenomenological approach?
As most mobile devices nowadays have accelerometers and motion sensors, why not use them?
This is like an act of conducting… conducting, with the mobile phone. The gesture resonates perfectly with the act.
Beyond using just ‘fingers’, as in our daily mobile interaction… now it’s time to expand to the whole ‘hand’. A beautiful, poetic, phenomenological transition from the finger to the hand.
And what more? Is it just the act of using the hand and the motion sensor on the mobile input end? → We also have the multi-device output on the screens side (screens can be laptops/projectors, but the key factor is the multi-device format).
Just like the experience of orchestration, the conducting → A conductor never conducts a single instrument; they manage the whole rhythm made by a number of instruments placed across the space…
Similarly, once we ‘conduct’ on the mobile, the output devices react correspondingly, creating a harmonic audio-visual placed around the space (now I’m thinking these could also be mobile devices, no need to limit the scope to computers, though consider the price per device)… The input conduction and the output devices’ audio-visuals resonate harmonically.
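A minimal sketch of this overall form, assuming a WebSocket relay on Node.js with the `ws` package (one arbitrary stack choice, nothing more): the conducting phone streams its sensor samples to a server, and the server fans each sample out to every connected screen.

```javascript
// Relay sketch (assumed stack: Node.js + the `ws` package).
// The conducting phone connects and streams sensor samples;
// every other connected client (the screens) receives them.
import { WebSocket, WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Fan the conductor's sample out to all the screen clients
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```

Every screen receives the same conducted stream; what differs is how each screen interprets it, which is exactly the content question.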
Now the real problem: what kind of content is altered within the act of conducting? We have a sort of overall form: the user conducts via mobile, and the screens react correspondingly. Now the problem is: what will be the context? What will be the specific use cases? What will be the potential narrative / storytelling / experience-telling?
What will be the trackable input?
JavaScript device motion / device acceleration (the `devicemotion` event)
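A minimal reading sketch with the standard `devicemotion` event (iOS Safari only exposes the sensor after a permission request triggered by a tap; the relay wiring at the end is hypothetical):

```javascript
// Sketch: reading the conductor's phone via the DeviceMotion API.
async function startConducting(onSample) {
  // iOS 13+ requires an explicit permission request from a user gesture
  if (typeof DeviceMotionEvent !== 'undefined' &&
      typeof DeviceMotionEvent.requestPermission === 'function') {
    await DeviceMotionEvent.requestPermission();
  }
  window.addEventListener('devicemotion', (event) => {
    onSample({
      acceleration: event.acceleration,  // m/s² per axis, gravity removed
      rotationRate: event.rotationRate,  // deg/s around alpha/beta/gamma
    });
  });
}

// Hypothetical wiring: forward every sample to the relay
// const socket = new WebSocket('ws://localhost:8080');
// startConducting((sample) => socket.send(JSON.stringify(sample)));
```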
What can this induce?
A. A 3D scene, transforming/rotating in correspondence with the motion? (See the first sketch after this list.)
The scene view itself might rotate
Or the object might morph accordingly
Or it can be both
B. Spatial audio: the audio somehow increases/decreases in correspondence with the acceleration?
C. More primitively: screens turning on/off, white ↔ black, based on the acceleration?
Kind of speculating on the primitive approach
Can there be potential intersections with projection mapping?
D. How about morphing the 2D graphics? Speculating on Fluid Interfaces?
E. How about morphing screens full of texts/numbers? How about morphing maps? 
F. How to harmonically combine the motion and the acceleration data, and resonate with both? (See the second sketch after this list.)
G. Maybe it can be very poetic… Narrative-based, storytelling-based, very subtle, the texts wandering around… As the user conducts, some kind of melody (an audio-visual melody) continuously generates → a kind of story emerges from these multiple screens. This references the primitive orchestral form of conducting more closely.
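For idea A, a screen-side sketch, assuming Three.js as the renderer and the message shape from the relay sketch above (both arbitrary choices): integrate the relayed rotation rate into an object’s orientation, so the scene visibly follows the conducting gesture.

```javascript
// Sketch for idea A (assumed library: Three.js; assumed message shape:
// { rotationRate: { alpha, beta } } in deg/s, relayed from the phone).
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const object = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
scene.add(object);

let rate = { alpha: 0, beta: 0 }; // latest conducted rotation rate, deg/s
const socket = new WebSocket('ws://localhost:8080');
socket.onmessage = (e) => { rate = JSON.parse(e.data).rotationRate ?? rate; };

renderer.setAnimationLoop(() => {
  // Integrate rate into orientation; at ~60 fps, dt ≈ 1/60 s
  object.rotation.y += THREE.MathUtils.degToRad(rate.alpha) / 60;
  object.rotation.x += THREE.MathUtils.degToRad(rate.beta) / 60;
  renderer.render(scene, camera);
});
```

Rotating `camera` instead of `object` gives the “scene view itself rotates” variant; morphing the geometry instead gives the object variant; doing both gives the third.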
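For idea F, one hedged way to fuse the two streams: reduce each to a magnitude, blend them with assumed weights, and low-pass filter the result so it feels conducted rather than jittery; the single output value can then drive audio gain and visual intensity together.

```javascript
// Sketch for idea F: fusing acceleration and rotation into a single
// "conducting energy" in [0, 1]. All constants here are assumptions.
let energy = 0;

function conductingEnergy({ acceleration, rotationRate }) {
  const a = Math.hypot(
    acceleration?.x ?? 0, acceleration?.y ?? 0, acceleration?.z ?? 0
  );                                           // m/s²
  const r = Math.hypot(
    rotationRate?.alpha ?? 0, rotationRate?.beta ?? 0, rotationRate?.gamma ?? 0
  );                                           // deg/s
  const raw = Math.min(1, a / 20 + r / 720);   // normalize; weights assumed
  energy += 0.1 * (raw - energy);              // low-pass: smooth the gesture
  return energy;                               // drive gain, brightness, etc.
}
```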

Also, can there be a derivative form?
A. Why only mobiles? What about the mouse? Doing the same thing with a mouse might also be poetic, and a bit retro (there might be more artistic references there). A couple of interesting web artworks might have tried this, but their output was not really multi-device (it is usually limited to a single screen/tab output). How can this be expanded towards multi-device?
B. What about having two or more audience members? Interfering with each other? Two or more mobile phones? Can we use the difference, the resonance, and the tension generated between them? Remember to use sine/cosine-shaped wave functions when tracking these systems (a sketch follows below). What about when the number of audience members is much higher? 100 or so?
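A sketch of that sine/cosine tracking, under stated assumptions: each audience member’s conducting is reduced to a phase (how the phase is extracted from the motion data is left open), the waves are superposed, and the tension is read off the phase difference.

```javascript
// Sketch for derivative idea B: interference between N conductors.
const BASE_HZ = 0.5;                    // shared base frequency (assumed)
const w = 2 * Math.PI * BASE_HZ;

// Superpose one sine wave per audience member; normalizing by N keeps
// the output in [-1, 1] even when the audience grows to 100 or more.
function superpose(phases, t) {
  const sum = phases.reduce((acc, p) => acc + Math.sin(w * t + p), 0);
  return sum / phases.length;
}

// Tension between two conductors: 0 in unison, 1 in full opposition
function tension(p1, p2) {
  return Math.abs(Math.sin((p1 - p2) / 2));
}
```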


Text written by Jeanyoon Choi

Ⓒ Jeanyoon Choi, 2025