Data Visualisation: what is the traditional definition? Just an aggregation of graphs? Something dashboard-like?
But broadly speaking, Data Visualisation can be much more - especially in the form of computational art, every visualisation (real-time visualisation, parameter-based visualisation) is, in the end, data visualisation. 3D visuals/moving image are also a sort of visualisation (a reading of data), but they sit outside my terminology here - they play back a static set of pre-defined data, and we read the pixelated form rather than bit-by-bit data that directly influences the output. So I will limit the scope to web art/software art - but basically every possible output that web art, software art, and multi-device web artwork can occupy is a ‘data visualisation’.
Software art is an aggregation of data: it defines the relationship and interconnectedness between nodes and inputs - thus calculating the data, processing the data, and showing the data. When the output of a software artwork shows a chunk of text, it is basically showing the processed data and visualising it (in the form of text as an aesthetic unit). When the software art displays a 3D sculpture morphing according to user interaction, it is showing a data visualisation. When it shows a pack of web images - that is data visualisation. All colour, lighting, and text - every piece of information is ultimately determined and processed from data into a form of visualisation.
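As a minimal sketch of this data-to-visual pipeline (all names here are hypothetical, not any existing library), the fragment below maps raw data directly onto colour, text, and position, so the rendered frame is nothing other than processed data:

```typescript
// Minimal sketch: every visual property is derived from incoming data,
// so the output frame is itself a data visualisation.

type Datum = { value: number; label: string };

// Visual parameters fully determined by data: colour, text, position.
type VisualUnit = { colour: string; text: string; x: number; y: number };

// "Processing the data": map each datum to an aesthetic unit.
function visualise(data: Datum[]): VisualUnit[] {
  const max = Math.max(...data.map(d => d.value), 1);
  return data.map((d, i) => ({
    colour: `hsl(${(d.value / max) * 360}, 80%, 60%)`, // hue driven by value
    text: d.label,                                      // text as an aesthetic unit
    x: i * 40,                                          // layout driven by order
    y: 200 - (d.value / max) * 200,                     // height driven by value
  }));
}

// In a real piece these units would drive a canvas/WebGL renderer;
// here we only print them to show that the output is processed data.
console.log(visualise([
  { value: 3, label: 'a' },
  { value: 7, label: 'b' },
  { value: 5, label: 'c' },
]));
```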
Highly-modularised software art (pure software, meaning it relies less on pre-defined/installed assets and more on real-time generation and interactive, parameter-based elements) should always lean more towards data visualisation. The more computational/generative the output of software art and multi-device web art is, the greater the role played by real-time data and the computational algorithms that process it. Thus, in these terms, purely modularised/algorithm-oriented computational art will, at least theoretically, rely solely on the interconnection and the inter-data algorithm - so the connections and the assets (data) are what matter. Thus the output will, theoretically, always display data visualisation (the processed relationship between data).
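A rough sketch of this "interconnection over assets" idea, under assumed names: a small node graph where edges define how data feeds data, re-evaluated on every interaction, with no pre-installed asset involved.

```typescript
// Sketch (hypothetical names): nodes hold live data, edges define how one node's
// output feeds another's input, and the frame is recomputed from the graph
// on every interaction rather than played back from stored material.

type NodeFn = (inputs: number[]) => number;

interface GraphNode {
  id: string;
  fn: NodeFn;          // how this node processes its inputs
  inputs: string[];    // which nodes feed it (the interconnection)
}

// Evaluate the graph from source values; only relationships, no assets.
function evaluate(nodes: GraphNode[], sources: Record<string, number>): Record<string, number> {
  const values: Record<string, number> = { ...sources };
  for (const node of nodes) {
    const args = node.inputs.map(id => values[id] ?? 0);
    values[node.id] = node.fn(args);
  }
  return values;
}

// Example: user interaction (pointer position) is a source; hue and scale are derived nodes.
const graph: GraphNode[] = [
  { id: 'hue',   inputs: ['pointerX'],        fn: ([x]) => (x % 1) * 360 },
  { id: 'scale', inputs: ['pointerY', 'hue'], fn: ([y, h]) => 0.5 + y * (h / 360) },
];

// Each interaction re-evaluates the whole relationship, so the output is always
// a visualisation of the current data state.
console.log(evaluate(graph, { pointerX: 0.3, pointerY: 0.8 }));
```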
This becomes even more possible with the use of generative AI - by definition, the combination of data to generate/amplify information. The most widely used form, for example, is Text-to-Image: low-bit text data (which can be direct user input, or user input processed from non-semantic data) is transformed into a high-bit image output. An LLM plays a more indirect but playful role: the LLM itself can write an algorithm, so that chunk of algorithm can itself be generated. (We might still need an additional validation process for this algorithmic result.) Before, only the assets were generative; now the interconnection itself can be generated. This enables even more indirect and surprising results than mono-dimensional generative AI, and can thus be used to create a highly-modularised, automatic, system-based, platformatic software art.
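As a sketch of this generated-interconnection idea (the endpoint URL, payload shape, and helper names below are assumptions, not any specific provider's API): an LLM is asked to write a small mapping function as code, the returned code passes a minimal validation step, and only then is it wired between interaction data and a visual parameter.

```typescript
// Sketch: the inter-connection itself is generated, then validated before use.

// Hypothetical call to some LLM endpoint; URL and payload shape are assumptions.
async function requestMappingCode(prompt: string): Promise<string> {
  const res = await fetch('https://example.com/llm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  const { code } = await res.json();
  return code; // e.g. "(x, y) => Math.sin(x * 10) * y"
}

// The additional validation step mentioned above: check that the generated
// connection is a finite numeric function before letting it drive the output.
// (A real piece would want stronger sandboxing than new Function.)
function validateMapping(code: string): ((x: number, y: number) => number) | null {
  try {
    const fn = new Function(`return (${code});`)() as (x: number, y: number) => number;
    const probe = fn(0.5, 0.5);
    return Number.isFinite(probe) ? fn : null;
  } catch {
    return null;
  }
}

async function main() {
  const code = await requestMappingCode(
    'Write a JavaScript arrow function (x, y) => number mapping two values in [0,1] to [0,1].'
  );
  const mapping = validateMapping(code);
  if (!mapping) return; // reject the generated interconnection if it fails validation
  // The generated relationship now connects interaction data to a visual parameter.
  console.log('brightness =', mapping(0.3, 0.8));
}

main();
```

The point of the sketch is only the shift it illustrates: the asset is no longer the generated thing; the relationship that produces the visualisation is.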
WHAT WOULD BE A PRACTICAL EXAMPLE OF THIS? WHAT WOULD BE A DESCRIPTIVE EXAMPLE OF THIS?
Text written by Jeanyoon Choi
Ⓒ Jeanyoon Choi, 2025