


BECOMING DATA
hyperSENSE: Augmenting Human Experience in Environments
40°26'36.5"N 79°56'43.6"W


"Becoming Data" is an interactive exploration of human identity in the digital age. This project uses TouchDesigner to explore blob tracking and motion detection, using algorithm-based design to visualize human motion data from a machine’s perspective. Through real-time motion capture, participants' movements are translated into data points and displayed as lines and rectangles. These shapes represent the traces and clusters of motion, each assigned a unique ID. The project challenges viewers to consider how machines perceive and categorize human behavior, highlighting the disconnect between the complexity of human identity and the binary logic of data and potentially reshaping our understanding of human-machine interactions.


Augmenting Human Experience in Environments, Human-Computer Interaction
Algorithm-based Design, Motion Tracking, Data Graphing


Demo



“In the post-modern era, people are data, and data are people.

We have entered the age of Da(t)aist.”



This statement, proclaimed at the opening of "The Book of Sand" exhibition at the Aiiiii Art Center in Shanghai on October 30, 2021, captures the essence of our current epoch, where the distinction between humans and data increasingly blurs.

The project also draws on the book Spooky Technology, a reflection on the invisible and otherworldly qualities in everyday technology, and on the concept of animism, which ascribes consciousness to all entities. It probes the interactions between humans and technology: in the realm of modern technology, machines are not merely passive repositories or conduits of human will; rather, they embody the meanings and capabilities we instill in them. But if the conventional roles were inverted, what would humans look like through the eyes of a machine?



Methodology

01 Input

Motion Sensing (camera input): 
This initial stage captures human motion with cameras. The motion data collected here serves as the primary input for the subsequent processing stages: the cameras detect participants' movements and gestures, converting physical activity into digital data that can be analyzed and visualized.

This step applies real-time video processing: the Threshold component (TOP) in TouchDesigner creates a mask from the live video feed.
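For readers outside TouchDesigner, the same masking step can be approximated with OpenCV in Python. This is a minimal sketch, not the project's actual network; the camera index and the threshold cutoff of 60 are illustrative assumptions:

```python
# Minimal OpenCV sketch of the masking step (assumed settings, not the
# project's actual TouchDesigner network).
import cv2

cap = cv2.VideoCapture(0)  # assumed default webcam, analogous to a Video Device In
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than the cutoff become white (255), the rest black (0),
    # yielding the binary mask that later stages sample for motion data.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```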

Human Interaction (Receiving Signals from Humans):
Once motion is sensed, the system processes these signals to identify specific movements and gestures.




02 Generative Design


Algorithm to Generate Connecting Lines of Motion: 
In this phase, the project applies generative design principles, using the Delaunay triangulation algorithm to translate the motion data into visual representations.


Delaunay triangulation’s basic rule is to connect points into a mesh of triangles such that no point lies inside the circumcircle of any triangle. This tends to produce natural-looking structures and is employed here to create geometric abstractions of the figures.
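As a minimal sketch of this rule, SciPy's Delaunay class computes the triangulation directly; here random points stand in for the tracked blob centers:

```python
# Sketch: Delaunay triangulation over stand-in points (random values replace
# the real blob centers from the tracking stage).
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(12, 2)  # placeholder (x, y) positions in normalized space
tri = Delaunay(points)

# Each row of tri.simplices is an index triple forming one triangle; by the
# Delaunay property, no input point falls inside that triangle's circumcircle.
edges = set()
for a, b, c in tri.simplices:
    for e in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted(e)))  # deduplicate edges shared by two triangles

# These edges are the "connecting lines of motion" drawn between points.
for i, j in sorted(edges):
    print(f"line from {points[i]} to {points[j]}")
```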


To achieve the triangulation and generate lines between motions, three dynamic lines of facets were isolated to outline the figures as a geometric abstraction. Connections between randomly selected points form clusters that share fewer common points.


03 Output: Motion Detection + Blob Tracking Trace + Data Mapping


Motion Detection: 
This sub-component of the output phase identifies and catalogs specific movements detected by the camera. The system distinguishes between different types of motion so that each can be accurately represented in the visual output.
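A common way to implement this step outside TouchDesigner is frame differencing. The sketch below, with an assumed difference threshold of 25 and a minimum region area of 200 pixels, illustrates the idea rather than the project's exact operators:

```python
# Sketch: frame differencing to isolate moving regions (assumed thresholds).
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The absolute difference between consecutive frames highlights pixels
    # that changed, i.e. regions where a participant moved.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Contours of the motion mask give discrete moving regions to catalog.
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = [c for c in contours if cv2.contourArea(c) > 200]  # drop noise
    prev_gray = gray
    cv2.imshow("motion", motion)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```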


Blob Tracking Trace: 
In this step, the motion data is further refined to track 'blobs', or clusters of continuous motion. This tracking helps visualize the flow and direction of movement, enhancing the visual narrative created from the data.

Each blob represents a motion-tracking point on the image. Each blob carries its own ID number, UV coordinates (normalized positions along the X and Y axes), life cycle, and width and height.
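A minimal sketch of that record as a data structure, with a greedy nearest-neighbor matcher standing in for TouchDesigner's actual blob-tracking logic (the 0.05 matching distance is an assumption):

```python
# Sketch: the per-blob record described above, plus illustrative ID matching.
import math
from dataclasses import dataclass

@dataclass
class Blob:
    id: int        # unique ID that persists across frames
    u: float       # normalized horizontal position (0..1)
    v: float       # normalized vertical position (0..1)
    width: float
    height: float
    life: int = 0  # frames this blob has survived (its life cycle)

def assign_ids(prev_blobs, detections, next_id, max_dist=0.05):
    """Match each new detection (u, v, w, h) to the nearest previous blob;
    unmatched detections start a new life cycle under a fresh ID."""
    tracked = []
    for u, v, w, h in detections:
        best, best_d = None, max_dist
        for b in prev_blobs:
            d = math.hypot(b.u - u, b.v - v)
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            tracked.append(Blob(best.id, u, v, w, h, best.life + 1))
        else:
            tracked.append(Blob(next_id, u, v, w, h))
            next_id += 1
    return tracked, next_id
```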

Data Mapping: 
This step maps the processed data onto a visual template. The blob ID and UV coordinate values are converted into corresponding pixel coordinates, the data channels are combined to overlay the data on the blobs, and the combined visual effects are then mapped onto the screen.
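The UV-to-pixel conversion itself is a simple scaling; a sketch assuming a 1920x1080 output and a bottom-left UV origin (hence the flipped V axis):

```python
# Sketch: convert a blob's normalized UV coordinates to pixel coordinates.
RES_W, RES_H = 1920, 1080  # assumed output resolution

def uv_to_pixels(u: float, v: float) -> tuple[int, int]:
    # V is flipped here on the assumption that UV space grows upward
    # while image rows grow downward.
    return int(u * RES_W), int((1.0 - v) * RES_H)

x, y = uv_to_pixels(0.25, 0.5)  # sample blob position
print(f"pixel coordinates: ({x}, {y})")  # -> (480, 540)
```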

Projection Mapping:
The final step is projection mapping, where the visual data (lines and shapes) are projected back onto the physical space, in this case a wall in the 4D Lab, creating an experience that allows participants to interact with and observe their digitized movements. The projection merges the digital output with the physical environment, making the invisible patterns of human motion both visible and tangible.
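Conceptually, aligning the rendered output with the wall amounts to a perspective (homography) warp; a sketch with OpenCV, where the corner coordinates are placeholders to be measured during calibration:

```python
# Sketch: homography-based projection alignment (placeholder corner values).
import numpy as np
import cv2

src = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])      # rendered frame corners
dst = np.float32([[40, 25], [1880, 60], [1860, 1050], [55, 1020]])  # measured wall corners

H = cv2.getPerspectiveTransform(src, dst)

def warp_for_projector(frame):
    # Pre-distort the visuals so they land undistorted on the physical wall.
    return cv2.warpPerspective(frame, H, (1920, 1080))
```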