Special Topics Case Study

Research & Activity Documentation

Wanqing Li


Introduction


Using TouchDesigner as the core platform, this project explores how to build immersive real-time systems through visual, physical, and interactive mechanisms. I move gradually from technical testing to design decision-making, and from visual presentation to systemic understanding, advancing the exploration through the double-diamond design model combined with inductive and deductive reasoning.

This series of explorations builds a technical cognitive map and a framework for design judgment; it is an in-depth reflection on how to integrate visual expression, system structure, and interactive experience.

Module 1: Concept


Research

In the first phase of exploration, I focused on TouchDesigner's core features and potential, identifying eight key strengths for building interactive visual systems: real-time interaction, data integration (e.g., APIs, sensors, and audio), generative design, and immersive artistic expression, along with its modular node structure, cross-disciplinary integration, GPU-driven high-performance output, and good scalability. Together, these provide strong support for the complex interactive logic and cross-platform integration realized later in the project.
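
As a concrete illustration of the data-integration strength, the sketch below shows the typical TouchDesigner pattern of mapping a live audio level onto a visual parameter from Python. It is a minimal example under assumptions: the Analyze CHOP 'level' and the particle component's custom 'Birthrate' parameter are placeholder names, not operators from my project files.

    # CHOP Execute DAT callback, attached to an Analyze CHOP named 'level'
    # that measures audio RMS. Operator and parameter names are placeholders.
    def onValueChange(channel, sampleIndex, val, prev):
        particles = op('particles1')  # component exposing a custom 'Birthrate' parameter
        # map the 0..1 audio level to a particle birth rate between 10 and 510
        particles.par.Birthrate = 10 + val * 500
        return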

Action Research Phase 1

Automatic particle generation combined with a touch screen (see the sketch below)
Projection mapping
Live audio visualization
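
A minimal sketch of the touch-screen control pattern, assuming a Panel CHOP named 'touchpos' that exposes normalized u/v touch coordinates and a Geometry COMP 'emitter1' acting as the particle emission source; all names are illustrative, not taken from the project network.

    # CHOP Execute DAT watching the 'touchpos' Panel CHOP (channels u, v)
    def onValueChange(channel, sampleIndex, val, prev):
        emitter = op('emitter1')  # Geometry COMP used as the emission source
        if channel.name == 'u':
            emitter.par.tx = (val - 0.5) * 10  # map 0..1 across a 10-unit stage
        elif channel.name == 'v':
            emitter.par.ty = (val - 0.5) * 10
        return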

Action Research Phase 2

Real-time, automated generative art driven by APIs
Remote control of the interactive projection from a touchpad or cell phone (see the OSC sketch below)
Presenting the visualization in XR (VR/AR)
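
For the phone/touchpad remote-control direction, the usual route is OSC: a phone app sends messages that an OSC In DAT receives. The sketch below assumes a fader at the OSC address '/fader1' and a Level TOP 'level1' in the projection chain; those names are assumptions made for illustration.

    # OSC In DAT callback: route a phone fader to the projection's brightness
    def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
        if address == '/fader1' and args:
            op('level1').par.opacity = args[0]  # 0..1 value from the phone fader
        return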

Module 2: Prototype


Research

I systematically compared TouchDesigner with Processing, ShaderToy, After Effects, MadMapper, and other tools in order to clarify each platform's application scenarios, judge where its advantages end, and identify integration methods. This laid the foundation for subsequent system construction and technology selection, and deepened my understanding of the possibility and necessity of cross-platform collaboration.

Action Research Phase 1


Action Research Phase 2


Project 2: Prototype


In this project, I implemented a physical simulation of water flow using the NVIDIA Flex solver, driven by music and tested for visualization on a stereo projection setup. My original intention was to exploit the physical expressiveness of its particle system to simulate a body of water with natural gravity and fluidity.
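
A sketch of the music-driven coupling tried here: low-frequency energy from an Analyze CHOP pulses the Flex simulation. The solver's force parameter below is a stand-in custom parameter, since the exact Flex controls exposed vary by setup; all names are placeholders.

    # CHOP Execute DAT on an Analyze CHOP 'kick' isolating low-frequency energy
    def onValueChange(channel, sampleIndex, val, prev):
        solver = op('flexsolver1')  # Nvidia Flex solver component (placeholder name)
        # push particles upward on loud hits; 'Liftforce' stands in for whatever
        # force control the solver setup actually exposes
        solver.par.Liftforce = val * 20.0
        return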

During the process, I found that although Flex has the advantage of physical accuracy, it shows clear limitations in controlling particle behavior, in runtime efficiency, and in system scalability, making it unsuitable for large-scale, well-structured, continuously flowing scenes. This stage of exploration helped me identify the tension between realistic simulation and visual controllability, and provided a basis for judgment in my subsequent choice of methods.


Module 3: Product


Research

In this phase, I focused on selecting the most controllable and visually expressive implementation paths from the previous explorations and building them into a complete system that can run and be interacted with. After comparing and testing various approaches, I settled on a main path of a GPU particle system plus Kinect interaction, supplemented by local GLSL visual enhancement and embedded UE5-rendered video, to realize the spatial structure and visual hierarchy of the overall water flow.
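
A minimal sketch of how the Kinect-to-GPU-particle coupling can be wired in Python, assuming a Kinect CHOP 'kinect1' (channels following TouchDesigner's 'p1/hand_r:tx' naming convention) and a GLSL TOP 'glsl_particles' whose first uniform slot holds the force center; the uniform layout is an assumption, not my actual network.

    # Execute DAT: every frame, forward the tracked right hand to the shader
    def onFrameStart(frame):
        k = op('kinect1')
        hx = k['p1/hand_r:tx']
        hy = k['p1/hand_r:ty']
        if hx is not None and hy is not None:
            glsl = op('glsl_particles')
            glsl.par.value0x = hx[0]  # uniform slot 0 = force-center x
            glsl.par.value0y = hy[0]  # uniform slot 0 = force-center y
        return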

I focused on controllability, gravity, structural clarity, and dynamic coherence in the interactive system, and introduced a real-time Kinect control mechanism so that the audience's movement shapes the trajectory of the waterfall. In parallel, I ran a point cloud import experiment as an extension direction, trying to combine modeling data with spatial information display. This lays the groundwork for building a more complete information-interaction scene in the future (such as a campus tour), possibly incorporating scan data from other buildings. For continued development, I will also consider connecting to Stable Diffusion via API calls, and to large models that do not require licensing.
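
For the point cloud extension, a sketch of the import step, assuming TouchDesigner's Point File In TOP and a hypothetical scan file; a Geometry COMP can then instance one particle per scanned point to fold the scan into the same visual system.

    # Load a building scan as a point cloud texture (path is hypothetical)
    scan = op('pointfilein1')                # Point File In TOP
    scan.par.file = 'scans/campus_building.ply'

    # Instance the particle geometry once per scanned point
    geo = op('geo_points')                   # Geometry COMP rendering the points
    geo.par.instancing = True
    geo.par.instanceop = scan.path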

Action Research Phase 1


Action Research Phase 2

Stable Diffusion and API calls (sketched below)
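
A minimal sketch of the planned Stable Diffusion API access, assuming a locally running Automatic1111 web UI started with its --api flag; the endpoint and payload follow that project's public txt2img schema, and the output path is a placeholder that a Movie File In TOP could watch.

    import base64, json, urllib.request

    def generate(prompt, out_path='sd_frame.png'):
        payload = json.dumps({'prompt': prompt, 'steps': 20}).encode()
        req = urllib.request.Request(
            'http://127.0.0.1:7860/sdapi/v1/txt2img',
            data=payload, headers={'Content-Type': 'application/json'})
        with urllib.request.urlopen(req) as resp:
            images = json.load(resp)['images']    # base64-encoded PNGs
        with open(out_path, 'wb') as f:
            f.write(base64.b64decode(images[0]))  # save the first image

    generate('a waterfall of light particles, long exposure')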

Project Product


The focus of this phase is integration, optimization, and judgment: ensuring that the final presentation is not only visually appealing but also stable in logic and extensible.


Research Summation


This project is a systematic exploration centered on TouchDesigner. I tried a wide range of visual implementation paths and conducted cross-platform experiments with Kinect, Leap Motion, and GLSL, summarizing the advantages, boundaries, and applicable scenarios of these tools.

Through comparative analysis, I established an evaluation framework based on multi-dimensional criteria such as controllability, system load, interactivity, and spatial adaptation, clarifying the logic of technology selection. Finally, I settled on a main path centered on GPU particles and Kinect interaction, and explored point cloud model import and information display as extension directions.
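
Purely as an illustration of that evaluation framework, the sketch below scores two candidate paths against weighted criteria; the weights and ratings are invented for demonstration and only echo the qualitative conclusions above.

    # Weighted multi-criteria scoring (weights and ratings are illustrative only)
    CRITERIA = {'controllability': 0.35, 'system_load': 0.25,
                'interactivity': 0.25, 'spatial_adaptation': 0.15}

    def score(ratings):
        # ratings: criterion -> 0..5
        return sum(CRITERIA[c] * ratings.get(c, 0) for c in CRITERIA)

    candidates = {
        'nvidia_flex':   {'controllability': 2, 'system_load': 2,
                          'interactivity': 3, 'spatial_adaptation': 3},
        'gpu_particles': {'controllability': 5, 'system_load': 4,
                          'interactivity': 5, 'spatial_adaptation': 4},
    }
    for name, ratings in candidates.items():
        print(name, round(score(ratings), 2))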

The whole research process not only built up technical implementation capability but also strengthened my understanding of the relationships among user behavior, spatial logic, and visual expression, establishing a way of thinking that advances the project from the dual perspectives of interaction logic and visual expression.
