This series of explorations builds a technical cognitive map and a framework of design judgment; it is an in-depth reflection on how to integrate visual expression, system structure, and interactive experience.
In this project I attempted a physical simulation of water flow with the NVIDIA Flex solver, combined with music-driven input and a stereoscopic projection setup for visualization and testing. My original intention was to use the physical expressiveness of its particle system to simulate a body of water with natural gravity and fluidity.
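As a sketch of how the music driver might be coupled to the simulation, the snippet below shows one common TouchDesigner pattern: an Execute DAT callback reads an audio-analysis channel each frame and modulates a solver parameter. The operator names ('analyze1', 'flexsolver1') and the 'windx' parameter are assumptions for illustration, not the project's actual network.

```python
# Minimal sketch of a music-driven coupling in TouchDesigner, assuming an
# Analyze CHOP ('analyze1') measuring audio level and an Nvidia Flex Solver
# COMP ('flexsolver1'). All names are hypothetical placeholders.

def onFrameStart(frame):
    level = op('analyze1')['chan1'].eval()   # current audio energy
    solver = op('flexsolver1')

    # Map loud passages to a stronger wind force so the water body visibly
    # reacts to the music; clamp the value to keep the simulation stable.
    strength = min(max(level * 4.0, 0.0), 2.0)
    solver.par.windx = strength * 0.5        # hypothetical parameter name
    return
```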
During these tests I found that, despite its physical accuracy, Flex has clear limitations in controlling particle behavior, in runtime efficiency, and in system scalability, making it unsuitable for large-scale, well-structured, continuously flowing scenes. This stage of exploration helped me identify the tension between realistic simulation and visual controllability, and gave me a basis for judgment in the method selection that followed.
In this phase I focused on selecting the most controllable and visually expressive implementation paths from the earlier explorations and building them into a complete, runnable, interactive system. After comparing and testing several approaches, I settled on a main path of a GPU particle system plus Kinect interaction, supplemented by GLSL for local visual enhancement and embedded UE5-rendered video, to realize the spatial structure and visual hierarchy of the overall water flow.
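The GPU particle path rests on a standard TouchDesigner structure: particle positions live in a 32-bit float TOP, a GLSL TOP rewrites them every frame, and a Feedback TOP closes the loop so the state persists between frames. The sketch below wires that loop in Python; the operator names are illustrative, and the actual update logic would live in the GLSL shader itself.

```python
# Build the position-feedback loop for a GPU particle system.
# Assumes it runs inside a TouchDesigner project; names are placeholders.

container = op('/project1')

update = container.create(glslTOP, 'particle_update')    # runs the update shader
state  = container.create(feedbackTOP, 'particle_state') # holds last frame's positions

state.par.top = update.name                # feedback grabs the freshly updated frame
update.inputConnectors[0].connect(state)   # shader samples the previous state as input 0

# A vertex shader (e.g. in a GLSL MAT) then reads one texel per particle to
# place point sprites, so the full particle count stays on the GPU.
```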
Within the interactive system I prioritized controllability, gravity, structural clarity, and dynamic coherence, and introduced a Kinect-based real-time control mechanism so that the waterfall's trajectory responds to the audience's movement. In parallel, I ran point-cloud import experiments as an extension direction, combining modeling data with spatial information display; this lays the groundwork for a more complete information-interaction scene in the future (such as a campus tour) and could later incorporate scan data from other buildings. I am also considering connecting, via API, to Stable Diffusion and other large models that do not require licensing, as a direction for continued development.
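As a sketch of the Kinect coupling: a Kinect CHOP exposes tracked joint positions as channels, which a per-frame callback can map onto whatever parameter steers the waterfall. The channel name below follows the Kinect CHOP's 'p1/joint:axis' convention; the 'particles' operator and its 'Forcex' parameter are hypothetical.

```python
# Sketch of audience-driven control, assuming a Kinect CHOP named 'kinect1'
# with skeleton tracking enabled. Names are illustrative placeholders.

def onFrameStart(frame):
    k = op('kinect1')
    if k.numChans == 0:
        return  # no tracked player this frame

    hand_x = k['p1/hand_r:tx'].eval()   # right-hand horizontal position

    # Steer the waterfall toward the visitor's hand, with a small gain so
    # the response feels continuous rather than jumpy.
    op('particles').par.Forcex = hand_x * 0.3
    return
```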
The focus of this phase is integration, optimization, and judgment, ensuring that the final presentation is not only visually appealing but also logically stable and extensible.
This project is a systematic exploration centered on TouchDesigner. I tried a wide range of visual implementation paths and conducted cross-platform experiments with Kinect, Leap Motion, and GLSL in order to assess the strengths, boundaries, and applicable scenarios of each tool.
Through comparative analysis I established an evaluation framework based on multi-dimensional criteria (controllability, system load, interactivity, and spatial adaptation) that clarified the logic of my technology selection. I ultimately fixed on a main path centered on GPU particles and Kinect interaction, and explored point-cloud model import and information display as extension directions.
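To make that framework concrete, a weighted-score comparison of this kind can be expressed in a few lines of Python. The criteria are the ones named above; the weights and the per-candidate scores are invented purely for illustration.

```python
# Illustrative only: criteria from the text, weights and scores invented.
WEIGHTS = {'controllability': 0.35, 'system_load': 0.25,
           'interactivity': 0.25, 'spatial_adaptation': 0.15}

candidates = {
    'flex_solver':   {'controllability': 2, 'system_load': 2,
                      'interactivity': 3, 'spatial_adaptation': 3},
    'gpu_particles': {'controllability': 5, 'system_load': 4,
                      'interactivity': 5, 'spatial_adaptation': 4},
}

def score(marks):
    # Weighted sum across all criteria for one candidate.
    return sum(WEIGHTS[c] * v for c, v in marks.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> 'gpu_particles' under these example weights
```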
The research process not only built up my technical implementation skills but also deepened my understanding of the relationships among user behavior, spatial logic, and visual expression, establishing a way of advancing the project from the dual perspectives of “interaction logic + visual expression”.