Project title:

Surreal Vision

One sentence description: Can you summarize your idea in one sentence? Stick to the facts -- what are you planning to make?

I plan to use a ToF imager sensor with p5.js to create visual effects that synchronize with sound.

Note: As for the visualizations, I plan to choose between Diffusion Models, Generative Adversarial Networks (GANs), or… I am still working on this (I may need help).

Project abstract: ~250 word description of your project.

This project aims to create a surreal, immersive experience by combining visualization and sound. Using an 8x8 Time-of-Flight (ToF) sensor, the system will capture live spatial data and generate visuals in p5.js that respond to sound. The ToF sensor picks up spatial details that are translated into reactive, dreamlike visuals, shifting dynamically with the audio to create an interactive, sensory experience. For the visuals, I'm exploring tools like Diffusion Models and Generative Adversarial Networks (GANs) to create layered, evolving patterns that mirror changes in sound. I'm still working out which model will work best for the effect (and might need some advice on this), but my goal is to design a fluid, surreal environment that turns ambient sounds into captivating, reactive visuals.
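
As a rough sketch of the audio side (not the final design), here is a minimal p5.js example that uses the p5.sound library to analyze live microphone input with an FFT and drive a simple placeholder visual. The drawing is just a stand-in for the generative-model layer described above.

```js
// Minimal p5.js sketch: microphone -> FFT -> reactive visual.
// Requires the p5.sound addon. The drawing here is a placeholder
// for the eventual diffusion/GAN-driven visuals.
let mic, fft;

function setup() {
  createCanvas(600, 600);
  mic = new p5.AudioIn();    // live microphone input
  mic.start();
  fft = new p5.FFT(0.8, 64); // smoothing, 64 frequency bins
  fft.setInput(mic);
}

function draw() {
  background(0, 40); // translucent background leaves dreamy trails
  const spectrum = fft.analyze(); // array of 0-255 bin amplitudes
  noFill();
  // One circle per frequency bin, radius driven by that bin's energy.
  for (let i = 0; i < spectrum.length; i++) {
    const r = map(spectrum[i], 0, 255, 10, width / 2);
    stroke(map(i, 0, spectrum.length, 120, 255), 100, 200, 120);
    circle(width / 2, height / 2, r);
  }
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio starts
}
```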

Inspiration: How did you become interested in this idea?

The first time I learned about this sensor, I was inspired and immediately wanted to explore its potential in a creative project. One of the main reasons I took this class was to dive deeper into Diffusion Models and GANs, and I saw this project as the perfect opportunity to develop those skills and bring them together. I’m excited to explore more visualization techniques that can be paired with machine learning, and this project offers a great way to do just that.

Visual reference: Drawings, photos, artworks, texts, or other media that relate to your idea.

Diffusion Links:

Stable Diffusion web UI (GitHub): https://github.com/AUTOMATIC1111/stable-diffusion-webui

Inspiration from the README: https://www.cunicode.com/works/confusing-coleopterists

One of my main sources of inspiration:

IMG_2726.mov

Anadol’s Website & documentation on this project:

https://refikanadol.com/works/unsupervised/

Interesting Article I referenced about this Art piece:

https://amplify.nabshow.com/articles/machine-learning-moma-refik-anadol/

Refik Anadol. Unsupervised — Machine Hallucinations — MoMA. 2022.

More about the sensor: I will use the **SparkFun Qwiic ToF Imager - VL53L5CX** to build the visualizer and sound system. It provides multizone distance measurements in up to 8x8 zones with a wide 63° diagonal FoV, which can be reduced in software.
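
The sensor itself would be read by a microcontroller (SparkFun publishes an Arduino library for the VL53L5CX) and streamed to the browser. One possible route is the hypothetical sketch below, which uses the Web Serial API available in Chromium-based browsers and assumes the microcontroller prints each 8x8 frame as a single line of 64 comma-separated distances in millimeters; `connectSensor` and `latestFrame` are my own placeholder names.

```js
// Hypothetical sketch for getting VL53L5CX data into the browser.
// Assumes the microcontroller prints each 8x8 frame as one line of
// 64 comma-separated distances in mm, e.g. "812,790,...,1204\n".
// Uses the Web Serial API (Chromium-based browsers only).
let latestFrame = new Array(64).fill(0); // most recent 8x8 distances

async function connectSensor() {
  const port = await navigator.serial.requestPort(); // user picks the device
  await port.open({ baudRate: 115200 });
  const reader = port.readable.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      const values = line.trim().split(',').map(Number);
      if (values.length === 64 && !values.some(isNaN)) {
        latestFrame = values; // hand the frame to the p5.js draw loop
      }
    }
  }
}
```

Note that `navigator.serial.requestPort()` has to be called from a user gesture such as a button click; the p5.serialport library with its companion serial server would be an alternative way to bridge the sensor and the sketch.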

Audience: Who are you making the project for? How do you expect your audience to interact with your piece? What will their experience be like?

This project is designed for people who are interested in immersive visual experiences. The audience will interact by moving their hands over the sensor, which will trigger visual effects in response to their motions. Their experience will be a blend of motion and visuals: real-time patterns shift based on their gestures, creating a dynamic and personal interaction.
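
To make that gesture-to-visual mapping concrete, here is a minimal placeholder sketch, assuming the `latestFrame` array from the serial sketch above: each of the 64 sensor zones becomes a circle whose size and color reflect how close a hand is to that zone. The generative-model visuals would eventually replace this simple grid.

```js
// Placeholder interaction sketch: map the 8x8 depth frame to a grid
// of circles whose size reflects how close a hand is to each zone.
// Assumes `latestFrame` is filled by the serial-reading sketch above.
let latestFrame = new Array(64).fill(2000); // distances in mm
const MAX_MM = 1500; // anything farther than this counts as "no hand"

function setup() {
  createCanvas(480, 480);
  noStroke();
}

function draw() {
  background(10);
  const cell = width / 8;
  for (let row = 0; row < 8; row++) {
    for (let col = 0; col < 8; col++) {
      const d = latestFrame[row * 8 + col];
      // Closer hand -> larger, warmer circle in that zone.
      const closeness = constrain(1 - d / MAX_MM, 0, 1);
      fill(255 * closeness, 120, 255 - 255 * closeness);
      circle(col * cell + cell / 2, row * cell + cell / 2, closeness * cell);
    }
  }
}
```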

Challenges: What is your biggest technical and/or conceptual challenge you anticipate?

The Visualizations.

Diffusion Models and Generative Adversarial Networks (GANs) are techniques I want to learn more about and use to produce work. I don't have a lot of experience with them, but since this is something I am passionate about, I feel it can work.

Sound.

I have never worked with sound before, but I think this project is the perfect time to start.