This project started as a simple idea and quickly turned into an interesting experiment: What happens if I let AI help with everything?
Note: If you click the title of the YouTube video, it takes you to the list.
This project became a small milestone for me: my first piece where music, visuals, and code were all created with AI, then brought together into a finished video.
The code for the visuals can be found here: https://github.com/goph-R/MusicFX
The music itself — along with the album cover image — was generated using MusicGeneratorAI.com (model V5).
To give the music a visual identity, I worked on a custom audio-reactive visualizer. The visualization code was written with the help of ChatGPT (model 5.2), which also assisted in upscaling the album cover image for video use. The goal was not just to display sound, but to translate rhythm and energy into motion in a clean, futuristic way.
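For anyone curious about the mechanics, here is a minimal sketch of the kind of audio-reactive loop this involves: the browser's Web Audio API exposes an AnalyserNode whose per-frame frequency data drives the drawing. The element IDs, bar style, and palette below are illustrative placeholders, not the actual project code (that lives in the MusicFX repo linked above).

```ts
// Minimal audio-reactive bar visualizer sketch.
// Assumes an <audio id="track"> and a <canvas id="viz"> exist in the page.
const audio = document.getElementById("track") as HTMLAudioElement;
const canvas = document.getElementById("viz") as HTMLCanvasElement;
const ctx2d = canvas.getContext("2d")!;

const audioCtx = new AudioContext();
const source = audioCtx.createMediaElementSource(audio);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;                     // 128 frequency bins
source.connect(analyser);
analyser.connect(audioCtx.destination);     // keep the music audible

const bins = new Uint8Array(analyser.frequencyBinCount);

function draw(): void {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(bins);      // 0-255 energy per frequency bin

  ctx2d.clearRect(0, 0, canvas.width, canvas.height);
  const barWidth = canvas.width / bins.length;
  bins.forEach((energy, i) => {
    const barHeight = (energy / 255) * canvas.height;
    ctx2d.fillStyle = `hsl(${200 + energy / 4}, 90%, 60%)`; // cool, futuristic palette
    ctx2d.fillRect(i * barWidth, canvas.height - barHeight, barWidth - 1, barHeight);
  });
}

// Browsers require a user gesture before audio starts, so resume and begin
// drawing on the first play event.
audio.addEventListener("play", () => {
  audioCtx.resume();
  draw();
}, { once: true });
```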
For the background, I used a looping AI-generated video created with KlingAI, video model 2.6. The video adds atmosphere and depth while staying subtle enough to let the music and visualizer remain the focus.
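One simple way to get that layering (a sketch under assumptions, not necessarily how it is wired up in the repo) is a muted, looping video element stacked behind the visualizer canvas at reduced opacity; the filename here is a placeholder.

```ts
// Sketch: place a muted, looping background clip behind the visualizer canvas.
// "background-loop.mp4" is a placeholder filename, not from the original project.
const bg = document.createElement("video");
bg.src = "background-loop.mp4";
bg.muted = true;        // required for autoplay in most browsers
bg.loop = true;
bg.autoplay = true;
bg.playsInline = true;
Object.assign(bg.style, {
  position: "fixed",
  top: "0",
  left: "0",
  width: "100%",
  height: "100%",
  objectFit: "cover",
  zIndex: "-1",         // sit behind the canvas
  opacity: "0.5",       // keep the background subtle so the visualizer stays in focus
});
document.body.appendChild(bg);
```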
The technical workflow was intentionally simple and transparent:
- The visualizer was hosted locally using npm serve
- Displayed in a Brave browser
- Recorded in real time using OBS
- Final cutting and rendering done in DaVinci Resolve 20
What I find most interesting about this project is not that AI was involved everywhere, but that human decisions still guided every step. From choosing the sounds and adjusting the visuals to refining the timing and assembling the final video, AI acted as a creative collaborator rather than a replacement.
This project represents an early experiment in combining AI-generated music, AI-assisted code, and AI-generated visuals into a single cohesive piece — and it’s definitely not the last.
