Uniting sound and visuals?


I am wondering if anybody here combines their visual art with music or sound at all? Not as videos per se, but I am thinking more along the lines of interactive pieces. Anything that harmonizes design and audio. Whatever you have made, I’d love to hear about it.

I ask because I am currently working on some pieces that I have hosted on my own webpage, and was wondering if anyone is doing anything similar. For reference, these are the two works I myself have done: Singing Wavicles, Syntheogon. (Best played on a computer).

If anyone has anything to share, what prompted you to decide to use both together? What do you think is possible in this kind of design?


currently exploring career paths with audio/video /// video production/projections /// ive been a glitchy AV nerd for a while and am kinda running a one-person-show with my design/video/audio work at this point in my life.


great stuff!! what programs are you using?


mainly just Adobe After Effects; a lot of the fuckery with the visuals is from source footage where i might record some AV mixer feedback loops (got some analog mixers i’ve been circuit bending), but most of the glitchy stuff in the gabber video was done with a Roland V-1 and some HA-5/HI-5 SDI converters/send loops/splitters, projected onto a brick wall and recorded w/ an iPhone.

it’s kinda a clusterfuck of digital video editing, analog and digital glitch/feedback loops, source footage, etc



if you haven’t seen these guys’ stuff you will probably enjoy it


Music : Yaporigami
Visual : Kezzardrix

installation piece for audio-visual exhibition “Chromesthesia Resonance”

Curator: YEH Ting-Hao
Exhibition Dates: 2018-09-01 ~ 2018-11-18
Place: Digiark Gallery, National Taiwan Museum Of Fine Arts

We attempt to present a work focused on the collapse and formation of rhythm through music and images. Gestalt collapse is an auditory and visual phenomenon in which you lose cognition of a ‘form’, such as a rhythm, once it has been properly recognised. In ancient Greece, rhythm was called rhythmos, referring to the formation of ‘forms’ across a wide range of fields, from the placement of objects to the shaping of human character. When a rhythm is recognised, human nature or personal emotions can be evoked; when it is lost, these are left in an ambiguous state until a new ‘form’ can be found. The fluctuation on this boundary has a mysterious charm. Through the collapse and formation of rhythm along the time axis, we would like to question how ambiguous our perceptions are.


Thanks for sharing. That was fantastic looking on my cellphone. The installation must have been mind blowing.


I’ve been dabbling. Currently drawing quite naive animations on my iPad and lining them up with music I make. It takes time, but will get quite good, I reckon.

I made this track and video about a year ago. Eurorack synth + Resolume + Ableton Live. Very simplistic stuff compared to the other great things posted in this thread! :slight_smile:

Oh, and this one. Eurorack + Zwobot in Ableton Live:

I’m not exactly a man of ambition and big visions or even hopes, but if there’s ONE thing I’d like to accomplish in life, it’s getting really good at music making and merging it with video making. Only problem is I need cash to live, and so a job, and then there’s just not enough time for video making or music making (and I’m nowhere near good enough to be doing video or music for a living). Maybe I’ll get a decent video out after I’ve retired. :stuck_out_tongue: Having a job fucking always gets in the way of the good stuff, but then again, I wouldn’t survive (with dignity) without one.


I am doing sound-reactive visuals for my live sets. I’ve been working with Unity and I’ve achieved this kind of result (this has been edited, but it’s pretty similar to the live ones).

edit: here is a live snippet - https://www.instagram.com/p/Bsv2dnuD0-U


I love these visuals! Is Unity decent to work with in terms of live audio-reactive 3d stuff? I’ve been torn between digging deeper into TouchDesigner or Max (along with GL shaders) and trying out Unity.


I don’t know either TD or Max (Max seems kinda slow/CPU-intensive for graphics; I use it for MIDI), sorry. If you can script a little bit of C#, or you’re not afraid of learning it, you’d achieve results pretty fast with Unity, and there are a lot of free, ready-to-use scripts in the asset store or on GitHub (like for sound analysis). There are also a lot of free resources if you, like me, can’t model, like https://www.lincoln3dscans.co.uk/.
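For anyone curious what those sound-analysis scripts actually do: at heart it’s just measuring the energy in a frequency band of the incoming audio buffer and mapping it to a visual parameter. Here’s a minimal, engine-agnostic sketch in Python (all names here are mine for illustration; in Unity you’d read the buffer from the audio engine rather than generate it):

```python
import math

def band_energy(samples, sample_rate, lo_hz, hi_hz):
    """Naive DFT: sum the magnitudes of the bins between lo_hz and hi_hz."""
    n = len(samples)
    energy = 0.0
    for k in range(n // 2):
        freq = k * sample_rate / n
        if lo_hz <= freq <= hi_hz:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += math.hypot(re, im) / n
    return energy

def scale_for_visual(energy, gain=4.0, base=1.0):
    """Hypothetical mapping: drive e.g. an object's scale from band energy."""
    return base + gain * energy

# Demo: a 100 Hz sine should show up in a 50-200 Hz band, not a 1-2 kHz band.
sr = 4000
buf = [math.sin(2 * math.pi * 100 * t / sr) for t in range(256)]
low = band_energy(buf, sr, 50, 200)
high = band_energy(buf, sr, 1000, 2000)
print(low > high)  # the bass band dominates for a 100 Hz tone
```

In a real patch you’d run this per frame (with an FFT instead of the naive DFT above) and smooth the result so the visuals don’t jitter.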

Also, the best thing is that Unity is free. Hurry before they go public ( https://variety.com/2019/gaming/news/unity-technologies-ipo-report-1203135985/ ).


Right on. Yeah Max is trickier to optimize on GPU in my limited experience but I still want to learn more.
Thanks for the info, downloading Unity now so I can give it a shot sometime!


Hello! this is my first post on the forum. I found this thread a while back and some awesome stuff on here, wanted to say thanks to @shoeg for linking those 3d scans, I actually used one for this reactive audio visual film:

I have an Instagram that I use for short music/sound/visual stuff - it’s all done in TouchDesigner (just the free version) and Ableton; this stuff is all real-time recordings: https://www.instagram.com/laurence.cleary/

Having read this thread kinda interested to try unity now but that might be biting off more than I can chew while learning TD!


this is nice!
followed :wink:


btw, finally I got a chance to record a snippet of my live set, so here’s a chance to see the reactive visuals.


loving this - just wondering, you using midi and audio out to drive the visuals in unity?


i only use the audio mix.


I apologize for bumping a decrepit thread I started a few years back, and with broken links at that, but I have been learning Blender pretty much every day since starting this thread, and have made some progress with uniting sound and visuals.

The Sonoluminescent Soul (Playlist)

If anyone would like to learn or know anything about the process behind turning audio wave curves into light wave curves, I could go in depth. Pretty much, though, Blender has a simple operator, ‘Bake Sound to F-Curves’, which lets you turn essentially any animation into an audio-driven visualizer.
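Under the hood, baking sound to animation curves amounts to taking an amplitude envelope of the audio and writing one keyframe value per video frame. A rough, Blender-independent sketch of that idea in Python (the function name and the attack/release defaults are mine, not Blender’s):

```python
import math

def bake_audio_to_curve(samples, sample_rate, fps=24, attack=0.8, release=0.95):
    """Turn raw audio samples into one envelope value per video frame.

    attack/release smooth the envelope so the resulting 'light curve'
    rises and falls gradually instead of flickering (illustrative values).
    """
    per_frame = sample_rate // fps
    env = 0.0
    curve = []
    for start in range(0, len(samples) - per_frame + 1, per_frame):
        chunk = samples[start:start + per_frame]
        peak = max(abs(s) for s in chunk)
        # simple attack/release smoothing, like an envelope follower
        coeff = attack if peak > env else release
        env = coeff * env + (1 - coeff) * peak
        curve.append(env)
    return curve

# Demo: half a second of 220 Hz tone followed by half a second of silence.
sr, fps = 2400, 24
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr // 2)]
silence = [0.0] * (sr // 2)
curve = bake_audio_to_curve(tone + silence, sr, fps)
print(len(curve))  # one value per frame: 24
```

Each value in `curve` would become a keyframe on whatever property you’re driving, e.g. a lamp’s brightness or an object’s scale.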

I just find the concept of having light and sound as one really fascinating, and it seems like largely unexplored territory that is ripe to expand on.


This guy has done visuals for Lorn in the past

My favorite is this one


Also, Max Cooper has crazy visuals in his videos; not quite audio reactive, but stunning nevertheless

Visual artist is Kevin McGloughlin

And this track by Weval, holy shit