Open Form: Songs as Systems


The Life of Pablo also sprang to mind with regard to OP’s point: an album that’s never totally finished, the way games get updates.

I discussed releasing tracks as stems with a friend of mine when we were talking about ‘what’s next’. I didn’t realise (but should have guessed) that it’s already a thing. A bit niche, but it would be nice to mix and make dubs of a track you’ve bought. I can understand why an artist might not want to do this, though.


I’m actually secretly hoping everyone starts to make music that sounds like motown again and then the world ends and we die happy


this is really interesting. i don’t know much about coding and stuff, but reading this reminded me of this project:

i’ve only read the brief introduction, but basically the computer listens to an album 30 times and then randomly makes new tracks using samples from the songs. they have ‘re-done’ albums by The Beatles, Meshuggah, NOFX & more
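A toy version of that idea can be sketched in a few lines. To be clear, this is not how the project above actually works (it trains a neural network on raw audio); it’s just the naive, concatenative version of “listen to a record, then stitch new tracks out of its pieces”, and all the names and numbers are illustrative:

```python
# Naive "recombine an album into new tracks" sketch: chop the source
# into grains, then random-walk through them, preferring jumps to grains
# whose opening sample is close to where the last grain left off.
import random

def regenerate(samples, grain=4, length=12, seed=0):
    """Chop `samples` into fixed-size grains and stitch a new sequence."""
    rng = random.Random(seed)
    grains = [samples[i:i + grain]
              for i in range(0, len(samples) - grain + 1, grain)]
    out = list(rng.choice(grains))
    for _ in range(length // grain - 1):
        last = out[-1]
        # prefer grains that start near the value we just ended on
        candidates = sorted(grains, key=lambda g: abs(g[0] - last))[:3]
        out.extend(rng.choice(candidates))
    return out

fake_song = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]  # stand-in for audio samples
print(regenerate(fake_song))
```

Swap the integer list for actual PCM sample arrays (and the crude “closeness” heuristic for a real similarity measure) and you get the grandparent of this kind of system: concatenative synthesis.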


I’m really glad someone brought this up. Procedural generation of music is definitely a very interesting concept and, with the recent advancements and huge interest in artificial intelligence, it’s also getting a lot closer. Extrapolating this to the context of The Life of Pablo brings up a whole new debate about what degree of control an artist should exercise over their finished work. That said, I don’t think proceduralism should remove an artist’s authority over what something sounds like, but it can definitely be a powerful tool, especially for more conceptual art.

I see a huge future for it in interactive media, such as video games, where the soundtrack can be tailored to fit the player character, mood, or decisions made. Naturally, this doesn’t need to be confined to sound and music; the same sort of procedural logic can be applied to visual elements as well.
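The core of that adaptive-soundtrack idea is small: the mix becomes a function of game state. A minimal sketch, with entirely made-up state fields and layer names (no real engine’s API is being shown here):

```python
# Adaptive-music layering sketch: map game state to per-layer volumes,
# so the soundtrack "follows" the player. All names are illustrative.

def mix_layers(state):
    """Map a game-state dict to per-layer volumes in the range 0.0-1.0."""
    layers = {"pads": 1.0, "drums": 0.0, "brass": 0.0}
    if state["enemies_near"] > 0:
        # drums fade in with threat level, capped at full volume
        layers["drums"] = min(1.0, state["enemies_near"] / 5)
    if state["health"] < 0.3:
        layers["brass"] = 1.0   # danger layer comes in
        layers["pads"] = 0.4    # calm layer ducks out of the way
    return layers

print(mix_layers({"enemies_near": 2, "health": 0.9}))
# {'pads': 1.0, 'drums': 0.4, 'brass': 0.0}
```

In a real game these volumes would be smoothed over time and quantised to bar boundaries so transitions land musically, but the mapping itself is this simple.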

But then again, how far should this be allowed to go? If every piece of art you interact with can be tailored to you, imagine tailoring music to the tastes of some Nazi asshole: you’d just end up with a version of Ultralight Beam where a perfectly synthesised Kanye voice, indistinguishable from the real one, denies the Holocaust and shouts “sieg heil” over the backing track.


I was just reading a few books about the first generation of “free improvisers” (Ben Watson’s biography of Derek Bailey, David Toop’s Into the Maelstrom, Richard Scott’s Free Music, Improvisation and the Avant-Garde), and it seems that a lot of free improvisers already thought of music as an open-ended process rather than a final form. John Cage’s I Ching-inspired experiments in indeterminacy can also be seen as attempts to liberate “composition” by putting process in place of a final, stable form. Though not about music, Robert Morris’s essay “Anti Form” and MoMA’s 1970 exhibition Information seem to be relevant links between action, process, and system.


I find this stuff really interesting and did a module on generative music and the like at uni, but it scares me a bit. The one thing humans can do that (to my knowledge) AI can’t do properly yet is create from imagination. I don’t want the computers to start winning at that too hahahaha.

The open-ended process behind Kanye’s The Life of Pablo is sick, but I can’t think of many artists, particularly underground/electronic artists in the current economic climate, who would have the time to do this with the dramas of life in the way.


These are great references that I will definitely seek out.
Exactly. Maybe the question today is about mass-distributing this open-ended music, and whether it really has the potential to add value for general listeners or will stay as “experiments” forever.


In my mind it’s almost certain that we’ll hit a point of being able to perfectly synthesise copies of someone’s voice (Adobe has already demonstrated this functionality), so an interesting question to ask is: who is working to ensure we can tell whether audio is authentic? Should we start cryptographically signing it? This should probably be a different thread.
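The signing part, at least, needs no new technology. A minimal sketch using Python’s standard library, with a shared secret standing in for what a real deployment would do with asymmetric keys (so anyone could verify without being able to forge):

```python
# Sketch of verifiable audio: publish an authentication tag over the
# file's bytes alongside the release. Stdlib HMAC stands in here for a
# proper public-key signature scheme; the key name is hypothetical.
import hashlib
import hmac

SECRET = b"artist-signing-key"  # placeholder; real systems use key pairs

def sign_audio(audio_bytes):
    """Produce a hex tag binding these exact bytes to the key holder."""
    return hmac.new(SECRET, audio_bytes, hashlib.sha256).hexdigest()

def verify_audio(audio_bytes, tag):
    """Constant-time check that the bytes match the published tag."""
    return hmac.compare_digest(sign_audio(audio_bytes), tag)

clip = b"\x00\x01fake-pcm-data"
tag = sign_audio(clip)
print(verify_audio(clip, tag))         # True: untampered
print(verify_audio(clip + b"!", tag))  # False: any modification fails
```

The hard problem isn’t the cryptography; it’s distribution, i.e. getting players and platforms to check signatures and getting listeners to care when a check fails.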


I would love to see a world where this concept is applied in places like restaurants, stores, airports, etc. Instead of radio/streaming or a locally-hosted playlist that shuffles or repeats, hire an artist to create a generative-ish piece tailored to the space and the atmosphere they’re after, sort of like being an architect but with sound. There could be interesting applications of psychoacoustics in there, like how shop owners now sometimes play high-pitched tones to keep teens away (the tones are too high for most adults to hear), but hopefully turned towards positive and non-adversarial ends.

Tools for this kind of music becoming more available and easier to use would definitely be a huge step towards getting it into the public consciousness. Ableton’s Session View follow actions and a lot of its MIDI effects start to gesture at these possibilities, although I don’t think many people get deep into what you can actually do. Democratisation of the tools would also help with the (crucial) quest to escape the gimmick. Part of escaping the gimmick is making it feel natural, and someone who’s not a “head” might be more likely to hit on that…


Or maybe we should just question whether this authenticity really matters at all.


This point of view is so interesting to me as an architecture student, and it reminds me of the story of how Brian Eno “invented” ambient after being at the Cologne airport, admiring the architecture and views while some generic pop music was playing through the speakers. He thought there had to be some kind of music able to enhance a space in the background without demanding attention. So in a way, ambient music had an architectural dimension from the beginning.


Most interesting topic on the forum rn


I remember patten talking about songs/live sets as a never-ending open form, so more of a stream than a file.


I can think of some interesting examples. Beck released an album of sheet music and let people record their own versions a few years back. OPN released the MIDI for Garden of Delete and, I believe, is packaging a compilation of songs people have made with it.

It would be interesting to see a modern version of an orchestrion (a player piano with other instruments as well): an agreed-upon set of tools with which musicians can program songs. Then at home people could fiddle with tempo, key, etc.
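The “fiddle at home” part falls out almost for free once the song ships as note data instead of audio. A minimal sketch, assuming a made-up event format of (MIDI pitch, beat position, duration in beats):

```python
# "Modern orchestrion" sketch: the song is data, and key/tempo are just
# transforms the listener applies at playback time. Format is invented
# for illustration: (midi_note, start_beat, duration_in_beats).
song = [
    (60, 0.0, 1.0),  # C4
    (64, 1.0, 1.0),  # E4
    (67, 2.0, 2.0),  # G4
]

def render(events, semitones=0, bpm=120):
    """Transpose by `semitones` and convert beats to seconds at `bpm`."""
    sec_per_beat = 60.0 / bpm
    return [(note + semitones, beat * sec_per_beat, dur * sec_per_beat)
            for note, beat, dur in events]

print(render(song, semitones=2, bpm=90))  # up a whole step, slower
```

This is essentially what MIDI files already are; the missing piece the post describes is the agreed-upon instrument set on the playback end, so every home render of the same data sounds like the same band.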


I think if you think about music as the product of a community or a scene rather than the product of individual people, then in some ways this situation already exists.

Dub culture, remixing, edits/bootlegs/white labels, sampling, etc. provide a ton of different ways for the life of a song to evolve and transform. I can’t think of an example off the top of my head, but think of a song where the version everybody knows turns out to actually be a remix of the original track. Which one is the “real version”?

Or think of a genre or subgenre that is formed from a particular set of samples or elements. If I show you three tracks by three different artists, and each track uses the same drum break sample, the same bass timbre, and maj7 chord stabs, what is the relationship between those songs, really? Are they totally unique individual creations? Are they different versions of the same song? Maybe that’s just a semantic difference.

Don’t want to get too political, but I think the only way forward for that sort of amorphous creation is a future where intellectual property is less of a commodity and the idea of an “artist” isn’t so tied to a person’s livelihood and all the individualised branding that comes with it.

Internet memes might be a good example of this sort of thing. No one can really claim ownership of a meme; we just all kind of collectively produce, reproduce, and remix them.


what @snakehead, @ebb, and @idlestate recapitulated w/r/t how things like jazz, ambient, and remixes already abstract the ‘being’ of a piece is what first came to mind for me, and i agree. especially with jazz, which is harder to envision without understanding the amount of constant touring those musicians did, with so many variations of their songs

i find the first thing mentioned by @idlestate in the OP, questioning the aesthetics of a square album cover and the like, particularly interesting. breaking barriers with aesthetics, release formats, etc. are the things that have been and will continue to develop significantly (especially with the renaissance of VR approaching mainstream levels soon)

that being said, i think musically (in both the present and much of the past), the ‘being’ of music has been free of those expectations and logistical qualifications for a while. i just don’t believe it’s promoted as much as it should be, due to the monotony of music journalism (which should be bringing these ideas to form): arbitrary genre mapping and subjective feelings

i’m very excited, and perhaps a bit scared, of what the future holds for VR technology being integrated into the abstracts of sound on the large scale we’re discussing. a final point: inevitably, mainstream tropes only thrive because human engineering is attuned to being excited by specific things, i.e. high-frequency sounds promote impact and transition, bass sounds give subtle but obligatory attractiveness, songs with a catchy rhythm automatically have a base appeal, etc. so i don’t think we’ll ever see a major overhaul renaissance across the board for media until our thoughts transcend to hyperspace or s/e


also this dadabots thing is awesome. the meshuggah one makes them sound 1000% better


interesting how, if you go by the Grammys’ definition, a CD/LP/mp3 contains a ‘record’ but cannot contain a ‘song’


Massive Attack - Mezzanine x Fantom
“The user experience, co-designed by the band’s Robert Del Naja, triggers new mix patches using the stems of classic tracks such as ‘Teardrop’, ‘Angel’ and ‘Inertia Creeps’, using iPhone sensor functions such as camera, microphone, acceleration and location for sampling, recording and unique playback”

Very interesting project towards the “softwarification” of the music format.