Open form: Songs as Systems


I’m going to try to start a bit of a debate here, because this is a topic I find very interesting, and I’m sure a lot of the people who orbit this forum will have good input on it.

One day I was drunk with some friends and we were talking about what the next big leap in music could be, now that basically any sound and timbre can already be achieved by anyone with a laptop (or even a phone).

I think that maybe the big change is not in the sound anymore, but in the format or structure of music itself: rethinking what a song or an album has to be. Maybe that means more “open-form” music, leaning on the “generative” or algorithmic side, which brings music closer to the category of a program instead of a rendered or finished soundwave: some parts may be fixed while others change depending on some conditions. But with these systems as part of the song itself, not only as a way to generate ideas for it.
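To make the “song as a program” idea concrete, here’s a tiny sketch in Python. Everything in it is invented for illustration: a fixed chord loop (the “written” part) plus a melody that is regenerated on every play, driven by some external condition (time of day, weather, listener input… here just an integer seed).

```python
import random

# Toy sketch of a "song as a program" (all names/numbers invented):
# the chord loop is fixed, the melody is regenerated on every play,
# driven by some external condition.
FIXED_CHORDS = ["Am", "F", "C", "G"]      # the part that never changes
A_MINOR = [57, 59, 60, 62, 64, 65, 67]    # MIDI notes the melody may use

def render_pass(condition: int, bars: int = 4) -> dict:
    """One playback = one run of the program, not one fixed waveform."""
    rng = random.Random(condition)        # same condition -> same take
    melody = [rng.choice(A_MINOR) for _ in range(bars * 4)]
    return {"chords": FIXED_CHORDS, "melody": melody}

morning = render_pass(condition=7)
evening = render_pass(condition=42)
# the fixed part is identical between plays; the open part differs
```

The point of the seed is that a variation is reproducible: two listeners under the same “condition” hear the same take, while different conditions yield different ones, which keeps the song a defined work rather than pure noise.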

We are dragging along a lot of conventions that come from obsolete formats and may have to be rethought, like… why, in the streaming era, does music artwork still have to be a square? In the same way, ever since recording became possible we’ve assumed that a song is, most of the time, a soundwave that is exactly the same every time you play it, but that wasn’t true before (and still isn’t today in live music with some level of improvisation). Nowadays we have the means to make music with many more degrees of freedom or variability, and even interaction with the listener, so they become an active part.

Maybe it is a matter of the tools for making stuff like this maturing and becoming easier, or some kind of standard format/platform emerging so that not every one of these songs has to be an app. Of course, the biggest problem is that every new technique is doomed to become a gimmick and focus too much on the technique itself. These ideas have been reserved for research or intellectual circles, but I’d like to see more mainstream experiments in this direction that use the “softwareness” (if that’s a word) as a means to enhance the intention of the song, and not as an end in itself.

I wrote that way too fast and my English is a bit rusty, so sorry in advance. For clarification, I’m not talking specifically about algorithm-made music, but I wonder if you think there is going to be a general shift in music from something “definitive and rendered” to something more open-form, so that a song is not only music information but code too. Should we question what a “song” means?


One thing I found extremely interesting was when Kanye West’s The Life of Pablo was released as streaming-only at first, and he kept tweaking and uploading different versions of the songs. I don’t read that as a “statement on the temporal nature of streaming music” or anything like that (more just a reflection of his perfectionist nature), but there are definitely some interesting experiments to be done there that weren’t possible with physical releases.

I could see someone setting up the ability to “vote” on their favorite versions of songs, then perhaps buy a “one-off” physical copy of that album, customized for them. Many other possibilities as well.


I’ll add that I think one part of making this successful is not always taking it to the extreme. It doesn’t all have to be an infinite generative song, from the sounds down to the notes.

For example, the same way we may enjoy jamming over a song we like, I can imagine something like a “song” where all the tracks are rendered as usual except one, which the listener can play through some kind of interface, with some restrictions (quantized to scale and timing?) so it always sounds good.
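A minimal sketch of what that “restricted so it always sounds good” interface could do, assuming the simplest possible rule (snap whatever the listener plays to the song’s scale; all names invented):

```python
# Snap whatever note the listener plays to the song's scale, so any
# input still fits the fixed, rendered tracks. (Illustrative only.)
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the scale

def snap_to_scale(midi_note: int, scale=C_MAJOR) -> int:
    """Move a MIDI note to the nearest pitch class in the scale."""
    octave, pc = divmod(midi_note, 12)
    candidates = scale + [scale[0] + 12]          # allow wrap to next octave
    nearest = min(candidates, key=lambda s: abs(s - pc))
    return octave * 12 + nearest

# a mashed C#4 (61) lands on C4 (60); an in-key E4 (64) passes through
```

A real version would quantize timing to the grid the same way, but the idea is the same: the listener gets freedom inside guardrails the artist defined.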

But again, this would have to escape the gimmick: you have to like the song in the first place, otherwise it would just be a curiosity that you try once and then forget about.


This feels like a logical extension of something like NI’s Stems format?


Also had Life Of Pablo spring to mind with regard to OP’s point: an album that’s never totally finished, the way games get updates.

I’ve discussed releasing tracks as stems with a friend of mine when we were talking about ‘what’s next’. I didn’t realise (but should have guessed) that it’s already a thing. A bit niche, but it would be nice to mix and make dubs of a track you’ve bought. I can understand why an artist might not want to do this, though.


I’m actually secretly hoping everyone starts to make music that sounds like motown again and then the world ends and we die happy


this is really interesting, i don’t know much about coding and stuff but reading this reminded me of this project:

read the brief introduction but basically the computer listens to an album 30 times and then randomly makes new tracks using samples from the songs. they have ‘re-done’ albums by The Beatles, Meshuggah, nofx & more


I’m really glad someone brought this up. Procedural generation of music is definitely a very interesting concept and, with the recent advancements and huge interest in artificial intelligence, it’s also getting a lot closer. I guess extrapolating this to the context of Life of Pablo brings about a whole new debate about what degree of control an artist should exercise over their finished work. As a result, I don’t really think proceduralism should remove an artist’s authority over what something sounds like, but can definitely be a powerful tool, especially for more conceptual art.

I generally see a huge future for it in interactive media, such as video games, where the soundtrack can be tailored to fit the player character or mood or decisions made. Naturally, this doesn’t need to be confined to sound and music only, the same sort of procedural logic can be applied to visual elements as well.

But then again, how far should this be allowed to go? If every piece of art you interact with can be tailored to you, imagine tailoring music to the tastes of some nazi asshole - you’d just end up with a version of Ultralight Beam where a perfectly synthesised Kanye voice indistinguishable from the real one is just denying the holocaust and shouting “sieg heil” over the backing track.


I was just reading a few books about the first generation of “Free Improvisers” (Ben Watson’s biography of Derek Bailey, David Toop’s Into the Maelstrom, Richard Scott’s Free Music, Improvisation and the Avant-Garde), and it seems that a lot of free improvisers already thought of music as an “open-ended” process (and not as a final form). John Cage’s I Ching-inspired experiments and explorations of indeterminacy can also be seen as attempts to liberate “composition” (putting process in place of a final “stable” form). Though not music, Robert Morris’s essay Anti-Form and MoMA’s 1970 exhibition Information seem to be relevant links between action/process/system.


I find this stuff really interesting and did a module on generative music and the like at uni, but it scares me a bit. The one thing humans can do that (to my knowledge) AI can’t do properly yet is create from imagination. I don’t want the computers to start winning at that too hahahaha.

The open-ended process of Kanye’s Life Of Pablo is sick, but I can’t think of many artists, particularly underground/electronic artists in the current economic climate, who would have the time to do this with the dramas of life in the way.


These are great references that I will look up for sure.
Exactly, maybe the question today is about mass-distributing this open-ended music, and whether it really has the potential to add value for general listeners or will stay as “experiments” forever.


In my mind it’s almost certain that we’ll hit a point of being able to perfectly synthesise copies of someone’s voice (Adobe already demonstrated this functionality), so I guess an interesting question to ask is who is working to ensure we’re able to tell the authenticity of audio? Should we start cryptographically signing it? This should probably be a different thread.
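To make the signing idea concrete, here’s a toy sketch. Real provenance would use public-key signatures (e.g. Ed25519), so anyone could verify a file without holding a secret; the stdlib HMAC below is just the simplest stand-in to show the shape of it (all names invented):

```python
import hashlib
import hmac
import os

# Toy sketch of "signing" an audio file so tampering is detectable.
# A real scheme would use asymmetric signatures; HMAC is symmetric
# and used here only because it's in the standard library.
SECRET = os.urandom(32)  # the artist/label's key, illustrative only

def sign(audio_bytes: bytes) -> str:
    """Produce an authentication tag for the exact bytes of the file."""
    return hmac.new(SECRET, audio_bytes, hashlib.sha256).hexdigest()

def verify(audio_bytes: bytes, tag: str) -> bool:
    """Constant-time check that the bytes match the tag."""
    return hmac.compare_digest(sign(audio_bytes), tag)

original = b"\x00\x01fake-wav-bytes"
tag = sign(original)
# the untouched file checks out; any edit, even one byte, breaks the tag
```

The interesting part for this thread is the interaction with open-form music: if the “song” is a program whose output varies, what exactly do you sign, the code or each rendered pass?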


I would love to see a world where this concept is applied in places like restaurants, stores, airports etc. Instead of radio/streaming or a locally-hosted playlist that shuffles or repeats, hire an artist to create a generative-ish piece tailored to the space and the atmosphere they’re after–sort of like being an architect but with sound. There could be interesting applications of psychoacoustics in there, like how shop owners now sometimes play high-pitched tones to keep teens away (the tones are too high for adults to hear), but hopefully turned towards positive and non-adversarial ends.

Tools for this kind of music becoming more available and easier to use would definitely be a huge step towards getting this into the consciousness. Ableton’s session view follow actions and a lot of their MIDI effects start to gesture at these possibilities, although I don’t think many people get deep into what you can actually do. Democratization of the tools would also help with the (crucial) quest to escape the gimmick. I feel like part of escaping the gimmick is making it natural, and someone who’s not a “head” might be more likely to hit on that…
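For anyone who hasn’t gone deep on follow actions: the core mechanic is tiny, basically a graph of “which clip may come next”, and that alone is enough to make an arrangement re-assemble itself differently on every play. A hedged sketch (clip names and the graph are invented, not Ableton’s API):

```python
import random

# Each clip lists the clips that may follow it; empty list = stop.
FOLLOW = {
    "intro":     ["groove"],
    "groove":    ["groove", "breakdown"],  # may loop or move on
    "breakdown": ["groove", "outro"],
    "outro":     [],
}

def arrange(rng: random.Random, start: str = "intro", max_clips: int = 8) -> list:
    """Walk the follow-action graph to produce one play-through."""
    seq, clip = [start], start
    while FOLLOW[clip] and len(seq) < max_clips:
        clip = rng.choice(FOLLOW[clip])
        seq.append(clip)
    return seq

run = arrange(random.Random(0))
# every run starts at the intro, but the middle re-orders itself
```

Different seeds give different walks, but every transition is one the artist allowed, which is exactly the “guardrails, not chaos” balance people in this thread keep circling.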


Or maybe just question if this authenticity really matters at all


This point of view is so interesting to me as an architecture student, and it reminds me of the story of how Brian Eno “invented” ambient after being at the Cologne airport, admiring the architecture and the views, while some generic pop music was playing through the speakers; he thought there had to be some kind of music able to enhance a space from the background without trying to draw attention. So in a way ambient music had an architectural dimension from the beginning.


Most interesting topic on the forum rn


I can remember patten talking about songs/live sets as a never-ending open form, so more like a stream than a file.


I can think of some interesting examples. Beck released an album as sheet music and let people record their own versions a few years back. OPN released the MIDI for Garden of Delete and, I believe, is packaging a compilation of songs people have made with it.

It would be interesting to see a modern version of an Orchestrion (a player piano with other instruments as well): an agreed-upon set of tools for which musicians can program songs. Then at home people could fiddle with tempo, key, etc.
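If that “agreed-upon set of tools” shipped songs as note data rather than audio, the at-home fiddling becomes trivial arithmetic on the events. A toy sketch (the event format is invented for illustration):

```python
# A "song" as note events rather than audio: (start_beat, midi_pitch,
# duration_beats). Key and tempo edits are then just arithmetic.
song = [
    (0.0, 60, 1.0),  # C4 on beat 0
    (1.0, 64, 1.0),  # E4 on beat 1
    (2.0, 67, 2.0),  # G4 on beat 2, held two beats
]

def transpose(events, semitones):
    """Change key by shifting every pitch."""
    return [(t, p + semitones, d) for t, p, d in events]

def stretch(events, factor):
    """Change tempo by scaling all times; factor 2.0 = half speed."""
    return [(t * factor, p, d * factor) for t, p, d in events]

up_a_third = transpose(song, 4)   # now starts on E4
half_speed = stretch(song, 2.0)   # same notes, twice as long
```

This is essentially what MIDI already is, which is why the OPN Garden of Delete release mentioned above is such a natural experiment in this direction.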


I think if you think about music as the product of a community or a scene rather than the product of individualized single people, then there are some ways in which this situation already exists.

Dub culture, remixing, edits/bootlegs/whitelabels, sampling, etc provide a ton of different ways for the life of a song to evolve and transform. I can’t think of an example off the top of my head, but think of a particular song where the version that everybody knows turns out to actually be a remix of the original track. Then which one is the “real version”?

Or think of a particular genre or subgenre that is formed from a particular set of samples or elements. If I show you 3 tracks by 3 different artists, and each track uses the same drum break sample, same bass timbre, and the same maj7th chord stabs, what is the relationship between those songs, really? Are they totally unique individual creations? Are they different versions of the same song? Maybe that’s just a semantic difference.

Don’t want to get too political, but I think the only way forward for that sort of amorphous creation is a future where intellectual property is less of a commodity and the idea of an “artist” isn’t so tied to a person’s livelihood and all the individualized branding that comes with that.

Internet memes might be a good example of this sort of thing. No one can really claim ownership of a meme. We just all kind of collectively produce, reproduce and remix them


what @snakehead, @ebb, and @idlestate recapitulated w/r/t how things like jazz, ambient, and remixes already abstract the ‘being’ of a piece is what first came to mind for me, and i agree. especially with jazz, which is hard to envision without understanding the amount of constant touring those players did with so many variations of their songs

i find the first thing mentioned by @idlestate in the OP about questioning the aesthetics of a square album cover, and the like, particularly interesting. breaking barriers with the aesthetics, release format, etc. are things that have been and will continue to develop significantly (especially with the renaissance of VR approaching mainstream levels soon)

that being said, i think musically (in both the present and much of the past), the ‘being’ of music broke free from expectations and logistical qualifications a while ago. i just don’t believe it’s promoted as much as it should be, due to the monotony of music journalism (which should be bringing these ideas to the fore): arbitrary genre mapping and subjective feelings

i’m very excited, and perhaps a bit scared, about what the future holds for VR technology being integrated into the abstracts of sound on the large scale we’re discussing. a final point: inevitably, mainstream tropes only thrive because our human engineering is attuned to being excited by specific things. i.e. high-frequency sounds promote impact and transition, bass sounds give subtle but obligatory attractiveness, songs that catch a rhythm automatically have a base appeal, etc. so i don’t think we’ll ever see a major overhaul/renaissance across the board for media until our thoughts transcend to hyperspace or s/e