releasing your music as much as sources for starting something new as end destinations.
I explained the idea:
The story behind the tracking left and right to enable remixing and karaoke is that I’m thinking about ways for songs to contain their own source code, so that every listenable object can easily be disassembled into parts.
Why try to do this with music? Because the long view of musical trends that I’m getting by digging through historical archives is making me aware of the way that music evolves by cherry picking, and this is making me want to structure the musical environment to promote cherry picking.
Even though the change would be structural, the impact would be in the music itself. Weak hooks would disappear from the flow within a generation or two; strong ones would become an even bigger part of the landscape. Better arrangements would be used as skeletons for new work. And the kind of ugly horribleness that the inbreeding of commercial pop culture gives us would be wiped out faster than a race of mules.
Jay replied with a comment that I didn’t get at first:
I think the parallel you’re drawing with web source code both works and is problematic at the same time.
It works in that one can talk about both web pages and musical works as being made up of objective component parts. But, more or less, the web objects are objectively objects, and the music “objects” are only subjectively so.
Although there are different components that go into making a musical work, the music (unlike a web page) isn’t just the sum of those components–it’s more than that. And, IMHO, the “source code” of the music is also more than the collection of the sources.
I agreed that
The parallel with web source code is awkward.
Using stereo panning to make a two-voice mixdown its own source code is self-limiting: it only works for two-voice music that can be panned this way without making the music worse. There’s a musical price to pay.
And then Jay explained:
When I saw a presentation by the authors of Recording the Beatles (amazing book, btw), they played excerpts from the recent 5.1 surround mixes of the Beatles. Those mixes often had one or two instruments or voices panned to a speaker, and this allows one to listen to individual parts in isolation, and hear a lot more of how they were recorded as well as other sounds in the studio that were otherwise buried in the original mono / stereo mixes.
I mention this just as another example of multichannel mixes allowing a different way of getting into the music–there’s definitely something to be said for this approach!
I might also look at what you’ve done with this song as simultaneously releasing three versions:
1. the song you hear when you play both channels at once
2. the song you hear when you play the left channel only
3. the song you hear when you play the right channel only
The fact that these versions are all in one file means different things to different audiences–to a listener on an iPod, it’s maybe inconvenient to switch between the versions; to a musician with a multitrack system, maybe it’s a convenient format to work with, etc.
But (and this gets to your question), part of what’s happening is that you are deciding on some part of your music to be the component “atoms”–and this is either arbitrary or an artistic decision, or somewhere in between. And that decision (or arbitrariness) is something people experience as listeners and/or as musicians who can build on your work.
For example, why not record every guitar string on its own track? Or, separate notes above middle C on one track, and notes below on another? Or, make each bar of a piece its own song?
There are a lot of ways to listen to and build upon music in component terms, and those ways are overlapping and simultaneously valid starting points for both experiencing the music and for building new / different musics.
As a musician, you give people some starting points that represent your perspective and process–but then others find their own starting points themselves, as listeners or players.
In this way, I’d see music as embodying potentials more on the order of the web (links) than of web pages (code). The source code of your music is ultimately the “links,” not just the tracks.
So, one way I’d look at what you are doing is helping people get into your music at a different level where they might discover or make new links. And, mostly what I am saying is that, with music, there are a lot of different, overlapping, levels that can work this way.
This goes to the relationship between hypertext in the abstract and Hypertext Markup Language (HTML) in particular. Hypertext maps the world of meaning to a navigable space. Let’s say you had three books with one sentence apiece:
Book 1: I like apples.
Book 2: I like oranges.
Book 3: She hates apples.
Hypertext would enable navigation from Book 1 to Book 2 via the shared concept “I like”, and between Book 1 and Book 3 via the shared concept “apples”. Linkable similarities wouldn’t be limited to such specific features, though. In the abstract there would be a link for every possible layer of meaning that was shared between documents. They are all in English; they are all three words; they are all grammatically correct; two are in the first person; they are all in the form subject-action-object; and on and on. There are an infinite number of link structures from any input objects.
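To make the “a link for every shared layer of meaning” idea concrete, here is a toy Python sketch. The particular feature extractors are arbitrary choices of mine, standing in for the infinitely many layers a real system could use:

```python
def features(text):
    """Extract a few of the infinitely many possible layers of meaning."""
    words = text.rstrip(".").split()
    return {("word", w.lower()) for w in words} | {
        ("length", len(words)),
        ("person", "first" if words[0] == "I" else "third"),
    }

def links(books):
    """Return every pair of books along with the features they share."""
    result = {}
    titles = list(books)
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            shared = features(books[a]) & features(books[b])
            if shared:
                result[(a, b)] = shared
    return result

library = {
    "Book 1": "I like apples.",
    "Book 2": "I like oranges.",
    "Book 3": "She hates apples.",
}
```

Here `links(library)` connects Book 1 and Book 2 through “I like” (and first person, and length), and Book 1 and Book 3 through “apples”. Add more extractors and you add more link structures over the same three books.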
Why not record every guitar string on its own track? Or, separate notes above middle C on one track, and notes below on another? Or, make each bar of a piece its own song?
Releasing songs as their raw multitrack sources would carry this idea to its practical extreme. Every sample and every track would be preserved in the best possible detail. And why not? It’s true that these would be very big files, but bandwidth and disk space keep getting better. The blocker would be getting music players to do mixdown at play time, since they would have to know how to support the new file formats for raw multitrack recordings.
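What mixdown at play time would ask of a player is, at its core, just a weighted sum of the source tracks. A minimal sketch in Python (the function name, the float-sample representation, and the linear pan law are all my assumptions, not a proposed format):

```python
def mix_down(tracks, length):
    """Sum mono tracks into one stereo pair at play time.

    tracks: list of (samples, gain, pan) tuples, where samples is a list
    of floats in [-1, 1], gain is a float, and pan runs from -1 (hard
    left) to +1 (hard right).
    """
    left = [0.0] * length
    right = [0.0] * length
    for samples, gain, pan in tracks:
        l_weight = gain * (1 - pan) / 2  # linear pan law
        r_weight = gain * (1 + pan) / 2
        for i, s in enumerate(samples[:length]):
            left[i] += s * l_weight
            right[i] += s * r_weight
    return left, right
```

A player that did this could honor the artist’s default gains and pans while letting a listener mute, solo, or re-pan any track.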