Jay, I agree that the parallel with web source code is awkward.

Making a two-voice mixdown its own source code via stereo panning is self-limiting: it only works for two-voice music that can be hard-panned that way without making the music worse. There’s a musical price to pay.
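To make the trade-off concrete, here’s a minimal sketch (names are my own, purely illustrative): hard panning each voice to its own channel means the mixdown decomposes exactly back into its sources, but the pan positions are dictated by recoverability rather than by the music.

```python
# Hypothetical sketch: hard-pan voice A left and voice B right, so the
# stereo mixdown decomposes exactly back into its two sources.
def mix_two_voices(voice_a, voice_b):
    # Each channel carries exactly one voice, so nothing is summed and no
    # information is lost; the cost is that the pan positions are now fixed.
    return list(zip(voice_a, voice_b))  # (left, right) sample pairs

def recover_voices(stereo_pairs):
    # The "decompilation" step is trivial: split the channels back apart.
    left = [l for l, _ in stereo_pairs]
    right = [r for _, r in stereo_pairs]
    return left, right
```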

For this overall way of thinking to take off, the availability of source has to do something for the listener. Maybe it would allow for context-sensitive auto mixes, like a 3-D experience. You could literally walk around in a mix that contained its sources!

3-D is a fun concept. There would be a set of sources, each with spatial information attached. Maybe it’s a soundscape for a walk in the forest: a praying mantis at the base of a tree, a bird up in the branches, a stream 25 feet away. The sound at any one time and in any one location is the projection of those three dimensions onto the two channels in which we actually hear.
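Here’s a rough sketch of that projection, just to make the idea concrete. Everything in it is assumed for illustration: the `Source` record, the inverse-distance rolloff, and the constant-power pan law are stand-ins of my own, not any real spatial-audio API.

```python
import math
from dataclasses import dataclass

@dataclass
class Source:
    samples: list  # mono samples for this source
    x: float       # position in metres, listener-space
    y: float
    z: float

def render_stereo(sources, listener_pos, facing_rad):
    """Project 3-D sources onto the two channels we actually hear."""
    lx, ly, lz = listener_pos
    n = max(len(s.samples) for s in sources)
    left, right = [0.0] * n, [0.0] * n
    for s in sources:
        dx, dy, dz = s.x - lx, s.y - ly, s.z - lz
        dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1.0)
        gain = 1.0 / dist                    # simple inverse-distance rolloff
        # Source angle relative to the direction the listener is facing.
        angle = math.atan2(dy, dx) - facing_rad
        pan = math.sin(angle)                # -1 = hard left, +1 = hard right
        theta = (pan + 1.0) * math.pi / 4.0  # constant-power pan law
        l_gain, r_gain = gain * math.cos(theta), gain * math.sin(theta)
        for i, v in enumerate(s.samples):
            left[i] += v * l_gain
            right[i] += v * r_gain
    return left, right

# Example: the forest soundscape, rendered for one listener position.
mantis = Source([0.1, 0.2, 0.1], x=1.0, y=0.0, z=0.0)
bird   = Source([0.3, 0.1],      x=2.0, y=5.0, z=4.0)
stream = Source([0.05] * 3,      x=0.0, y=-7.6, z=0.0)  # ~25 ft away
left, right = render_stereo([mantis, bird, stream], (0.0, 0.0, 0.0), 0.0)
```

Re-rendering per listener position is what would let you walk around in the mix: the stereo file stops being the artifact, and the sources plus their positions become it.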

If you built an experience like that, the individual tracks would have to be broken out. But that’s not music, even though I *could* see that being something in games.

Can you explain more about “the music isn’t just the sum of those components–it’s more than that. And, IMHO, the source code of the music is also more than the collection of the sources”?