songs as starting points (spirit rappings #2)

Conversation on the Spirit Rappings post developed around the mix having the guitar and vocal parts hard panned to left and right, so you can pull out my singing and do karaoke. Jay Fienberg characterized this as a way of releasing your music as sources for starting something new as much as end destinations.

I explained the idea:

The story behind the tracking left and right to enable remixing and karaoke is that I’m thinking about ways for songs to contain their own source code, so that every listenable object can easily be disassembled into parts.

The model is the way that web pages always reveal the HTML, CSS and JavaScript that they are made out of. This led to fast uptake of ideas and evolution of techniques as developers cherry picked the best ideas for their own creations, which were themselves available for cherry picking. In the end the web as a whole became a freakishly productive and innovative environment.

Why try to do this with music? Because the long view of musical trends that I’m getting by digging through historical archives is making me aware of the way that music evolves by cherry picking, and this is making me want to structure the musical environment to promote cherry picking.

Even though the change would be structural, the impact would be in the music itself. Weak hooks would disappear from the flow within a generation or two; strong ones would become an even bigger part of the landscape. Better arrangements would be used as skeletons for new work. And the kind of ugly horribleness that the inbreeding of commercial pop culture gives us would be wiped out faster than a race of mules.

Jay replied with a comment that I didn’t get at first:

I think the parallel you’re drawing with web source code both works and is problematic at the same time.

It works in that one can talk about both web pages and musical works as being made up of objective component parts. But, more or less, the web objects are objectively objects, and the music “objects” are only subjectively so.

Although there are different components that go into making a musical work, the music (unlike a web page) isn’t just the sum of those components–it’s more than that. And, IMHO, the “source code” of the music is also more than the collection of the sources.

I agreed that

The parallel with web source code is awkward.

The ability to make a two-voice mixdown its own source code using stereo panning is self-limiting to two-voice music that can be panned this way without making the music worse. There’s a musical price to pay.
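For what it’s worth, the mechanics of the trick are simple. Here is a minimal sketch in Python, using only the standard library’s wave module and assuming 16-bit PCM; the filenames and the synthetic demo “mix” are invented for the example, not part of any real release:

```python
# Sketch: pulling one hard-panned part out of a stereo mix.
# Assumes 16-bit PCM WAV with the vocal hard left and the backing hard right.
import struct
import wave

def split_channels(stereo_path, left_path, right_path):
    """Write each channel of a 16-bit stereo WAV to its own mono file."""
    with wave.open(stereo_path, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    # Interleaved frames: [L0, R0, L1, R1, ...], 2 bytes per sample.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    left, right = samples[0::2], samples[1::2]
    for path, chan in ((left_path, left), (right_path, right)):
        with wave.open(path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(rate)
            out.writeframes(struct.pack("<%dh" % len(chan), *chan))

# Demo: a fake mix with a "vocal" on the left channel, silence on the right.
with wave.open("mix.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(8000)
    vocal = [1000, -1000] * 100  # placeholder samples standing in for a vocal
    w.writeframes(b"".join(struct.pack("<hh", s, 0) for s in vocal))

split_channels("mix.wav", "vocal.wav", "karaoke.wav")
```

Muting one channel at play time accomplishes the same thing; the point is only that a hard-panned mix is trivially disassemblable.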

And then Jay explained:

When I saw a presentation by the authors of Recording the Beatles (amazing book, btw), they played excerpts from the recent 5.1 surround mixes of the Beatles. Those mixes often had 1-2 instruments or voices panned to a speaker, and this allows one to listen to individual parts in isolation and hear much more of how they were recorded, as well as other sounds in the studio that were otherwise buried in the original mono / stereo mixes.

I mention this just as another example of multichannel mixes allowing a different way of getting into the music–there’s definitely something to be said for this approach!

***

I might also look at what you’ve done with this song as simultaneously releasing three versions:

1. the song you hear when you play both channels at once

2. the song you hear when you play the left channel only

3. the song you hear when you play the right channel only

The fact that these versions are all in one file means different things to different audiences–to a listener on an iPod, it’s maybe inconvenient to switch between the versions; to a musician with a multitrack system, maybe it’s a convenient format to work with, etc.

But (and this gets to your question), part of what’s happening is that you are deciding on some part of your music to be the component “atoms”–and this is either arbitrary or an artistic decision, or somewhere in between. And that decision (or arbitrariness) is something people experience as listeners and/or as musicians who can build on your work.

For example, why not record every guitar string on its own track? Or, separate notes above middle C on one track, and notes below on another? Or, make each bar of a piece its own song?

There are a lot of ways to listen to and build upon music in component terms, and those ways are overlapping and simultaneously valid starting points for both experiencing the music and for building new / different musics.

As a musician, you give people some starting points that represent your perspective and process–but then others find their own starting points themselves, as listeners or players.

In this way, I’d see music as embodying potentials more on the order of the web (links) than of web pages (code). The source code of your music is ultimately the “links,” not just the tracks.

So, one way I’d look at what you are doing is helping people get into your music at a different level where they might discover or make new links. And, mostly what I am saying is that, with music, there are a lot of different, overlapping, levels that can work this way.

This goes to the relationship between hypertext in the abstract and Hypertext Markup Language (HTML) in particular. Hypertext maps the world of meaning to a navigable space. Let’s say you had three books with one sentence apiece:

Book 1: I like apples.
Book 2: I like oranges.
Book 3: She hates apples.

Hypertext would enable navigation from Book 1 to Book 2 via the shared concept “I like”, and between Book 1 and Book 3 via the shared concept “apples”. Linkable similarities wouldn’t be limited to such specific features, though. In the abstract there would be a link for every possible layer of meaning that was shared between documents. They are all in English; they are all three words; they are all grammatically correct; two are in the first person; they are all in the form subject-action-object; and on and on. There are infinitely many link structures over any set of input objects.
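The shared-word layer is the easiest of those layers to sketch in code. A toy Python version, using the three books above (the variable names are mine, not part of any real hypertext system):

```python
# Toy sketch: build one possible link structure over the three "books" --
# linking documents that share a word. This is only one of the infinitely
# many layers (shared length, shared grammar, ...) a hypertext could expose.
books = {
    1: "I like apples",
    2: "I like oranges",
    3: "She hates apples",
}

links = {}  # word -> set of book ids containing it
for book_id, text in books.items():
    for word in text.lower().split():
        links.setdefault(word, set()).add(book_id)

# Keep only words that actually connect two or more books.
shared = {word: ids for word, ids in links.items() if len(ids) > 1}
# "apples" links books 1 and 3; "i" and "like" link books 1 and 2.
```

Every other layer of shared meaning would be another such index, computed over a different feature of the documents.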

Jay again:

Why not record every guitar string on its own track? Or, separate notes above middle C on one track, and notes below on another? Or, make each bar of a piece its own song?

Releasing songs as their raw multitrack sources would carry this idea to its practical extreme. Every sample and every track would be preserved in the best possible detail. And why not? It’s true that these would be very big files, but bandwidth and disk space keep getting better. The blocker would be getting music players to do mixdown at play time, since they would have to know how to support the new file formats for raw multitrack recordings.
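The mixdown itself is conceptually simple, which is part of why the idea seems plausible. A minimal sketch of a naive play-time mixdown, assuming equal-length 16-bit mono tracks; the track names and gain values are invented for the example:

```python
# Sketch: play-time mixdown -- sum each track's samples, scaled by a
# per-track gain chosen by the listener, and clip to the 16-bit range.
def mixdown(tracks, gains):
    """tracks: equal-length lists of samples; gains: one float per track."""
    mixed = []
    for i in range(len(tracks[0])):
        s = sum(gain * track[i] for track, gain in zip(tracks, gains))
        mixed.append(max(-32768, min(32767, int(s))))  # hard clip to 16-bit
    return mixed

guitar = [8000, -8000, 8000, -8000]      # placeholder sample data
vocal = [30000, 30000, -30000, -30000]

full = mixdown([guitar, vocal], [1.0, 1.0])
karaoke = mixdown([guitar, vocal], [1.0, 0.0])  # listener mutes the vocal
```

The hard part isn’t the arithmetic; it’s agreeing on a container format for the raw tracks and getting players to support it.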

3 thoughts on “songs as starting points (spirit rappings #2)”

  1. The raw multitrack sources for my musical output over the last year are on the order of 25 gigs total. It’d all easily fit on any current iPod-like device or be inexpensive to store and serve up from Amazon S3 or Dreamhost.

    It’s absolutely practical to now release many versions and raw sources of music online–it’s in many respects simpler to release 25 gigs of raw audio sources online than it is to get 650 MB of that onto a CD that is shipped to people.

    But, at what point are we just talking about recorded sound objects vs music? Not that I think there is a big distinction that needs to be made in absolute terms, but rather in any specific relationship between music creator and listener (or, co-creator).

    There is an art to the “release” of music, which reflects the process of curating, editing, aggregating, sequencing, packaging etc., as well as the relationship with the music’s potential audiences.

    And, this gets back to what I was saying before about what you choose to be the “atoms” of your music–you are choosing a certain kind of release, either as an expression of your art, or for arbitrary reasons, or for both. And that choice both reflects and influences how people hear it–e.g., is it just a bunch of recordings or is it music? Is it an album or a song? Is there one version or are there many?

    I see my own recorded music as creating musical instruments that other people play. I think everyone’s recorded music really functions in this way, but I definitely feel this way about my own. Everyone (who listens to or plays the music) makes it into their own music when they play it. And, with my own, I am excited by the possibility that some people will find creative and interactive ways to play it beyond just the songs passively showing up in the shuffle on iTunes. (But, even in the passive case, the music itself is interactive and can become your own–can change into something new and personal to you.)

    ***

    btw, FLAC supports up to 8 channels. But, besides the disk space / file size, digital devices that process multiple channel audio need faster processors and more memory than 2 channel devices. Still, devices like the H4 show that iPod-like devices already can do a lot with hi-fi multitrack audio.

  2. This resonated for me from Jay’s comment:

    “I see my own recorded music as creating musical instruments that other people play. I think everyone’s recorded music really functions in this way, but I definitely feel this way about my own”.

    The corollary is neither hard to reach nor controversial: I love the use of my own and others’ available sound clips as samples for manipulation and processing.

    In an earlier time, one had to worry about concepts like “plunderphonics” to realize the possibilities in appropriation of sound. That idea seems more quaint than revolutionary now.

    With Creative Commons and public domain sources, the whole paradigm shifts. I can go to the Freesound Project or the mixter or librivox or netlabels which permit sampling and snap up a recording of this or that. I can then sequence it through my 25 dollar softsynth and create something new. The sound is not just an instrument, but also a string, or a motif, or a loop, or even an indescribable discordant pad. The customary definitions are merely touchstones, old-technology concepts inadequate to describe the starchild of possibility inherent in captured open source sound.

    When sound manipulation offers so many possibilities–most of which are accessible via use of freeware or inexpensive shareware–then the “buy my record, worship me, make me a star” thing eventually fades away into some obscure past. Collaboration and exploration step in and create arguably fewer fankids and groupies, and more pioneers and innovators.

    Generations removed from peoples’ tastes tried to create a rarified form of music appreciation, accessible to only a chosen few. But now, the experience of being bathed in the possibility of manipulated sound creates huge niches of listeners no longer bound by the old conventions of how they “must” or “should” make music. Instead, new ways of experiencing music and sound can arise and evolve with quantum software-release speed.

    I can take Lucas’ voice, and make it into a monastic drone. I can take his guitar and make it into a warm blur of gorgeous echo. Yet the fun begins when the next remixter takes what I create, and turns it into something new and unexpected. It’s no longer arty condescension to make some abstract point. It’s a swimming pool of sound, remixed and reveled within, and the water is just fine. That’s the possibility in open source music, and, like the myth of salvation, it’s available to all.

Mentions

  • songs as instruments (spirit rappings #3) « the Wordpress of Lucas Gonze
