HTML N vs local apps

Paul Kamp on HTML5 vs the App Store:

I say HTML5. App Stores are great but they will change dramatically over time to direct delivery from the developers. Developers themselves will use HTML5 so they can break the dependence on App Stores and the distribution fees associated with them.

When Apple originally released the iPhone all applications were supposed to be network based. There was a big hue and cry until Apple relented and allowed developers to develop directly for the phone.

With the evolution of technology it is time to go back to the original direction of the iPhone. The real benefit will be that they will not have to develop for any specific phone and can support any and all of them with one application.

That is the real goal of any developer.

One thing going on in the background here is that the underlying technology for browser apps to compete with desktop apps is still pretty raw. HTML5 gives you local storage, but that’s a new technology. What do you use it for? Do you sync it with an Oracle backend? How do you resolve conflicts between the local data and the cloud data? It’s not that this kind of problem is unsolvable, it’s that the technology is immature.
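To make the rawness concrete, here is about all the help local storage gives you; everything after the setItem call is up to you. This is only a sketch, and the /sync endpoint and the last-write-wins rule are made up for illustration:

```typescript
// Sketch only: stash edits locally, then try to reconcile with a server later.
// The /sync endpoint and "newest timestamp wins" policy are stand-ins, not a real API.
interface Draft {
  id: string;
  body: string;
  updatedAt: number; // milliseconds since epoch
}

function saveLocal(draft: Draft): void {
  localStorage.setItem(`draft:${draft.id}`, JSON.stringify(draft));
}

async function syncDraft(draft: Draft): Promise<Draft> {
  const res = await fetch(`/sync/${draft.id}`);
  const remote: Draft = await res.json();
  // Conflict resolution is the hard part; "newest timestamp wins" is the
  // crudest possible policy and silently loses data when both sides changed.
  const winner = remote.updatedAt > draft.updatedAt ? remote : draft;
  saveLocal(winner);
  return winner;
}
```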

unthinking scrollbars

Sometimes having scrollbars on a web page is a good affordance. Scroll bars on a web log that’s a series of text areas make a lot of sense — scrolling reflects both time and text. The app and the widget go together.

But for a lot of web apps scroll bars aren’t a great affordance, and they make the site harder to use rather than easier. These current-gen apps let the app saturate the entire browser window but don’t overflow it.

Web page design usually starts by assuming a scroll bar. But that overlooks the problem of how users discover what’s accessible via the scroll bar.

On Myspace the most important navigation technique is to scroll the page until you stumble on what you’re looking for. Just keep going and eventually you’ll see that bit of text or widget somewhere in the thicket of bling. Hunt in the visible page, and if you don’t see what you want, expose some more of the page.

A nav bar or some other explicit navigational aid would be a lot easier and more effective. To find the comments, have a link to them *above the fold* in the first screenful. Not just comments — anything that a user might look for needs a discoverable path above the fold.

And once you’re putting all those links above the fold, what exactly is the benefit of a scrolling page? Why not move that functionality — the comment widget, the player widget — to units that aren’t loaded until the user asks for them? The original page will be lighter and faster, and users won’t have the cognitive burden of divining what the scroll bar will allow them to access.
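Deferring a widget until somebody asks for it isn’t much code, either. A rough sketch, with a made-up button id and a made-up comments URL:

```typescript
// Sketch: don't fetch or render the comment widget until the user requests it.
// "#show-comments", "#comments", and /comments.html are hypothetical.
const button = document.querySelector<HTMLButtonElement>("#show-comments");
const container = document.querySelector<HTMLElement>("#comments");

button?.addEventListener("click", async () => {
  const res = await fetch("/comments.html"); // loaded only on demand
  if (container) {
    container.innerHTML = await res.text();
  }
});
```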

One kind of thing that a scroll bar is the right metaphor for: more of the same. When you have a table of names, and it starts “Alice”, “Bob”, “Carol”, and the next row is hidden offscreen, then a scroll bar is a natural way to navigate. You know what’s coming when you scroll down.

But a lot of the time a scroll bar is olden days thinking, just a habit from the days when web sites were static text by default. It’s paper-oriented thinking.

decentralized sole sourcing at Holy Roar Records

Steve Gravell says:

It boils down to Artists and Labels having too much choice over where to put their music and where to call home on the web; so how about having your own site, and you can host it all over there yourself. It’s really not that hard nowadays! Let’s call it Digital DIY.

The essence of Digital DIY is that you not wrap your home on the web inside that of another. You create your very own destination. Somewhere personal. Somewhere unique. Where you live, where you store things, and hopefully where both your fans and other 3rd parties can come to find more about what you’re up to.

Can’t they use iTunes, can’t they use 7digital? can’t they use both of these and much much more? Well sure they can, and they do. Their distribution channels already push to many services such as these. But isn’t this enough? No, I don’t think it is. Why do they even have their own domain and their own website at all? Aren’t they happy putting up with only having a MySpace page, a Last.fm page, a PureVolume page, a Facebook page, a Twitter profile, and a blog on Blogger? No, I guess not. What it seems like they were looking for is a place they can call their own.

And Steve’s solution is going to be open source.

Anyhow, the idea is to have each person or label host their own music, and then have software hook up all these different sources into a single integrated experience at the point of delivery on a third-party site. Alice and Bob host their own sites with their own music; Carol embeds both of them in her site without having to rehost the MP3s. The jargon I made up for this in the “solutions” slide of my web of songs talk was “Decentralized sole sourcing”.
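A rough sketch of Carol’s end of the bargain, assuming Alice and Bob each publish a simple track list on their own domains. The URLs and the JSON shape here are invented, and cross-origin fetches would need their servers’ cooperation:

```typescript
// Sketch: aggregate tracks hosted on other people's sites without rehosting them.
// alice.example, bob.example, and the {title, url} JSON shape are hypothetical.
const sources = [
  "https://alice.example/tracks.json",
  "https://bob.example/tracks.json",
];

async function buildPlaylist(): Promise<void> {
  const playlist = document.createElement("ul");
  for (const src of sources) {
    const tracks: { title: string; url: string }[] = await (await fetch(src)).json();
    for (const track of tracks) {
      const item = document.createElement("li");
      const audio = document.createElement("audio");
      audio.controls = true;
      audio.src = track.url; // the MP3 stays on Alice's or Bob's server
      item.append(track.title, audio);
      playlist.append(item);
    }
  }
  document.body.append(playlist);
}
```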

BTW, the “On Probation by Youves” track there is lots of fun. If Steve’s library was up and running already I’d use it to embed the track here. :-)

music web and client side remixes

Comments on the web of music post:

K. Prichard:

MP3 files can contain text, of course, and I’ve occasionally found lyrics stored inside TEXT and USLT frames. But there’s no consistency at all, probably never will be – more likely to find spam inside a TEXT frame.

Your idea for linking to time points is a cool notion, Lucas. Related to this, Real’s servers provide for a “start” parameter on a/v URIs, allowing one to jump to a time point, e.g.

http://play.rbn.com/?url=demnow/demnow/demand/2009/dec/audio/dn20091231.ra&proto=rtsp&start=00:28:56

Some of the various SMIL specs provide begin and end params for the same purpose (http://is.gd/5I3jL). Aside from that and Real’s faded format, my hunch is that most a/v is not very content-addressable, partly due to the fact that a given song can be found in the wild with many encoding variations. If I make in/out time points for lyrics on my rip of a CD track, your rip might not sync with it. Also, radio vs. album versions of a song may vary in duration and content.

Event-based synchronization, i.e. the beat-counting idea Piers brings up, might be worth looking into:

<a href="example.mp3#t=1017b,1683b" class="chorus">chorus</a>

This would need a filter to recognize beats and count them. Possible, just not as simple as time. Might be more consistent than seconds-based.

Perhaps there’s another type of common event found in audio streams that could provide consistency, but I like drum beats because they’re less likely to get corrupted or folded than high frequencies, and less common than human voice-range freqs.

The karaoke industry seems to have cracked this nut, but I’m gonna hazard a guess that it’s all proprietary.

These guys sell player software that syncs lyrics for 1 million songs, they claim: http://is.gd/5I48w. They appear to target music teachers in their marketing.

Piers Hollott:

When you think about it, a technological component in a media player can auto-magically beat-sync two tracks by comparing basic structure and determining BPM. Word documents used to be the bane of the structured data movement, because they trapped content in a non-structured format, but ODF and OOXML have changed that game completely, creating a new class of semi-structured data; so why not music or video?

It’s fascinating to consider that if more artists released works under CC-NC by attribution, remix artists could provide additional value by micro-tagging individual samples within the deeper structure of their compositions – particularly if this functionality were baked into the software used to assemble the composition.

In addition, isn’t the original theory behind Pandora based on linking chord progressions and such, or is it more general? I never really got a bead on what Pandora was actually doing.

It would be utterly amazing to link into music files based on high level concepts like “the 23rd through 27th beats”, “the Doobie Brothers sample”, “the I-VI-II-V section”.

I suppose you could do it in two parts. One, you’d have a semantic map of a song, something like sheet music but much richer; it would be able to express things like “this part is a Doobie Brothers sample.” Two, you’d have a piece of software that applies the map to a particular rip or encoding of the song, so that the map would be applicable to all the different rips and encodings.
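Hand-waving that into a data structure, it might look something like this. The beat-indexed regions and the single offset per rip are just my guess at a starting point, not anything standardized:

```typescript
// Sketch: a "semantic map" of a song, plus a crude way to apply it to one rip.
// The region labels and the single-offset alignment are illustrative only.
interface Region {
  label: string;        // e.g. "chorus", "Doobie Brothers sample", "I-VI-II-V section"
  startBeat: number;
  endBeat: number;
}

interface SongMap {
  bpm: number;          // assume a constant tempo for simplicity
  regions: Region[];
}

// Convert a beat-indexed region into seconds for one particular encoding,
// which may start slightly earlier or later than the reference rip.
function regionSeconds(map: SongMap, label: string, ripOffsetSec: number) {
  const region = map.regions.find(r => r.label === label);
  if (!region) return null;
  const secPerBeat = 60 / map.bpm;
  return {
    start: region.startBeat * secPerBeat + ripOffsetSec,
    end: region.endBeat * secPerBeat + ripOffsetSec,
  };
}
```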

Back in the days of Real-hacking that Kev alludes to, there were experiments with mixing multiple web-accessible MP3s on the fly. For example, I found a spoken word MP3 of a sermon and put it in parallel with an instrumental DJ track. Our jargon was “client side remix.” Anyhow I did do a few experiments with indexing into MP3 files using time ranges, so that you’d be plucking out just the chorus or guitar solo or whatever. The software I tried (Real and Quicktime) was too imprecise to make this work very well. But the technique was a lot of fun.
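For the record, here is roughly what a client side remix looks like if you do it with the Web Audio API instead of Real or Quicktime. A sketch only: the URLs are placeholders, the time points are arbitrary, and both servers would have to allow cross-origin fetches:

```typescript
// Sketch: play an excerpt of one MP3 on top of another, entirely in the browser.
// The URLs are placeholders; both servers must permit cross-origin requests.
const ctx = new AudioContext();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

async function clientSideRemix(): Promise<void> {
  const [voiceBuf, backingBuf] = await Promise.all([
    loadBuffer("https://example.com/sermon.mp3"),
    loadBuffer("https://example.com/instrumental.mp3"),
  ]);

  const voice = ctx.createBufferSource();
  voice.buffer = voiceBuf;
  voice.connect(ctx.destination);

  const backing = ctx.createBufferSource();
  backing.buffer = backingBuf;
  backing.connect(ctx.destination);

  // start(when, offset, duration): pluck out just a slice of each track.
  voice.start(0, 60, 30);   // 30 seconds of the spoken word, starting at 1:00
  backing.start(0, 0, 30);  // the instrumental from the top, in parallel
}
```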


Sorry, I realize that this post is absurdly full of jargon and shorthand. Back story:

Kev and I and some pals once did a bunch of hacks using SMIL and RAM playlists.

There is a new standard for linking into multimedia files, called “Media Fragments URI 1.0”, which is still in progress. (There’s a tiny illustration of its time syntax below.)

jwheare recently posted a vision about making music on the web more webby.
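Here is that Media Fragments time syntax, which boils down to #t=start,end in seconds. The file name below is a placeholder, and whether the end point is honored depends on the player:

```typescript
// Sketch: a temporal Media Fragments URI, start and end given in seconds.
// "song.mp3" is a placeholder; support depends on the player honoring #t=.
const chorus = "https://example.com/song.mp3#t=75,105"; // 1:15 to 1:45
const audio = new Audio(chorus);
audio.play();
```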

pay it don’t spray it

Greg Sandoval at CNet on Spotify:

But here’s the reality about the company: Spotify managers haven’t demonstrated that they know anything more about turning users into dollars than their American counterparts. Whether Spotify will make a splash here or whether it can even produce profits at home have yet to be determined.

I think that Spotify’s emphasis on ad-sponsored music is simply wrong and out of date. I understand why they’re doing it — because it’s a natural way for a subscription streaming product to acquire customers. But other companies already went down that path and lost their shirts, and Spotify hasn’t done anything to show that they aren’t subject to the same economics.

Contrast with MOG, which limits freebies to one measly hour. Now those are cheap bastards who are damn well determined to survive. My hat is off to them.

iphone operating environment on desktops

Can’t say I’m a Daring Fireball fan. Gruber’s a decent writer but the Apple loveslave beat isn’t my thing.

But this piece about the tablet rumors is more than that:

Do I think The Tablet is an e-reader? A video player? A web browser? A document viewer? It’s not a matter of or but rather and. I say it is all of these things. It’s a computer.

And so in answer to my central question, regarding why buy The Tablet if you already have an iPhone and a MacBook, my best guess is that ultimately, The Tablet is something you’ll buy instead of a MacBook.

I say they’re swinging big — redefining the experience of personal computing.

When Apple did the iPhone, they reconceived what the OS means. The concept of files no longer exists at the user level; it is submerged into the operating system, like a kernel API. Programs that you can install are strictly controlled by your operating system vendor. Some of these touches are radical but completely plausible for a desktop machine, like no keyboard, no external monitor, no phalanx of ports.

It’s natural to apply this to full-scale desktops; somebody is bound to do it. But doing it requires a device very similar to the iPhone. Why wouldn’t that be Apple?

Is it a *good* approach to desktops? We’re about to find out.