web programs end up better

Jeff Atwood said that any application that can be written in JavaScript will eventually be written in JavaScript: "Writing Photoshop, Word, or Excel in JavaScript makes zero engineering sense, but it's inevitable. It will happen. In fact, it's already happening. Just look around you. As a software developer, I am happiest writing software that gets used."

He’s wrong that it makes zero engineering sense.

Apps written on the browser stack – HTML, CSS, JavaScript – iterate faster than client-side apps. They start as weak approximations of their desktop counterparts but end up far better. They start behind but move faster.

For example, browser-based mail apps like Hotmail were originally much worse than client-side ones like Eudora. But huge advances have been made since, and client-side mail readers like Outlook are now clearly on the lagging edge compared to Ajax apps like Gmail. It's not just ancient rotting hulks like Outlook, either; even relatively recent client-side mailers like Mail.app aren't as good as the best web mail apps.

Any application that can be written on the web stack will eventually surpass the same application on the client-side stack.

Why do web apps iterate faster? My intuition is that it's about the size of the community of developers attacking each technical obstacle. On the web you aren't the only engineer blocked by Problem X; huge numbers of other engineers are stuck there too. All these engineers swarm the problem and knock it down. Near-instant deployment of new code (especially in a shop practicing continuous deployment) then allows each incremental solution to go live. Finally, the transparency of web code lets the innovation spread: engineers discover one another's solutions, read the source (which the browser always exposes), and copy them. This applies to virtually every layer of the web stack, so all of these innovations accumulate to the benefit of all web apps.

It is sometimes technically harder to write an application on internet standards than on client-side conventions. That creates the impression that the client-side approach is better engineering. But the client-side approach rarely accomplishes as much.

the notice-and-takedown graylist

If the owner of a copyright doesn't care to commercially exploit their work, they won't pay the ongoing expense of policing infringements by submitting takedown requests. The result is, in effect, a blacklist of works whose copyrights are valuable enough to cover the bill for that policing. The blacklist is self-maintaining – there doesn't need to be a central registry.

The graylist is the works that are not policed. Some are in the public domain or under a permissive license like one from Creative Commons. Some are in copyright but not being actively exploited, including orphan works whose owners can't be reached, as well as barely-exploitable works whose owners can't be bothered.

Anybody can find out which is which: post a given recording in a visible location and see whether you get a takedown request.
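The probe is mechanical enough to sketch in code. Here's a minimal TypeScript sketch of the idea, assuming a purely hypothetical hosting service with upload and status endpoints – the host, the endpoints, and the response shapes are all invented for illustration, since every real platform has its own upload mechanism and its own way of surfacing takedown notices:

```typescript
// A minimal sketch of the probe described above. HOST, its endpoints,
// and the response fields are hypothetical; they only make the logic
// concrete.

const HOST = "https://example-host.invalid"; // hypothetical service

async function probe(recording: Blob): Promise<"blacklisted" | "graylisted"> {
  // Step 1: post the recording in a visible location.
  const form = new FormData();
  form.append("file", recording, "recording.mp3");
  const upload = await fetch(`${HOST}/upload`, { method: "POST", body: form });
  const { id } = await upload.json();

  // Step 2: wait, checking daily whether a takedown request arrived.
  // (One long multi-week setTimeout would overflow the 32-bit timer cap.)
  for (let day = 0; day < 30; day++) {
    await new Promise((resolve) => setTimeout(resolve, 24 * 60 * 60 * 1000));
    const status = await (await fetch(`${HOST}/status/${id}`)).json();
    if (status.takenDown) return "blacklisted"; // the copyright is policed
  }

  // A month of silence suggests nobody is policing this work.
  return "graylisted";
}
```

A takedown request puts the work on the blacklist; sustained silence is the graylist announcing itself.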