Why the Web Isn’t Dead – A Few More Points

At the risk of beating a dead horse and becoming a bona fide member of the slow media, I want to make a few more points about that recent Wired cover story. Some of this may seem like semantics or nit-picking, but I think the details here are important for understanding what is and isn’t happening to the web. (My previous thoughts are here.)

What are the defining features of the web?

“It’s driven primarily by the rise of the iPhone model of mobile computing, and it’s a world Google can’t crawl, one where HTML doesn’t rule.”

Anderson keeps mentioning HTML, HTTP, and port 80 as the key features of the web. I don’t think that’s the case. Quite a lot of apps for iOS, Android and Adobe AIR are built using HTML and, presumably, access data using HTTP over port 80. Even apps that aren’t just glorified shortcuts to a company’s web site (TweetDeck, feed readers, and Instapaper are good examples of great apps that change how we consume web content) don’t seem far off from the typical web experience – they’re just custom browsers, still using the same old ports and protocols. (I could be wrong about those specific apps, but the tendency remains.) What’s really happening is that the browser is becoming invisible – it’s becoming the OS. Which is what web people have been saying would happen all along.
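
To make that concrete, here is a minimal sketch, in Python, of what such a “custom browser” boils down to. This is not the code of any actual app, and the URL is just a placeholder: it is simply a plain HTTP client on port 80 pulling down HTML and collecting the links it finds.

```python
# A minimal sketch, not the code of any actual app: a "custom browser" style
# client fetching a page over plain HTTP (default port 80) and collecting the
# links it finds, the same plumbing an ordinary web browser uses.
# The URL below is a placeholder.
import urllib.request
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collects the href value of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def fetch_links(url="http://example.com/"):
    # Plain HTTP over port 80, the same old ports and protocols.
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


if __name__ == "__main__":
    for link in fetch_links():
        print(link)
```

The point is not the code itself but how unremarkable it is: nothing here differs from what a desktop browser does; the app just hides the chrome.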

But what, at its core, is “the web”? To me it’s about hypertext – the ability to link and be linked. Interconnectedness. So apps can either be walled gardens – with no way to link or be linked to – or they can incorporate links. If it’s the former, then they’re no longer part of the web. If it’s the latter – isn’t it still the web?

It might not be this way forever, but the New York Times iOS app (at least as it runs on my iPod Touch) has outbound links (which open within the NYT app) and the ability to e-mail, text, tweet, or copy permalinks to the stories you read in the app (though if you open those links from your e-mail on an iOS device, they open in Safari rather than in the NYT app). So even if it’s not using HTML, HTTP and port 80 (and I’m pretty sure it actually is), it’s still providing a rich hypertext experience. It’s still, all in all, the web.

Facebook is searchable by Google

“It’s a world Google can’t crawl, one where HTML doesn’t rule.”

It should also be noted that Facebook is searchable by Google now. So are Twitter, Tumblr, and most other big-name social media sites. Mobile and desktop apps aren’t, but again – most of those apps are still pulling content from, or pushing content to, the open web, where it’s being crawled by Google. Facebook has been pushing to make profiles public specifically to court more search engine traffic. Certainly Facebook generates a lot of data that it holds onto itself – all the data going into its Open Graph project. That’s how it generates value. But it’s still an ad-supported system that depends on getting targeted traffic – and search seems to be a part of its strategy.

It may also be worth noting that The New York Times shut down its previous walled garden experiment in order to get more search traffic. The current semi-permeable wall idea is designed in part to encourage search traffic and link sharing.

Of course others, like the Wall Street Journal and the Financial Times, both of which have had semi-permeable pay-walls, are going in the opposite direction. So it remains to be seen which model will win. It seems likely that pay-walls will work for some content but not for others. It’s hard to imagine the Wired article in question getting so much traction and generating so much debate in a world of walled-off, stand-alone apps with no links.

The link is still the currency of social media

“Facebook became a parallel world to the Web, an experience that was vastly different and arguably more fulfilling and compelling and that consumed the time previously spent idly drifting from site to site.”

I’m not sure this is entirely true either. What exactly do people do on Facebook? A lot of different stuff, but one of those things is sharing links. The same is true on other social media sites. “Giving good link,” as Jay Rosen calls it, is still the best way to be popular on Twitter. Links – whether to articles, videos, or whatever – are still what generate activity on social media sites. True, you can do more and more within Facebook without ever having to refer out to any external content, but it’s hard to imagine the value of the link diminishing enough for it to vanish from the social media ecosystem altogether any time soon.

The social graph only goes so far

Social media is a key way to find new links, but it’s not the only way and isn’t always the best way. Some of the “search is dead” articles that have been floating around about Google lately seem to assume that you can replace search with your “social graph.” You just ask your friends “Hey, where’s a good place to get a smoothie around here?” or “What kind of cell phone should I buy?” and you get your answer.

But that just isn’t the reality of the situation. I recently bought a Samsung Vibrant. If I’d been depending on my “social graph” I’d never have bought it, since no one I knew had one. I had to depend on search engines to find reviews. I did an experiment the other day – I asked if anyone had an ASUS UL30A-X5 or knew someone who had one. This laptop wasn’t as new a product as the Vibrant. Also, it was part of the line of laptops Engadget called laptop of the year in 2009. So it seemed plausible that in my network of Twitter followers and Facebook friends (over 1,000 people combined), including lots and lots of geeks and tech-savvy people, SOMEONE would either have one or know someone who did. But no one did. Or if they did, they didn’t say anything.

And consumer electronics are a relatively un-obscure interest of mine. If I’d asked my social graph if they knew of any essays comparing Giotto’s Allegories of the Vices and the Virtues to the tarot, would anyone have been able to point me towards this essay? Maybe, but sometimes it’s easier to just fucking Google it.

Don’t get me wrong, I get a lot of answers through my friends via social media. But it’s not a replacement for Google. (And while there might not be much room for Google to grow its search business, it’s far from irrelevant.)

How open has the Internet ever been?

First it was getting listed by Yahoo!, then it was getting a good ranking in Google, now it’s getting into the Apple App Store. In each case, the platform owner benefited more than the person trying to get listed. This is not new. That certain sites – like Facebook and YouTube – have become large platforms is certainly interesting. That Apple, Facebook and Google have a disproportionate say over what gets seen on the Internet is problematic, definitely. But there was never any golden age when the Net was truly open. The physical infrastructure is owned by giant corporations, and ICANN is loosely controlled by the US government. And the biggest threat to openness on the Internet is an international agreement that has nothing to do with the shift to apps.

Furthermore, even the App Store is open in a certain sense. It’s important to remember that Apple didn’t invent the app store – or even the mobile app store. They’ve been around for quite a while. I had a plain non-smart phone on Verizon that had access to an app store, and very few developers could develop apps for that old store. Part of what made Apple’s app store successful, though, is that anyone could buy the SDK and submit apps to it. You didn’t have to be invited, and the cost wasn’t prohibitive. In that sense, the app store is extremely “open.”

What do we need to do to ensure the app and post-app ecologies are “open”?

Even if we are going to see the end of the Open Web, replaced instead by an app economy or, later, an object ecosystem, we don’t need to have a closed Internet. Here are some of the keys to an open future (a rough code sketch of what a couple of these look like in practice follows the list):

-Open Data
-Open APIs
-Data Portability
-Net Neutrality
-Disclosure of data collection and usage
-Open-source apps and objects
-An independent Internet
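
For a rough sense of what “Open APIs” and “Data Portability” can mean in practice, here is a hedged sketch of a user exporting their own posts through a service’s JSON API so the data can be archived or moved elsewhere. The endpoint, token, and response layout are hypothetical, invented purely for illustration.

```python
# A rough, hypothetical sketch of data portability: exporting your own posts
# from a service through an open JSON API so they can be archived or moved
# elsewhere. The endpoint, token, and response layout are invented for
# illustration; real services each have their own export mechanisms.
import json
import urllib.request

EXPORT_URL = "https://api.example-service.test/v1/users/me/posts"  # placeholder


def export_my_posts(api_token, outfile="my_posts_backup.json"):
    request = urllib.request.Request(
        EXPORT_URL,
        headers={"Authorization": "Bearer " + api_token},
    )
    with urllib.request.urlopen(request) as response:
        # Assume the (hypothetical) API returns a JSON list of post objects.
        posts = json.loads(response.read().decode("utf-8"))
    # Keep a local copy so the data isn't locked inside one service.
    with open(outfile, "w", encoding="utf-8") as f:
        json.dump(posts, f, indent=2)
    return len(posts)


if __name__ == "__main__":
    print(export_my_posts(api_token="YOUR_TOKEN_HERE"), "posts exported")
```

Whether a given service actually exposes an endpoint like this is exactly what’s at stake; the sketch just shows how little machinery that kind of openness requires.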

Comments

  1. Your implication above that the web is synonymous with hypertext is something that some people would vehemently disagree with. I would make the same argument, more or less: the web began as a hack demo — a dumbed-down version of a more complex project Tim Berners-Lee had worked on that lacked some of the web’s flaws, and was itself a dumbed-down version of the various and sundry hypertext projects floating around in the late eighties and early nineties — and it never really progressed past that level of development. It grew, but it never really matured or solved any of its inherent design flaws.

    While I agree that the arguments for the death of the web in the Wired article are total bunk, I don’t really agree with the idea that the web won’t die any time soon, nor do I think it would necessarily be a bad thing for the web to be overtaken and replaced with something that lacks things like broken links, hierarchical naming structures, and entirely hidden revision histories. Even wikis have taken many of the ideas common amongst pre-web hypertext implementations and made them commonly available, and it has been demonstrated that they are feasible in more distributed systems.

    What the supposed move towards non-browser internet-using applications suggested by Wired means for people like me is that there is the opportunity for a move away from the web and towards something that works much better. Whether the move is or is not legitimate is immaterial — the article has planted the idea in people’s minds, and that gives us the opportunity to exploit it and make it manifest.

  2. Klint Finley

    August 24, 2010 at 1:49 pm

    Not all hypertext is the Web, but the Web is all about hypertext – and as long as I can link to the content I’m reading, and follow links from it, then it’s still the Web.

    I’m familiar with Ted Nelson – a visionary to be sure (ComputerLib is great), but Xanadu is a decades-old piece of vaporware and the Web is real.

    I’m also not convinced that broken links or hidden revision histories are a problem. Some things certainly benefit from having consistent links and transparent revision processes – and those things can use wikis or other platforms to accomplish that. That’s the beauty of dumb systems.

    I’m not saying the web can or should never be supplanted by something better – maybe in a few years blogging will seem as antiquated as maintaining Gopher archives. (I hear that kids these days aren’t big fans of blogs – but OTOH, even during the height of newspapers’ dominance kids probably didn’t read those until they “grew up” so that might not be indicative of the medium’s future.)

  3. Klint Finley

    September 14, 2010 at 3:22 pm

    It may seem like beating a, uh, dead horse at this point, but:
