The death of robots.txt?

November 9, 2009 9:35 pm

Last night I linked to an interview with Rupert Murdoch in which he says that News Corp will probably de-index their sites from Google.

I figured it was all bluster. Search engine traffic is more valuable than Murdoch suggests, and there are probably plenty of people in high places at News Corp who know it.

But Cory Doctorow suggests:

So here’s what I think is going on. Murdoch has no intention of shutting down search-engine traffic to his sites, but he’s still having lurid fantasies inspired by the momentary insanity that caused Google to pay him for the exclusive right to index MySpace (thus momentarily rendering MySpace a visionary business-move instead of a ten-minutes-behind-the-curve cash-dump).

So what he’s hoping is that a second-tier search engine like Bing or Ask (or, better yet, some search tool you’ve never heard of that just got $50MM in venture capital) will give him half a year’s operating budget in exchange for a competitive advantage over Google.

Jason Calacanis has suggested this approach as a means to “kill Google.”

But it may actually be neither the death of Google, nor the death of News Corp if they are so foolish as to carry out this plan. It could be the death of the robots exclusion standard. I would guess News Corp would use robots.txt to de-index their sites. But it’s a “purely advisory” protocol that Google is under no obligation to honor. They could continue indexing News Corp if they so choose. So could every other search engine, big or small. And I’d guess they would if big content providers started going exclusive with search engines.
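To make the “purely advisory” point concrete, here’s a minimal sketch (the domain and bot names are illustrative, not News Corp’s actual setup) of the kind of robots.txt rules a publisher might use to de-index itself from Google, checked with Python’s standard-library robots.txt parser. Note that it’s the crawler that asks whether it may fetch a page; nothing in the protocol enforces the answer.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks Google's crawler
# while leaving every other crawler unrestricted.
robots_txt = """\
User-agent: Googlebot
Disallow: /

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A *compliant* crawler consults the rules before fetching --
# but honoring them is entirely the crawler's choice.
print(parser.can_fetch("Googlebot", "https://example.com/story"))      # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/story"))   # True
```

The whole enforcement mechanism is that last step: the crawler voluntarily calls the check and obeys the result. A search engine that decided not to honor the standard would simply skip it.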

If News Corp puts all its content behind a pay wall, this point is moot – Google and other search engines won’t be able to index it, and robots.txt will be fine. But it’s something to think about.

(Hat tips to Jay Rosen for the TimesSelect link and Chris Arkenberg for the Jason Calacanis video)

2 Comments

  • In general, I find that search engines are not very adept at finding things that I am most interested in. If I’m looking for a product review, being able to Google the make + review is helpful, but if I’m trying to find more literary/philosophic information Google increasingly just comes up with a bunch of academic journal articles for sale from about 12 different sources (same article).

    Could we end up bouncing back into the era where the portal and the human edited link stream is a lot more useful than a search engine could hope to be as a result of a flooding of the market with repetitions of the same data and purchasable content?

  • Klint Finley

    I turn to other sources for certain searches – like Amazon, Yelp, and Wikipedia. Google has also been employing human editors to review SERPs. So yes, it seems we may be at the limit of non-human search results.

    Before Google got big I had taken to searching DMOZ before search engines, much the way I often check Wikipedia on a subject and look through its references and external link sections.

    All that said, I still use Google a LOT.
