Tag: search engines

Search Engine Algorithms Used to Analyze Brain Activity

eigenvector centrality

In this case, the fact that the brain – just like the internet – is a network with “small world properties” helps. Every pixel in the brain and every Internet page can be seen as a hub in this network. The hubs can be directly connected to each other just as two Internet pages can be linked.

With eigenvector centrality, the hubs are assessed based on the type and quality of their connections to other hubs. On the one hand, it is important how many connections a particular node has, and on the other, the connections of the neighbouring nodes are also significant. Search engines like Google use this principle, meaning that Internet sites linked to frequently visited sites, like Wikipedia, for example, appear higher in results than web pages which don’t have good connections.

“The advantages of analyzing fMRI results with eigenvector centrality are obvious,” says Gabriele Lohmann from the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig. The method views the connections of the brain regions collectively and is computationally efficient. Therefore, it is ideal for detecting brain activity reflecting the states that subjects are in.

PhysOrg: The network in our heads: What our brains have in common with the internet

(via Social Physicist)
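
For the curious, the PageRank-style idea in the quoted passage is easy to see in a few lines of code. Here is a minimal sketch of eigenvector centrality computed by power iteration on an invented four-node network (the nodes could just as well be fMRI voxels or web pages); it illustrates the general principle only, not the actual analysis pipeline the Leipzig group uses.

```python
# Minimal sketch: eigenvector centrality via power iteration.
# The adjacency matrix is invented for illustration; nodes could stand
# for fMRI voxels or for web pages.
import numpy as np

# adjacency[i, j] = 1 means node i and node j are connected
adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

centrality = np.ones(adjacency.shape[0])      # start with equal scores
for _ in range(100):                          # power iteration
    centrality = adjacency @ centrality       # each node inherits its neighbours' scores
    centrality /= np.linalg.norm(centrality)  # renormalise to keep numbers bounded

print(np.round(centrality, 3))
# Node 1 scores highest: it has the most connections, and its neighbours
# are themselves well connected.
```

That is the point of the quoted passage: a node’s score depends not only on how many connections it has, but on how well connected its neighbours are.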

Freenet, darknets, and the “deep web”

Installing the software takes barely a couple of minutes and requires minimal computer skills. You find the Freenet website, read a few terse instructions, and answer a few questions (“How much security do you need?” … “NORMAL: I live in a relatively free country” or “MAXIMUM: I intend to access information that could get me arrested, imprisoned, or worse”). Then you enter a previously hidden online world. In utilitarian type and bald capsule descriptions, an official Freenet index lists the hundreds of “freesites” available: “Iran News”, “Horny Kate”, “The Terrorist’s Handbook: A practical guide to explosives and other things of interests to terrorists”, “How To Spot A Pedophile [sic]”, “Freenet Warez Portal: The source for pirate copies of books, games, movies, music, software, TV series and more”, “Arson Around With Auntie: A how-to guide on arson attacks for animal rights activists”. There is material written in Russian, Spanish, Dutch, Polish and Italian. There is English-language material from America and Thailand, from Argentina and Japan. There are disconcerting blogs (“Welcome to my first Freenet site. I’m not here because of kiddie porn … [but] I might post some images of naked women”) and legally dubious political revelations. There is all the teeming life of the everyday internet, but rendered a little stranger and more intense. One of the Freenet bloggers sums up the difference: “If you’re reading this now, then you’re on the darkweb.”

Guardian: The dark side of the internet

(via Atom Jack)

I haven’t looked at Freenet in years, but it’s certainly relevant to the discussion here about darknets.

The death of robots.txt?

Last night I linked to an interview with Rupert Murdoch in which he says that News Corp will probably de-index their sites from Google.

I figured it was all bluster. Search engine traffic is more valuable than Murdoch suggests, and there are probably plenty of people in high places at News Corp who know it.

But Cory Doctorow suggests:

So here’s what I think is going on. Murdoch has no intention of shutting down search-engine traffic to his sites, but he’s still having lurid fantasies inspired by the momentary insanity that caused Google to pay him for the exclusive right to index MySpace (thus momentarily rendering MySpace a visionary business-move instead of a ten-minutes-behind-the-curve cash-dump).

So what he’s hoping is that a second-tier search engine like Bing or Ask (or, better yet, some search tool you’ve never heard of that just got $50MM in venture capital) will give him half a year’s operating budget in exchange for a competitive advantage over Google.

Jason Calacanis has suggested this approach as a means to “kill Google.”

But it may actually be neither the death of Google nor the death of News Corp if they are so foolish as to carry out this plan. It could be the death of the robots exclusion standard. I would guess News Corp would use robots.txt to de-index their sites. But it’s a “purely advisory” protocol that Google is under no obligation to honor. Google could continue indexing News Corp’s sites if it so chose. So could every other search engine, big or small. And I’d guess they would if big content providers started going exclusive with search engines.

If News Corp puts all its content behind a paywall, this point is moot – Google and other search engines won’t be able to index it, and robots.txt will be fine. But it’s something to think about.
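
To make the “purely advisory” point concrete, here is a rough sketch of how a well-behaved crawler consults robots.txt, using Python’s standard urllib.robotparser. The rules and URL below are hypothetical, not anything News Corp actually publishes, and nothing in the protocol forces a crawler to run this check at all.

```python
# Hypothetical robots.txt rules that single out Googlebot; compliance is
# entirely voluntary on the crawler's side.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: Googlebot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

parser = RobotFileParser()
parser.parse(rules)

# A polite Googlebot would skip the site; any other (or impolite) crawler
# is free to fetch it.
print(parser.can_fetch("Googlebot", "http://example.com/story.html"))     # False
print(parser.can_fetch("SomeOtherBot", "http://example.com/story.html"))  # True
```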

(Hat tips to Jay Rosen for the TimesSelect link and Chris Arkenberg for the Jason Calacanis video)

Murdoch: We’ll probably remove our sites from Google’s index

Rupert Murdoch has suggested that News Corporation is likely to make its content unfindable to users on Google when it launches its paid content strategy.

When Murdoch and other senior News Corp lieutenants have criticised aggregators such as Google for taking a free ride on its content, commentators have questioned why the company doesn’t simply make its content invisible to search engines.

Using the robots.txt protocol on a site indicates to automated web spiders such as Google’s not to index that particular page or to serve up links to it in users’ search results.

Murdoch claimed that readers who randomly reach a page via search have little value to advertisers. Asked by Sky News political editor David Speers why News hasn’t therefore made its sites invisible to Google, Murdoch replied: “I think we will.”

Mumbrella: Murdoch: We’ll probably remove our sites from Google’s index

(via Jay Rosen)

I’d be quite happy to see News Corp shoot itself in the foot, but I have the feeling people who actually know what they are talking about will stop this from happening.

A new type of search engine, from the creator of Mathematica

Type in a query for a statistic, a profile of a country or company, the average airspeed of a sparrow – and instead of a series of results that may or may not provide the answer you’re looking for, you get a mini dossier on the subject compiled in real time that, ideally, nails the exact thing you want to know. It’s like having a squad of Cambridge mathematicians and CIA analysts inside your browser. […]

Consider a question like “How many Nobel Prize winners were born under a full moon?” Google would find the answer only if someone had previously gone through the whole list, matched the birthplace of each laureate with a table of lunar phases, and posted the results. Wolfram says his engine would have no problem doing this on the fly. “Alpha makes it easy for the typical person to answer anything quantitatively,” he asserts.

Wired: Stephen Wolfram Reveals Radical New Formula for Web Search
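
As a toy illustration of the on-the-fly cross-referencing Wolfram describes (joining one dataset, birth dates, against a computed quantity, lunar phase, instead of looking up a precomputed answer), here is a short Python sketch. The birth dates are placeholders and the lunar-phase formula is a crude mean-synodic-month approximation, so this is nothing like Alpha’s actual curated-data machinery.

```python
# Toy cross-referencing: which people in a small dataset were born within
# a day of a full moon? Dates are made up; the phase calculation counts
# mean synodic months from the new moon of 2000-01-06, which is only a
# rough approximation.
from datetime import date

SYNODIC_MONTH = 29.530588853          # mean lunar cycle length, in days
REFERENCE_NEW_MOON = date(2000, 1, 6)

def near_full_moon(d, tolerance_days=1.0):
    """True if date d falls within tolerance_days of an (approximate) full moon."""
    days_since = (d - REFERENCE_NEW_MOON).days
    phase = days_since % SYNODIC_MONTH            # 0 = new moon, ~14.77 = full
    return abs(phase - SYNODIC_MONTH / 2) <= tolerance_days

# Placeholder birth dates, not real laureate data.
people = {
    "Person A": date(1918, 5, 11),
    "Person B": date(1921, 6, 28),
    "Person C": date(1947, 3, 2),
}

print([name for name, born in people.items() if near_full_moon(born)])
```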
