Tag: web

Transgender Activists Fight Back Against Web Filters

Transgender coders at TransHack

Another one from me at Wired today:

For the transgender community, the web is an important resource for finding trans-friendly doctors, housing, jobs and public restrooms: many things the rest of us take for granted. But web filtering software designed to prevent access to pornography often stops people from accessing websites with information on a host of other topics, such as breastfeeding, safe sex and, yes, transgender issues. It’s a subtle, and possibly unintentional, form of discrimination, but one that can have a big impact. For the many transgender people who rely on public libraries and internet cafes to get online, web filters are more than a temporary inconvenience. The problem is even worse in the UK, where all new internet connections are filtered by default at the ISP level.

“Because homelessness and poverty are such big issues in the trans community, many don’t have access to unfiltered, uncensored internet,” says Lauren Voswinkel, a transgender software developer based in Pittsburgh. These hurdles to accessing information can make it even harder for transgender people to escape poverty.

That’s why she’s building Transgress, a tool that lets people bypass web filters to access sites about transgender issues and only transgender issues.

Full Story: Wired: How to Build a Kinder Web for the Transgender Community

Creepypasta: Campfire Ghost Stories for the Internet Age

Will Wiles on the world of creepypasta, a genre of storytelling that is fast becoming the folklore of the internet:

Again, none of these games or shows is real, but stories about them exist in truly bewildering numbers. I had unwittingly stumbled into the world of ‘creepypasta’, a widely distributed and leaderless effort to make and share scary stories; in effect, a folk literature of the web. ‘[S]ometimes,’ wrote the American author H P Lovecraft in his essay ‘Supernatural Horror in Literature’ (1927), ‘a curious streak of fancy invades an obscure corner of the very hardest head, so that no amount of rationalisation, reform, or Freudian analysis can quite annul the thrill of the chimney-corner whisper or the lonely wood.’ These days, instead of the campfire, we are gathered around the flickering light of our computer monitors, and such is the internet’s hunger for creepy stories that the stock of ‘authentic’ urban legends was exhausted long ago; now they must be manufactured, in bulk. The uncanny has been crowdsourced.

The word ‘creepypasta’ derives from ‘copypasta’, a generic term for any short piece of writing, image or video clip that is widely copy-and-pasted across forums and message boards. In its sinister variant, it flourishes on sites such as 4chan.org and Reddit, and specialised venues such as creepypasta.com and the Creepypasta Wiki (creepypasta.wikia.com), which at the time of writing has nearly 15,000 entries (these sites are all to be avoided at work). Creepypasta resembles rumour: generally it is repeated without acknowledgement of the original creator, and is cumulatively modified by many hands, existing in many versions. Even its creators might claim they heard it from someone else or found it on another site, obscuring their authorship to aid the suspension of disbelief. In the internet’s labyrinth of dead links, unattributed reproduction and misattribution lends itself well to horror: creepypasta has an eerie air of having arisen from nowhere.

Full Story: Aeon Magazine: Creepypasta is how the internet learns our fears

(Thanks Adam!)

Examples:

Polybius

Slenderman

Not safe for work: The Creepypasta Wiki

Even less safe for work: Encyclopedia Dramatica’s list of the best creepypasta

Douglas Rushkoff: Abandon the Corporate Internet


Of course the Internet was never truly free, bottom-up, decentralized, or chaotic. Yes, it may have been designed with many nodes and redundancies for it to withstand a nuclear attack, but it has always been absolutely controlled by central authorities. From its Domain Name Servers to its IP addresses, the Internet depends on highly centralized mechanisms to send our packets from one place to another.

The ease with which a Senator can make a phone call to have a website such as Wikileaks yanked from the net mirrors the ease with which an entire top-level domain, like say .ir, can be excised. And no, even if some smart people jot down the numeric ip addresses of the websites they want to see before the names are yanked, offending addresses can still be blocked by any number of cooperating government and corporate trunks, relays, and ISPs. That’s why ministers in China finally concluded (in cables released by Wikileaks, no less) that the Internet was “no threat.” […]

Back in 1984, long before the Internet even existed, many of us who wanted to network with our computers used something called FidoNet. It was a super simple way of having a network – albeit an asynchronous one.

One kid (I assume they were all kids like me, but I’m sure there were real adults doing this, too) would let his computer be used as a “server.” This just meant his parents let him have his own phone line for the modem. The rest of us would call in from our computers (one at a time, of course) upload the stuff we wanted to share and download any email that had arrived for us. Once or twice a night, the server would call some other servers in the network and see if any email had arrived for anyone with an account on his machine. Super simple.

Shareable: The Next Net

(via Disinfo)
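That store-and-forward pattern is simple enough to sketch in a few lines of Python. This is a toy model under my own assumptions, not FidoNet’s actual protocol (which had its own packet formats and dial-up transfer sessions):

```python
from collections import defaultdict

class Node:
    """A FidoNet-style node: holds mail for local accounts and
    queues outbound mail until the next scheduled exchange."""

    def __init__(self, name, accounts):
        self.name = name
        self.accounts = set(accounts)
        self.inbox = defaultdict(list)  # local account -> messages
        self.outbound = []              # (recipient, message) bound for other nodes

    def submit(self, recipient, message):
        """Called when a user dials in and uploads a message."""
        if recipient in self.accounts:
            self.inbox[recipient].append(message)
        else:
            self.outbound.append((recipient, message))

    def exchange(self, other):
        """The ‘once or twice a night’ call to a peer node: hand over
        anything addressed to accounts that live there."""
        still_queued = []
        for recipient, message in self.outbound:
            if recipient in other.accounts:
                other.inbox[recipient].append(message)
            else:
                still_queued.append((recipient, message))
        self.outbound = still_queued

# Two kids' machines, each with its own phone line
a = Node("a", accounts={"klint"})
b = Node("b", accounts={"doug"})

a.submit("doug", "saw your post on the BBS")  # queued on node a
a.exchange(b)                                 # the nightly call delivers it
print(b.inbox["doug"])                        # ['saw your post on the BBS']
```

Asynchronous, no central authority, and no single point anyone can shut off with a phone call.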

I’ve covered how CouchDB can create a more distributed web. Also, Openet is working on creating a mesh network of mesh networks. Bitcoin and Freenet are worth looking at as well.

DARPA’s working on wireless mesh networks as we speak.

My Predictions for 2011 at ReadWriteWeb

Here are my predictions for 2011:

Predictive analytics will be applied to more business processes, regardless of whether it helps.

The U.S. will add new provisions to the Anti-Counterfeiting Trade Agreement to include leaked classified information.

Despite this and other measures taken by governments and corporations, leaking will continue.

Cybersecurity hype of 2011 will dwarf that of 2010.

We’ll see more CouchApp clients for popular web services.

Almost all the big social enterprise players will have some sort of “app store” offering.

Adobe will try to acquire Joyent.

ReadWriteWeb: 2011 Predictions: Klint Finley

Explanations for each are available at the link.

More predictions from other RWW staffers are available at ReadWriteWeb as well.

More on Decentralizing the Web: My Interview with Unhosted’s Michiel de Jong


Following up on my ReadWriteWeb interview with CouchOne‘s J Chris Anderson, I’ve interviewed Unhosted‘s Michiel de Jong.

de Jong takes Richard Stallman’s critiques of cloud computing seriously. But, he says, “People want to use websites instead of desktop apps. Why do they want that? I don’t think it’s up to us developers to tell users what to want. We should try to understand what they want, and give it to them.”

de Jong acknowledges the many advantages of running applications in the cloud: you can access your applications and data from any computer without installing software or transferring files, you can get at your files from multiple devices without syncing, and web applications have better cross-platform support.

So how can you give users web applications while keeping them in control of their data?

The basic idea is this: an Unhosted app lives on a web server and contains only source code. That code runs on the user’s computer, encrypting the user’s data and storing it on a separate server. The data never passes through the app server, so the app provider doesn’t have a monopoly on it. And since the data is encrypted, it can’t be exploited by the data host either (or at least, it probably can’t).

The data can be hosted anywhere. “It could be in your house, it could be at your ISP or it could be at your university or workplace,” says de Jong.

“We had some hurdles to implement this, one being that the app cannot remember where your data lives, because the app only consists of source code,” he says. “Also your computer can’t remember it for you, because presumably you’re logging on to a computer you never used before.”

The Unhosted team solved the problem by putting the data location into usernames. Unhosted usernames look a lot like e-mail addresses, for example: willy@server.org. Willy is the username; server.org is the location where the data is stored.
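Here is a minimal sketch of that flow in Python, with hypothetical names (Unhosted itself is JavaScript, and this is not its actual API): split the username to find the data host, encrypt client-side with a key the user keeps, and ship only ciphertext to the storage server.

```python
from cryptography.fernet import Fernet  # pip install cryptography

def parse_unhosted_user(user_id):
    """Split a willy@server.org-style ID into account name and data host."""
    name, _, data_host = user_id.partition("@")
    return name, data_host

# The key is generated and kept client-side; neither the app server
# nor the data host ever sees it.
key = Fernet.generate_key()
box = Fernet(key)

name, data_host = parse_unhosted_user("willy@server.org")

plaintext = b"my private notes"
ciphertext = box.encrypt(plaintext)

# Hypothetical storage call: only ciphertext leaves the user's machine,
# and it goes to data_host, never to the server hosting the app's code.
# put(f"https://{data_host}/store/{name}/notes", ciphertext)

assert box.decrypt(ciphertext) == plaintext
print(name, data_host, len(ciphertext))
```

The point of the username scheme is visible in `parse_unhosted_user`: the app never has to remember where your data lives, because you carry that information with you.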

ReadWriteWeb: Unhosted: Breaking the SaaS Monopoly

The New York Times metered access plan

(Photo of the New York Times building by Alex Torrenegra)

Summary of my view: It’s a great idea, but executing it properly will be extremely difficult.

If you didn’t hear: The New York Times is going to “meter” access to its site. Readers will be able to view a certain number of articles per month for free, after which they’ll have to pay.

I didn’t even know about the Financial Times meter until last week, when I first read rumors that the NYT would take the same approach. I occasionally read articles at FT, and have occasionally linked to articles there. Their meter gives me no trouble.

That unobtrusiveness may come at a price. It took me only one Google search to find a way to circumvent their meter: this Greasemonkey script. Apparently, they just use cookies to track the number of articles you’ve viewed. I’m not sure how many of FT’s paying customers will go to the trouble of installing Firefox extensions or manually deleting cookies to get access to the site, but I’d guess it would be more of a problem for the NYT’s larger and more general audience.
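To see why a purely cookie-based meter is so easy to route around, here’s a minimal sketch of one in Python (hypothetical, not the FT’s or the Times’s actual implementation): the entire meter state lives in a cookie the reader controls.

```python
from flask import Flask, request, make_response  # pip install flask

app = Flask(__name__)
FREE_ARTICLES = 20  # roughly the monthly allowance discussed in this post

@app.route("/article/<slug>")
def article(slug):
    # The entire meter state is a client-side cookie...
    count = int(request.cookies.get("articles_read", 0))
    if count >= FREE_ARTICLES:
        return "Please subscribe to keep reading.", 402
    resp = make_response(f"Full text of {slug}")
    # ...so deleting the cookie (or never sending it back) resets the meter.
    resp.set_cookie("articles_read", str(count + 1))
    return resp

if __name__ == "__main__":
    app.run()  # try: curl -c jar -b jar http://localhost:5000/article/test
```

Clear the cookie and the count starts over; that’s essentially all the Greasemonkey script has to do.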

So that’s where execution gets tricky. Start making the meter more effective, less easy to route around, and you’re likely to end up making it a lot more intrusive to casual readers. There’s already something of a blogger backlash against the plan, and if the meter ends up being cumbersome, the Times could find their casual readership dropping off (and their advertising revenues declining).

And that’s to say nothing of people outright pirating their articles through copy and paste. If they start trying to implement measures to keep people from copying and pasting the text of their articles, they risk alienating their customers even more.

So yes, it will be tricky to pull off. With a sufficiently generous meter (20 articles a month seems reasonable), affordable access rates (it’d be great if they also offered some metered plans, say 50 articles a month for $5, instead of requiring everyone to buy unlimited access), unobtrusive technology, and, of course, high quality content, they could have a winning business model on their hands. (I’d also encourage them to offer free unlimited access to libraries, schools, charities and the like, as well as to visitors from developing nations.) But it will be a hard balance to strike, especially if NYT bigwigs push for tight security and restrictions.

See also: paidContent has a good look at the ins and outs of it.

A new type of search engine, from the creator of Mathematica

Type in a query for a statistic, a profile of a country or company, the average airspeed of a sparrow, and instead of a series of results that may or may not provide the answer you’re looking for, you get a mini dossier on the subject compiled in real time that, ideally, nails the exact thing you want to know. It’s like having a squad of Cambridge mathematicians and CIA analysts inside your browser. […]

Consider a question like “How many Nobel Prize winners were born under a full moon?” Google would find the answer only if someone had previously gone through the whole list, matched the birthplace of each laureate with a table of lunar phases, and posted the results. Wolfram says his engine would have no problem doing this on the fly. “Alpha makes it easy for the typical person to answer anything quantitatively,” he asserts.

Wired: Stephen Wolfram Reveals Radical New Formula for Web Search
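The full-moon example gives a feel for what computing answers on the fly means: the hard part is the curated data, and the rest is arithmetic. A rough Python sketch, with stated assumptions: the laureate list below is made up, and the lunar phase comes from a mean-synodic-month approximation rather than a proper ephemeris.

```python
from datetime import date

SYNODIC_MONTH = 29.530588          # mean length of a lunar cycle, in days
KNOWN_NEW_MOON = date(2000, 1, 6)  # a reference new moon

def moon_age(d):
    """Approximate days since the most recent new moon on date d."""
    return (d - KNOWN_NEW_MOON).days % SYNODIC_MONTH

def born_under_full_moon(d, tolerance=1.5):
    """Full moon falls near the middle of the cycle (~14.77 days in)."""
    return abs(moon_age(d) - SYNODIC_MONTH / 2) <= tolerance

# Hypothetical input: (laureate, birth date) pairs pulled from a curated table
laureates = [("A. Example", date(1901, 3, 5)),
             ("B. Example", date(1910, 7, 21))]

print(sum(born_under_full_moon(b) for _, b in laureates))
```

Google can only find this answer if someone has already computed and posted it; the computation itself is trivial once the birth-date table exists.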

Tor-Anonymized Content Now Available to Everyone

Aaron Swartz, one of the founders of Reddit, and Virgil Griffith, creator of WikiScanner, have teamed up to provide users with a new service that gives them access to anonymized content posted through the Tor network.

Although users have always been able to publish content anonymously on Tor, that content has been available only to people who download the Tor software. Swartz wanted to free up the content to make it available to anyone. The result is tor2web, which is essentially a kind of Google for the hidden “underweb.”

Tor is a privacy tool designed to prevent tracking of where a web user surfs on the internet and with whom a user communicates. It’s endorsed by the Electronic Frontier Foundation and other civil liberties groups as a method for whistleblowers and human-rights workers to communicate with journalists, among other uses.

Full Story: Threat Level
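Mechanically, tor2web is URL rewriting plus a proxy that speaks Tor on your behalf: a hidden-service hostname maps onto a gateway domain. A rough sketch in Python (the gateway domain here is illustrative):

```python
from urllib.parse import urlparse, urlunparse

def to_tor2web(onion_url, gateway="tor2web.org"):
    """Rewrite a .onion URL into one an ordinary browser can fetch;
    the tor2web proxy at the gateway relays the request over Tor."""
    parts = urlparse(onion_url)
    host = parts.hostname or ""
    if not host.endswith(".onion"):
        raise ValueError("not a hidden-service address")
    onion_name = host[: -len(".onion")]
    return urlunparse(parts._replace(netloc=f"{onion_name}.{gateway}"))

print(to_tor2web("http://abcdefghijklmnop.onion/page"))
# http://abcdefghijklmnop.tor2web.org/page
```

Note the trade-off: the publisher stays anonymous behind the hidden service, but a reader going through the gateway doesn’t get Tor’s anonymity for themselves.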
