Tag: artificial intelligence

Computer Chips That Work Like a Brain Are Coming — Just Not Yet

I wrote for Wired about computer chips designed specifically for building neural networks:

Qualcomm is now preparing a line of computer chips that mimic the brain. Eventually, the chips could be used to power Siri or Google Now-style digital assistants, control robotic limbs, or pilot self-driving cars and autonomous drones, says Qualcomm director of product management Samir Kumar.

But don’t get too excited yet. The New York Times reported this week that Qualcomm plans to release a version of the chips in the coming year, and though that’s true, we won’t see any real hardware anytime soon. “We are going to be looking for a very small selection of partners to whom we’d make our hardware architecture available,” Kumar explains. “But it will just be an emulation of the architecture, not the chips themselves.”

Qualcomm calls the chips, which were first announced back in October, Zeroth, after Isaac Asimov’s zeroth law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The Zeroth chips are based on a new architecture that radically departs from the designs that have dominated computing for the past few decades. Instead, it mimics the structure of the human brain, which consists of billions of cells called neurons that work in tandem. Kumar explains that although the human brain does its processing much more slowly than digital computers, it can complete certain types of calculations much more quickly and efficiently than a standard computer because it does many calculations at once.

Even the world’s largest supercomputers are able to use “only” one million processing cores at a time.
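To make “many calculations at once” a bit more concrete: neuromorphic designs are generally built around spiking neuron models rather than conventional instruction streams. Below is a minimal software sketch of a leaky integrate-and-fire neuron, one of the standard spiking models. It’s purely illustrative; Qualcomm hasn’t published the details of the Zeroth architecture.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron (illustrative parameters).

    input_current: injected current per time step.
    Returns the membrane potential trace and the spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The potential leaks toward rest while integrating input.
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_threshold:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset           # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# A constant drive produces a regular spike train.
trace, spikes = simulate_lif(np.full(200, 0.08))
print(f"{len(spikes)} spikes, first at steps {spikes[:3]}")
```

The efficiency argument for chips like these is that thousands of such units update in parallel in hardware, rather than being simulated one step at a time in a loop like this.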

Full Story: Wired: Computer Chips That Work Like a Brain Are Coming — Just Not Yet

See also:

Palm Pilot Inventor Wants to Open Source the Human Brain

Yorkshire Pigs Control Computer Gear With Brain Waves

Coders Can’t Put Writers Out Of A Job Yet, But We’d Better Watch Our Backs


Screenshot from Current; see Ethan Zuckerman’s post for an explanation

I wrote for TechCrunch about the way automation and machine learning algorithms may start putting writers out of jobs:

Discovering news stories is actually the business that Narrative Science wants to get into, according to Wired, and CTO Kristian Hammond believes finding more stories will actually create more jobs for journalists. I’m not so sure. It will depend on a few things, like how much more efficient writers can be made through technology and how much risk publishers will take on “unproven” story ideas vs. safe computer-generated ideas. The idea behind Current was that it could help publishers find lucrative stories to run to subsidize more substantial reporting. Of course publications will continue to run original, differentiating human-written reporting. But the amount of resources dedicated to that sort of content may change, depending on the economics of automation.

And the possibilities get weirder. Look at drone journalism. Today drones, if they are used at all, are used to extend journalists’ capabilities, not to make us more efficient or replace us. But how could drones change, say, event or travel coverage in the coming years? Will one reporter with a suitcase full of drones and a server full of AI algorithms do the work of three?

TechCrunch: Coders Can’t Put Writers Out Of A Job Yet, But We’d Better Watch Our Backs

Previously: DARPA Training Computers to Write Dossiers

DARPA Training Computers to Write Dossiers

DARPA is trying to put me out of a job:

They look a bit like communally written Wikipedia pages. But these articles—concise profiles of people and organizations, complete with lists of connected organizations, people, and events—were in fact written by computers, in a new bid by the Pentagon to build machines that can follow global news events and provide intelligence analysts with useful summaries in close to real time. [...]

On the new site, if you search for information on the Nigerian jihadist movement Boko Haram, you get this entirely computer-generated summary: “Founded by Mohammed Yusuf in 2002, Boko Haram is led by Ibrahim Abubakar Shekau. (Former leaders include Mohammed Yusuf.) It has headquarters in Maiduguri. It has been described as ‘a new radical fundamentalist sect,’ ‘the main anchor for mayhem in the state,’ ‘a fractured sect with no clear structure,’ and ‘the misguided extremist sect.’ “

Lucky for me:

The profile of Barack Obama, for example, correctly identifies him as the president of the United States, but then summarizes him this way: “Obama has been described as ‘Nobel Peace Prize winner,’ ‘the only reasonable guy in the room,’ ‘an anti-apartheid campus divestment activist,’ and ‘the most trusted politician in the CR-poll.’ ”

At another point it notes, “Obama is married to Michelle LaVaughn Robinson Obama; other family members include Henry Healy, Malia Obama, and Ann Dunham.” (Healy is a distant Obama cousin from Moneygall, Ireland. Obama’s younger daughter, Sasha, isn’t mentioned.)

The system lacks real-world knowledge that would help a human analyst recognize something as false, humorous, or plainly irrelevant.

MIT Technology Review: An Online Encyclopedia that Writes Itself
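Summaries like the Boko Haram profile read like template fills over extracted facts, and that’s roughly how systems of this kind tend to work: an extraction pipeline populates slots (founder, leader, headquarters, descriptive quotes pulled from news text), and a generator renders the slots as prose. Here’s a minimal sketch of that rendering step, using a hypothetical schema (this is not DARPA’s actual system):

```python
def render_profile(f):
    """Render an entity profile from extracted fact slots.

    The slot names here are hypothetical; a real system would fill
    them with an NLP extraction pipeline and track provenance.
    """
    parts = [f"Founded by {f['founder']} in {f['founded']}, "
             f"{f['name']} is led by {f['leader']}."]
    if f.get("headquarters"):
        parts.append(f"It has headquarters in {f['headquarters']}.")
    if f.get("descriptions"):
        quoted = ", ".join(f"'{d}'" for d in f["descriptions"])
        parts.append(f"It has been described as {quoted}.")
    return " ".join(parts)

print(render_profile({
    "name": "Boko Haram",
    "founder": "Mohammed Yusuf",
    "founded": 2002,
    "leader": "Ibrahim Abubakar Shekau",
    "headquarters": "Maiduguri",
    "descriptions": ["a new radical fundamentalist sect",
                     "the main anchor for mayhem in the state"],
}))
```

There’s no understanding anywhere in that function; the template fills in whatever the extractor found, which is exactly the weakness the Obama profile exposes.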

Yes, it’s a far cry from replacing your favorite non-fiction writers, but the possibility that this sort of thing could start to cut into the total number of paid writing and editing positions in the next few years is starting to get real.

See also: Can an Algorithm Write a Better News Story Than a Human Reporter?

Free Online Artificial Intelligence Course from Stanford

I just did a brief post at ReadWriteWeb on the free online artificial intelligence class at Stanford:

The course will be taught by Sebastian Thrun and Peter Norvig and will include online lectures by the two; according to the course website, both professors will be available for online discussions. And according to the video embedded below, students in the online class will be graded on a curve just like regular Stanford students and will receive a certificate of completion with their grade.

ReadWriteWeb: Take Stanford’s AI Course For Free Online

One of the interesting things here is that you can, for the most part, get the full education of the course. You just don’t get the course credit. But maybe students at other universities could take the class and then test out of their own school’s AI course? And what impact would it have on professors if universities accepted certificates like this for credit toward a degree at their own schools?

John Robb has speculated that an Ivy League education could be provided for $20 a month. Andrew McAfee has asked what a higher education bust would actually look like. One possibility is that thousands of professors get laid off as a smaller number of more prestigious professors can teach larger numbers of students via the Internet.

You might also be interested in this collection of free lectures from the Stanford Human Behavioral Biology course (via Dr. Benway). And of course, there’s always The Khan Academy.

Robots, Automation and the Future of Work

This is a presentation by Marshall Brain, founder of HowStuffWorks. He’s written more extensively on the subject in an essay called Robotic Nation, which I haven’t read yet.

I think Brain might be overestimating the ability of machine vision and natural language processing to supplant human intelligence, but the general trend toward fewer and fewer jobs is a real one that I’ve written about a lot lately.

(via Justin Pickard)

A Treasure Trove for Autodidacts


Trevor Blake sent me this:

References & Resources for LessWrong

LessWrong is “a community blog devoted to refining the art of human rationality.” I’ve occasionally dipped into the blog, but never made much of a habit of it. But this reference page is excellent – the section on mathematics seems particularly useful. There are sections on artificial intelligence, machine learning, game theory, computer science, philosophy and more.

And via that resource page are two other amazing resources:

Khan Academy: A massive collection of free self-paced math and science lessons.

Better Explained: a site that, y’know, explains stuff. Like calculus.

3 Best University Majors According to Microsoft


These are the areas of concentration Microsoft is most in need of right now, according to its jobs blog:

Data Mining/Machine Learning/AI/Natural Language Processing

Business Intelligence/Competitive Intelligence

Analytics/Statistics – specifically Web Analytics, A/B Testing and statistical analysis

Microsoft Careers Jobs Blog: The Top Three hottest new majors for a career in technology

No surprises there. See “The Coming Data Explosion” for more on the subject of big data.

(via Don)

Update: See also: The Big Data Explosion and the Demand for the Statistical Tools to Analyze It “If The Graduate were remade today, the advice to young Benjamin Braddock might be ‘just one word… statistics.’”
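For anyone curious what the “statistical analysis” part of that list looks like in practice, here’s a minimal sketch of the arithmetic behind a typical A/B test: a two-proportion z-test with invented numbers.

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_*: conversions per variant; n_*: visitors per variant.
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 12% vs. A's 10%.
z, p = ab_test_z(conv_a=100, n_a=1000, conv_b=120, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ~ 0.15: not significant yet
```

With these made-up numbers the difference isn’t statistically significant, which is why the job listing pairs A/B testing with statistics: knowing when you have enough data is most of the work.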

Making brains: Reverse engineering the human brain to achieve AI


An introduction to the concepts and problems with reverse engineering the human brain:

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that’s been going on in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil’s upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

Sentient Developments: Making brains: Reverse engineering the human brain to achieve AI

A Grand Unified Theory of Artificial Intelligence


Early AI researchers saw thinking as logical inference: if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly. One of AI’s first projects was the development of a mathematical language — much like a computer language — in which researchers could encode assertions like “birds can fly” and “waxwings are birds.” If the language was rigorous enough, computer algorithms would be able to comb through assertions written in it and calculate all the logically valid inferences. Once they’d developed such languages, AI researchers started using them to encode lots of commonsense assertions, which they stored in huge databases.

The problem with this approach is, roughly speaking, that not all birds can fly. And among birds that can’t fly, there’s a distinction between a robin in a cage and a robin with a broken wing, and another distinction between any kind of robin and a penguin. The mathematical languages that the early AI researchers developed were flexible enough to represent such conceptual distinctions, but writing down all the distinctions necessary for even the most rudimentary cognitive tasks proved much harder than anticipated.
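A toy version of the early symbolic approach makes the brittleness easy to see. This sketch (Python standing in for the period’s dedicated logic languages) forward-chains over stored assertions until nothing new can be derived:

```python
# Toy forward chaining over symbolic assertions.
facts = {("bird", "waxwing"), ("bird", "penguin")}
rules = [
    # "Birds can fly" -- the brittle default described above.
    (("bird", "?x"), ("can_fly", "?x")),
]

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _var), (new_pred, _) in rules:
            for fact_pred, arg in list(derived):
                if fact_pred == pred and (new_pred, arg) not in derived:
                    derived.add((new_pred, arg))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# Happily derives ("can_fly", "penguin") -- every exception
# (penguins, caged robins, broken wings) needs another rule.
```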

Embracing uncertainty

In probabilistic AI, by contrast, a computer is fed lots of examples of something — like pictures of birds — and is left to infer, on its own, what those examples have in common. This approach works fairly well with concrete concepts like “bird,” but it has trouble with more abstract concepts — for example, flight, a capacity shared by birds, helicopters, kites and superheroes. You could show a probabilistic system lots of pictures of things in flight, but even if it figured out what they all had in common, it would be very likely to misidentify clouds, or the sun, or the antennas on top of buildings as instances of flight. And even flight is a concrete concept compared to, say, “grammar,” or “motherhood.”

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.
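Church itself is a probabilistic dialect of Scheme, but without reproducing its syntax, the underlying move in the cassowary example is just Bayesian updating. Here’s a hand-rolled sketch with invented probabilities, purely to show the revision step:

```python
# Revising "cassowaries can fly" after learning they weigh ~200 lbs.
# All probabilities are made up for illustration.
p_fly = 0.9                    # prior: told it's a bird, flight is likely

p_heavy_given_fly = 0.01       # flying birds are almost never that heavy
p_heavy_given_not_fly = 0.30   # flightless birds often are

# Bayes' rule: P(fly | heavy) = P(heavy | fly) * P(fly) / P(heavy)
numerator = p_heavy_given_fly * p_fly
evidence = numerator + p_heavy_given_not_fly * (1 - p_fly)
p_fly_given_heavy = numerator / evidence

print(f"P(can fly | bird)        = {p_fly:.2f}")
print(f"P(can fly | bird, heavy) = {p_fly_given_heavy:.2f}")  # ~0.23
```

The probability drops from 0.9 to roughly 0.23: the program now concludes that cassowaries probably can’t fly, which is exactly the revision described above.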

PhysOrg: A Grand Unified Theory of Artificial Intelligence

(Thanks Josh!)

Your Computer Really Is a Part of You


The findings come from a deceptively simple study of people using a computer mouse rigged to malfunction. The resulting disruption in attention wasn’t superficial. It seemingly extended to the very roots of cognition.

“The person and the various parts of their brain and the mouse and the monitor are so tightly intertwined that they’re just one thing,” said Anthony Chemero, a cognitive scientist at Franklin & Marshall College. “The tool isn’t separate from you. It’s part of you.”

Chemero’s experiment, published March 9 in Public Library of Science, was designed to test one of Heidegger’s fundamental concepts: that people don’t notice familiar, functional tools, but instead “see through” them to a task at hand, for precisely the same reasons that one doesn’t think of one’s fingers while tying shoelaces. The tools are us.

This idea, called “ready-to-hand,” has influenced artificial intelligence and cognitive science research, but without being directly tested.

Wired Science: Your Computer Really Is a Part of You

(via Cole Tucker)
