Tag: machine learning

Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES

[“Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” was originally published in Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.
The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US]

[Image of an eye in a light-skinned face; the iris and pupil have been replaced with a green neutral-faced emoji; by Stu Jones via CJ Sorg on Flickr / Creative Commons]

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.


A Conversation With Klint Finley About AI and Ethics

I spoke with Klint Finley, known to this parish, over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing yesterday, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what this new group would be doing out in the world.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier this year, for WIRED’s AI Issue. It’s interesting to see how much attention this topic has drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m glad that more people are willing to have this discussion at all.

To see my comments and read the rest of the article, click through, above.

Computer Chips That Work Like a Brain Are Coming — Just Not Yet

I wrote for Wired about computer chips designed specifically for building neural networks:

Qualcomm is now preparing a line of computer chips that mimic the brain. Eventually, the chips could be used to power Siri or Google Now-style digital assistants, control robotic limbs, or pilot self-driving cars and autonomous drones, says Qualcomm director of product management Samir Kumar.

But don’t get too excited yet. The New York Times reported this week that Qualcomm plans to release a version of the chips in the coming year, and though that’s true, we won’t see any real hardware anytime soon. “We are going to be looking for a very small selection of partners to whom we’d make our hardware architecture available,” Kumar explains. “But it will just be an emulation of the architecture, not the chips themselves.”

Qualcomm calls the chips, which were first announced back in October, Zeroth, after Isaac Asimov’s zeroth law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The Zeroth chips are based on a new architecture that radically departs from the designs that have dominated computing for the past few decades. Instead of conventional digital logic, the architecture mimics the structure of the human brain, which consists of billions of cells called neurons that work in tandem. Kumar explains that although the human brain does its processing much more slowly than digital computers, it can complete certain types of calculations much more quickly and efficiently than a standard computer, because it does many calculations at once.

Even the world’s largest supercomputers are able to use “only” one million processing cores at a time.

Full Story: Wired: Computer Chips That Work Like a Brain Are Coming — Just Not Yet

See also:

Palm Pilot Inventor Wants to Open Source the Human Brain

Yorkshire Pigs Control Computer Gear With Brain Waves

If You Plug Twitter Into a Digital Avatar, Can You Live Forever?

New article from me at Wired:

In one episode of Black Mirror — the British television series that explores the near future of technology with an edginess reminiscent of The Twilight Zone — a woman’s husband dies, and she replaces him with a robot.

This walking automaton looks like him and talks like him, and it even acts like him, after plugging into his Twitter account and analyzing every tweet he ever sent.

Yes, that’s a far cry from reality, but it’s not as far as you might think. With an online service called Lifenaut, an operation called the Terasem Movement Foundation offers a means of digitally cloning yourself through a series of personality tests and data from your social media profiles. The idea is to create an online version of you that can live forever, a digital avatar that even future generations can talk to and interact with. Eventually, Terasem wants to transform these avatars into walking, talking robots — just like on Black Mirror. And today, it provides a more primitive version, for free. […]

But Dale Carrico, a lecturer in the Department of Rhetoric at the University of California at Berkeley, is skeptical. To say the least. He says that the folks at Terasem and other “transhumanists” — those who believe the human body can be radically enhanced or even transcended entirely through technology — are pursuing pipe dreams. He doesn’t even give them credit for trying. “The trying is evidence only of the depth of their misunderstanding, not of their worthy diligence,” he says. Simply put, an avatar isn’t a person — in any meaningful sense.

Full Story: Wired Enterprise: If You Plug Twitter Into a Digital Avatar, Can You Live Forever?

My avatar is embedded in the story so you can chat with it.

Twitter ‘Joke Bots’ Shame Human Sense of Humor

@horse_ebooks

My latest for Wired:

One of the funniest people on Twitter isn’t a person at all. It’s a bot called @Horse_ebooks.

It was originally built as a promotional vehicle for a series of digital books, but in the years since it has developed a life of its own — not to mention a sizable cult following. Its tweets range from the cryptic (“I Will Make Certain You Never Buy Knives Again”) to the bizarre (“No flow of bile to speak of. later. later. later. later. later. later. later. later. later. later. later. later. later. later.”).

OK, it’s no George Carlin, but it’s funnier than most of the Twitter one-liners from your friends and family — and it’s not even trying. It’s randomly grabbing text from e-books and websites.

Hackers Darius Kazemi and Joel McCoy believe there’s a larger point to be made here. For years, people have worked to build machines with a sense of humor — researchers at the University of Edinburgh recently created a program that can actually learn from its past jokes — but Kazemi and McCoy believe these academics are working too hard. Since most people aren’t that funny, the two hackers say, why not replace everyday humor with remarkably simple bots that spew boilerplate phrases over Twitter?

Full Story: Wired: Twitter ‘Joke Bots’ Shame Human Sense of Humor
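The “remarkably simple” approach Kazemi and McCoy describe, grabbing arbitrary runs of text from a corpus rather than modeling humor, can be sketched in a few lines of Python. This is a hypothetical illustration in the spirit of @Horse_ebooks, not the actual bot’s code:

```python
import random

def make_phrase_bot(corpus, min_len=3, max_len=12):
    """Build a bot that 'tweets' arbitrary word runs pulled from a corpus,
    with no model of humor at all."""
    words = corpus.split()

    def next_tweet():
        # Pick a random starting word and a random run length.
        start = random.randrange(len(words))
        length = random.randint(min_len, max_len)
        return " ".join(words[start:start + length])

    return next_tweet

# Corpus seeded with the tweets quoted above.
corpus = ("I Will Make Certain You Never Buy Knives Again "
          "No flow of bile to speak of later later later later")
bot = make_phrase_bot(corpus)
print(bot())
```

Much of the charm of accounts like @Horse_ebooks comes from exactly this lack of intent: the “jokes” are accidents of juxtaposition.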

Photo by Lindsay Eyink

Scientists Plan To Upload Bee Consciousness To Robots

A bee

George Dvorsky writes:

A new project has been announced in which scientists at the Universities of Sheffield and Sussex are hoping to create the first accurate computer simulation of a honey bee brain — and then upload it into an autonomous flying robot.

This is obviously a huge win for science — but it could also save the world. The researchers hope a robotic insect could supplement or replace the shrinking population of honey bees that pollinate essential plant life.

io9: New project aims to upload a honey bee’s brain into a flying insectobot by 2015

Previously: Can You Imagine a Future Where London Police Bees Conduct Genetic Surveillance?

Photo by Steve Jurvetson / CC

Coders Can’t Put Writers Out Of A Job Yet, But We’d Better Watch Our Backs


Screenshot from Current, see Ethan Zuckerman’s post for an explanation

I wrote for TechCrunch about the way automation and machine learning algorithms may start putting writers out of jobs:

Discovering news stories is actually the business that Narrative Science wants to get into, according to Wired, and CTO Kristian Hammond believes finding more stories will actually create more jobs for journalists. I’m not so sure. It will depend on a few things, like how much more efficient writers can be made through technology and how much risk publishers will take on “unproven” story ideas vs. safe computer-generated ideas. The idea behind Current was that it could help publishers find lucrative stories to run to subsidize more substantial reporting. Of course publications will continue to run original, differentiating human-written reporting. But the amount of resources dedicated to that sort of content may change, depending on the economics of automation.

And the possibilities get weirder. Look at drone journalism. Today drones, if they are used at all, are just used to extend journalists’ capabilities, not to make us more efficient or replace us. But how could drones change, say, event or travel coverage in coming years? Will one reporter with a suitcase full of drones and a server full of AI algorithms do the work of three?

TechCrunch: Coders Can’t Put Writers Out Of A Job Yet, But We’d Better Watch Our Backs

Previously: DARPA Training Computers to Write Dossiers

DARPA Has Seen the Future of Computing … And It’s Analog

DARPA UPSIDE analog processors

By definition, a computer is a machine that processes and stores data as ones and zeroes. But the U.S. Department of Defense wants to tear up that definition and start from scratch.

Through its Defense Advanced Research Projects Agency (Darpa), the DoD is funding a new program called UPSIDE, short for Unconventional Processing of Signals for Intelligent Data Exploitation. Basically, the program will investigate a brand-new way of doing computing without the digital processors that have come to define computing as we know it.

The aim is to build computer chips that are a whole lot more power-efficient than today’s processors — even if they make mistakes every now and then.

The way Darpa sees it, today’s computers — especially those used by mobile spy cameras in drones and helicopters that have to do a lot of image processing — are starting to hit a dead end. The problem isn’t processing. It’s power, says Daniel Hammerstrom, the Darpa program manager behind UPSIDE. And it’s been brewing for more than a decade.

Full Story: Wired Enterprise: Darpa Has Seen the Future of Computing … And It’s Analog

DARPA Training Computers to Write Dossiers

DARPA is trying to put me out of a job:

They look a bit like communally written Wikipedia pages. But these articles—concise profiles of people and organizations, complete with lists of connected organizations, people, and events—were in fact written by computers, in a new bid by the Pentagon to build machines that can follow global news events and provide intelligence analysts with useful summaries in close to real time. […]

On the new site, if you search for information on the Nigerian jihadist movement Boko Haram, you get this entirely computer-generated summary: “Founded by Mohammed Yusuf in 2002, Boko Haram is led by Ibrahim Abubakar Shekau. (Former leaders include Mohammed Yusuf.) It has headquarters in Maiduguri. It has been described as ‘a new radical fundamentalist sect,’ ‘the main anchor for mayhem in the state,’ ‘a fractured sect with no clear structure,’ and ‘the misguided extremist sect.’”

Lucky for me:

The profile of Barack Obama, for example, correctly identifies him as the president of the United States, but then summarizes him this way: “Obama has been described as ‘Nobel Peace Prize winner,’ ‘the only reasonable guy in the room,’ ‘an anti-apartheid campus divestment activist,’ and ‘the most trusted politician in the CR-poll.’ ”

At another point it notes, “Obama is married to Michelle LaVaughn Robinson Obama; other family members include Henry Healy, Malia Obama, and Ann Dunham.” (Healy is a distant Obama cousin from Moneygall, Ireland. Obama’s younger daughter, Sasha, isn’t mentioned.)

The system lacks real-world knowledge that would help a human analyst recognize something as false, humorous, or plainly irrelevant.

MIT Technology Review: An Online Encyclopedia that Writes Itself
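The profiles quoted above read like slot-filling: extracted facts (founder, leader, headquarters, descriptive quotes) poured into fixed sentence templates. A minimal sketch of that kind of generator, using hypothetical facts and template wording modeled on the Boko Haram summary:

```python
def generate_profile(facts):
    """Render a dossier-style summary from a dict of extracted facts.
    Each sentence is emitted only when its slots are present."""
    parts = []
    if all(k in facts for k in ("founder", "founded", "name", "leader")):
        parts.append(f"Founded by {facts['founder']} in {facts['founded']}, "
                     f"{facts['name']} is led by {facts['leader']}.")
    if "hq" in facts:
        parts.append(f"It has headquarters in {facts['hq']}.")
    if facts.get("descriptions"):
        quoted = ", ".join(f"'{d}'" for d in facts["descriptions"])
        parts.append(f"It has been described as {quoted}.")
    return " ".join(parts)

# Hypothetical facts, as an information-extraction pipeline might produce.
facts = {
    "name": "Example Org",
    "founder": "A. Founder",
    "founded": 2002,
    "leader": "B. Leader",
    "hq": "Example City",
    "descriptions": ["a new sect", "a fractured sect"],
}
print(generate_profile(facts))
```

The Obama examples show the weakness of this design: the templates fill correctly, but nothing in the system can judge whether a slot value is relevant or even sensible.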

Yes, it’s a far cry from replacing your favorite non-fiction writers, but the possibility that this sort of thing could start to cut into the total number of paid writing and editing positions in the next few years is starting to get real.

See also: Can an Algorithm Write a Better News Story Than a Human Reporter?

Free Online Artificial Intelligence Course from Stanford

I just did a brief post at ReadWriteWeb on the free online artificial intelligence class at Stanford:

The course will be taught by Sebastian Thrun and Peter Norvig. It will include online lectures by the two, and according to the course website both professors will be available for online discussions. According to the video embedded below, students in the online class will be graded on a curve just like regular Stanford students and will receive a certificate of completion with their grade.

ReadWriteWeb: Take Stanford’s AI Course For Free Online

One of the interesting things here is that you can, for the most part, get the full education of the course. You just don’t get the course credit. But maybe students at other universities could take the class and then test out of their own school’s AI course? What impact would it have on professors if universities accepted certificates like this as credit toward a degree at their school?

John Robb has speculated that an Ivy League education could be provided for $20 a month. Andrew McAfee has asked what a higher education bust would actually look like. One possibility is that thousands of professors get laid off as a smaller number of more prestigious professors can teach larger numbers of students via the Internet.

You might also be interested in this collection of free lectures from the Stanford Human Behavioral Biology course (via Dr. Benway). And of course, there’s always The Khan Academy.

© 2024 Technoccult
