
Today's Stupidly Effective Artificial Intelligence

Posted by Josh Rothman  July 12, 2012 10:41 AM


June 23rd marked the centenary of the birth of Alan Turing, the genius who laid much of the decisive groundwork for the modern computer. That's meant a lot of articles about Turing, computers, artificial intelligence, and the Turing Test. My absolute favorite appears in the newest issue of n+1. It's called "The Stupidity of Computers," it's by the gifted polymath David Auerbach, and it offers an overview of the history of artificial intelligence that is about as "magisterial" as a short essay can be. The quest for artificial intelligence that Turing inaugurated has been largely a failure, Auerbach argues -- but it's a failure that has shaped the modern world in surprisingly pervasive ways.


[Image: The best we can do.]

In the beginning, Auerbach explains, artificial intelligence research focused on language: The goal was to create a computer capable of conversation. Everyone knew that would be a challenge, of course -- but it turned out to be harder than anyone anticipated. Language is inherently ambiguous, and computers, because they must be taught how to do everything step-by-step, don't do ambiguity well. It's not just that language is full of idioms and half-expressed thoughts -- it's also that the actual relationship of words to the world is slippery. "In everyday life," he explains, "people finesse this issue. No one is too concerned with exactly how much hair a man has to lose before he is bald. If he looks bald, he is." Computers have a harder time sorting this out. From the 1990s onward, it was increasingly obvious that we wouldn't be able to give computers linguistic intelligence, and, "barring a couple of holdouts, artificial intelligence has moved on."
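
To see why step-by-step rules choke on vagueness, here's a toy sketch of my own (not from Auerbach's essay): a program that has to decide baldness must commit to an exact hair-count cutoff, and whatever number it picks, a couple of hairs will separate "bald" from "not bald" -- a line no human speaker actually draws.

```python
# Toy illustration: a rule-based program must pick a hard cutoff
# for a vague predicate like "bald". The threshold is arbitrary.
BALD_THRESHOLD = 20_000  # hypothetical hair count, chosen for illustration

def is_bald(hair_count: int) -> bool:
    # The machine needs an explicit, step-by-step rule, so it draws a hard line.
    return hair_count < BALD_THRESHOLD

print(is_bald(19_999))  # True
print(is_bald(20_001))  # False -- two hairs apart, opposite verdicts
```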

Since then, "end runs [have been] made around the problem of understanding human language." The best example is Google. It can't understand sentences on the webpages it indexes -- in a strict sense, it has no idea what the web is about. But it does notice the linkages between pages, and it can use them to figure out how the web is organized. Today's most intelligent computer systems arrive at a kind of "intelligence" by noticing how data is structured and interconnected.
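
Auerbach doesn't spell out the mechanics, and Google's actual ranking system is proprietary and vastly more elaborate, but the link-analysis idea he's gesturing at is essentially PageRank. Here's a minimal sketch over an invented four-page web:

```python
# Minimal PageRank-style sketch: rank pages purely by link structure.
# The graph, damping factor, and iteration count are illustrative choices.
links = {
    "a": ["b", "c"],  # page "a" links to pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# "c" comes out on top because the most pages point to it -- the system
# never reads a word of any page; it only counts who links to whom.
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```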

Ultimately, they're piggybacking on real human minds. We do all the work of linking the pages together; a system like Google picks up on the work we've already done and repurposes it. Such systems mine data to create an "ontology" -- "an explicit, formal definition of a conceptual framework for any number of kinds of entities, as well as any number of relationships between them." These ontologies have a lot of power: think of the way Facebook can uncover hidden commonalities between people, for example. But they're only as powerful as the data we put into them, and they can have a self-reinforcing effect:
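
Auerbach's definition sounds abstract, but in its simplest form an ontology is just typed entities plus typed relationships between them. A toy sketch (the subject-relation-value triple format is a real convention; the people and facts here are invented) shows how "hidden commonalities" fall out of structure alone:

```python
# Toy ontology as (entity, relation, value) triples -- invented data.
triples = [
    ("alice", "likes",    "jazz"),
    ("alice", "lives_in", "boston"),
    ("bob",   "likes",    "jazz"),
    ("bob",   "works_at", "acme"),
    ("carol", "lives_in", "boston"),
    ("carol", "works_at", "acme"),
]

def commonalities(a, b, triples):
    """Return the (relation, value) pairs two entities share."""
    facts = lambda entity: {(r, v) for s, r, v in triples if s == entity}
    return facts(a) & facts(b)

# The system never "understands" jazz or Boston; it only matches structure.
print(commonalities("alice", "bob", triples))    # {('likes', 'jazz')}
print(commonalities("alice", "carol", triples))  # {('lives_in', 'boston')}
```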

A glance at some of the “rhizomatic” maps of information in Manuel Lima’s recent Visual Complexity: Mapping Patterns of Information reveals the problem. In the political sphere, complicated charts analyzing networks of links from one political blog to another show clusters of linkages tightly within sets of “conservative” and “liberal” blogs. Another chart from a separate analysis shows clusters using a different taxonomy: progressive, independent, and conservative. Who decided on these categories? Humans. And who assigned individual blogs to each category? Again humans. So the humans decided on the categories and assigned the data to the individual categories -- then told the computers to confirm their judgments. Naturally the computers obliged.
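
The circularity is easy to reproduce. In this invented sketch, humans supply both the category labels and the blogs' links; the computer then "discovers" that links cluster by category -- a conclusion that was baked into its inputs:

```python
# Invented labels and links, standing in for the blog-network studies
# Auerbach describes. Humans chose the categories and the assignments.
labels = {"blog1": "conservative", "blog2": "conservative",
          "blog3": "liberal", "blog4": "liberal"}
links = [("blog1", "blog2"), ("blog2", "blog1"),
         ("blog3", "blog4"), ("blog4", "blog3"),
         ("blog3", "blog1")]

within = sum(labels[a] == labels[b] for a, b in links)
print(f"{within}/{len(links)} links stay inside a human-chosen category")
# Prints "4/5 ..." -- the computer confirms a taxonomy it never chose.
```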
 

Today's intelligent computer systems are extremely useful, obviously. But we should be wary of taking them too seriously -- and of the simplifying effect they might have on our own view of the world. Read the whole, excellent essay at n+1.

