
David Rumelhart; created perception simulations; at 68

David Rumelhart devised an algorithm that allowed computer programs to learn how to perceive.
By Benedict Carey
New York Times / March 28, 2011


NEW YORK — David E. Rumelhart, whose computer simulations of perception gave scientists some of the first testable models of neural processing and proved helpful in the development of machine learning and artificial intelligence, died March 13 in Chelsea, Mich. He was 68.

The cause was complications of Pick’s disease, an Alzheimer’s-like disorder he had for more than a decade, his son Karl said.

When Dr. Rumelhart, a psychologist, began thinking in the 1960s about how neurons process information, the field was split into two camps that had little common language: biologists, who focused on neurons and brain tissue; and cognitive psychologists, who studied more abstract processes, like reasoning skills and learning strategies.

By starting small — showing, for instance, that the brain’s ability to recognize a single letter was greatly influenced by the letters around it — Dr. Rumelhart and his colleague Jay McClelland, around 1980, built computer programs that roughly simulated perception. Later, he devised an algorithm that allowed computer programs to learn how to perceive.

Using his program, a computer could interpret underwater sonar signals about as accurately as a person could. It was an important early step in machine learning, a critical component of artificial intelligence.

Working at the University of California, San Diego, he eventually developed a simulation of how three or more layers of neurons could work together to process information — as is required for the brain to engage in any complex task, like reading. Previous models were far cruder. In a landmark 1986 paper, written with Geoffrey Hinton and Ronald Williams for the journal Nature, he described how the system worked.
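The algorithm described in that paper is now known as backpropagation: the network’s output error is passed backward through the intermediate layers of units so that every connection weight can be adjusted to reduce it. The short Python sketch below is only an illustration of that idea; the XOR task, layer sizes, and learning rate are assumptions chosen for the example, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (an assumption for this sketch): XOR, which cannot be solved
# by a network with no intermediate layer of units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three layers of units: 2 input -> 4 hidden -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for step in range(10000):
    # Forward pass: activity flows from the input units through the
    # hidden layer to the output unit.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error at the output is propagated back through
    # the hidden layer, assigning each weight its share of the blame.
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Nudge every connection weight a small step downhill on the error.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```

The same mechanics extend to networks with many more layers and units, which is why the paper proved so consequential for later work in machine learning.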

McClelland, director of the Center for Mind, Brain and Computation at Stanford, said the neural processing work “led to extremely powerful systems for doing things like visual object recognition and handwritten character classification.” In 1986, he and Dr. Rumelhart wrote a book, “Parallel Distributed Processing,” that became a central text in the field.

In the work, they argued that language, like most knowledge, relies mainly on memory and is represented in the brain by sets of associations between elements of sound and meaning.

This put them at odds with scientists who argue that the brain generates some words by using rules shaped in part by brain biology — for example, adding “-ed” to a stem to form a past tense.

“Rumelhart was enormously important in the 1980s in reviving this neural network approach to language and cognition,” said Steven Pinker, a psychologist at Harvard and a leading proponent of the rival “rules” theory.

Even though the two men sometimes disagreed, Pinker said that Dr. Rumelhart’s computer simulations “prompted me and many others to ask very fruitful questions, and that in the end is about all a good scientist can ask for.”