The bright side of wrong

Our tendency to err is also what makes us smart. Here's what we'd gain from embracing it

(istockphoto/Globe Staff illustration)
By Kathryn Schulz
June 13, 2010


There are certain things in life that pretty much everyone can be counted on to despise. Bedbugs, say. Back pain. The RMV. Then there’s an experience we find so embarrassing, agonizing, and infuriating that it puts all of those to shame. This is, of course, the experience of being wrong.

Is there anything at once so routine and so loathed as the revelation that we were mistaken? Like the exam that’s returned to us covered in red ink, being wrong makes us cringe and slouch down in our seats. It makes our hearts sink and our dander rise.

Sometimes we hate being wrong because of the consequences. Mistakes can cost us time and money, expose us to danger or inflict harm on others, and erode the trust extended to us by our community. Yet even when we are wrong about completely trivial matters — when we mispronounce a word, mistake our neighbor Emily for our co-worker Anne, make the dinner reservation for Tuesday instead of Thursday — we often respond with embarrassment, irritation, defensiveness, denial, and blame. Deep down, it is wrongness itself that we hate.

Being wrong, we feel, signals something terrible about us. The Italian cognitive scientist Massimo Piattelli-Palmarini summed up this sentiment nicely. We err, he wrote, because of “inattention, distraction, lack of interest, poor preparation, genuine stupidity, timidity, braggadocio, emotional imbalance, ... ideological, racial, social or chauvinistic prejudices, as well as aggressive or prevaricatory instincts.” In this view — and it is the common one — our errors are evidence of our gravest social, intellectual, and moral failings.

Of all the things we’re wrong about, this view of error might well top the list. As ashamed as we may feel of our mistakes, they are not a byproduct of all that’s worst about being human. On the contrary: They’re a byproduct of all that’s best about us. We don’t get things wrong because we are uninformed and lazy and stupid and evil. We get things wrong because we get things right. The more scientists understand about cognitive functioning, the more it becomes clear that our capacity to err is utterly inextricable from what makes the human brain so swift, adaptable, and intelligent.

Misunderstanding our mistakes in this way — seeing them as evidence of flaws and an indictment of our overall worth — exacts a steep toll on us, in private and public life alike. Doing so encourages us to deny our own errors and despise ourselves for making them. It permits us to treat those we regard as wrong with condescension or cruelty. It encourages us to make business and political leaders of those who refuse to entertain the possibility that they are mistaken. And it impedes our efforts to prevent errors in domains, such as medicine and aviation, where we truly cannot afford to get things wrong.

If we hope to avoid those outcomes, we need to stop treating errors like the bedbugs of the intellect — an appalling and embarrassing nuisance we try to pretend out of existence. What’s called for is a new way of thinking about wrongness, one that recognizes that our fallibility is part and parcel of our brilliance. If we can achieve that, we will be better able to avoid our costliest mistakes, own up to those we make, and reduce the conflict in our lives by dealing more openly and generously with both other people’s errors and our own.

To change how we think about wrongness, we must start by understanding how we get things right.

Try filling in the following blank: “The giraffe had a very long ____.”

You can answer that question in a flash, and so can my 4-year-old neighbor. Yet a computer — a machine that can calculate pi out to a thousand digits while you sneeze — would be completely stymied by it. Long after you’ve moved on from the giraffe and finished the sports section and gone for a walk, the computer would still be frantically spitting out ideas to fill in that blank. Maybe the giraffe had a very long...tongue? Flight from Kenya? History of drug abuse? Paralyzed by so many potentially right answers, the computer would struggle to generate any answer at all.

Humans, by contrast, have no trouble answering this question, because we don’t care about what’s potentially right. We care about what’s probably right, based on whatever we’ve experienced in the past. That’s why 4-year-olds can guess right on this question, despite their comparatively limited experience with sentences and (one assumes) giraffes.

This guessing strategy is known as inductive reasoning, and it makes us right about vastly more (and more important) things than giraffes. You use inductive reasoning when you hear a strange noise in your house at 3 a.m. and call the cops; when your left arm throbs and you go to the emergency room; when you spot your spouse’s migraine medicine on the table and immediately turn on the coffee, turn off the TV, and hustle your tantrumming toddler out of the house. In situations like these, we don’t hang around trying to compile bulletproof evidence for our beliefs — because we don’t need to. Thanks to inductive reasoning, we are able to form nearly instantaneous beliefs and take action accordingly.

Psychologists and neuroscientists increasingly think that inductive reasoning undergirds virtually all of human cognition — the decisions you make every day, as well as how you learned almost everything you know about the world. To take just the most sweeping examples, you used inductive reasoning to learn language, organize the world into meaningful categories, and grasp the relationship between cause and effect in the physical, biological, and psychological realms.

But this intelligence comes at a cost: Our entire cognitive operating system is fundamentally, unavoidably fallible. The distinctive thing about inductive reasoning is that it generates conclusions that aren’t necessarily true. They are, instead, probabilistically true — which means they are possibly false. Because we reason inductively, we will sometimes get things wrong.

For example, consider the role of inductive reasoning in learning language. If you are a native English speaker, you figured out within the first several years of your life that you should add the suffix -ed to form a past-tense verb. This was a brilliant guess. It’s largely correct, it taught you a huge number of words in one fell swoop, and it was a lot less painful than separately memorizing the past tense of every verb in the English language. But it also meant that, sooner or later, you said things like “drinked” and “thinked” and “runned.” You got a huge number of things right, at the price of getting a certain number of things wrong.

The problem, of course, is that inductive reasoning can also lead us into errors far more costly than grammatical mistakes. Consider an elderly patient who goes to the ER after working in the garden and feeling a sharp pain in his hip. The doctor who treats him has seen this kind of thing before. She diagnoses musculoskeletal pain and sends the patient home with anti-inflammatories. Later that night, the patient is back, and in trouble: It turns out that the hip pain was caused by blood pooling out of an abdominal aortic aneurysm — a rare condition that is nearly always fatal if untreated. Was the doctor an idiot? No. She was acting in accordance with both nature (our capacity for inductive reasoning) and training. As doctors say, “If you hear hoofbeats, don’t look for zebras.” Like all of us, doctors function on the rational conviction that rare conditions are precisely that: rare.

As that story suggests, the fact that our mistakes arise from a fabulously successful cognitive system does not mean they are always harmless. It’s one thing to go from “None of those verbs had a strange ending” to “drinked,” and something else entirely to go from “None of those patients had a strange disease” to nearly killing this patient. And here we arrive at the paradox of error: If we want to prevent it, we must understand that it is an inevitable part of us, an intrinsic side effect of a fundamentally sound system. Put differently, understanding the origins of our mistakes is the only way we can learn to deal with them, as both a practical and emotional matter.

How does knowing why we get things wrong help us cope with that experience? For one thing, it means we must recognize that we can’t eliminate mistakes from our lives. For another, it means we cannot assume that those who err are indolent, idiotic, or immoral — or that error can be addressed by ferreting out these imagined bad apples. If mistakes are an inevitable byproduct of intelligence, you cannot make a more reliable pilot, a better doctor, or a safer nuclear reactor operator by demonizing and shaming those who err.

Instead, it suggests that we should work with rather than against our natural reasoning processes to try to prevent mistakes and mitigate their consequences. This is doable. In fact, it’s been done. The aviation industry has turned itself into what is arguably the safest high-stakes industry in the world by cultivating a productive obsession with error. Aviation personnel are encouraged and in some cases even required to report mistakes, because the industry recognizes that a culture of shame doesn’t discourage error. It merely discourages people from acknowledging and learning from their mistakes. Cockpits are equipped with multiple backup systems — from copilots to autopilots to automated warnings to emergency checklists — to compensate for the most probable sources of human error. And those mistakes that do occur are exhaustively investigated in an effort to prevent them in the future.

Likewise, the health care industry is starting to take seriously the proposition that medical error is a systemic problem — one that cannot be solved by blaming individual doctors or denying the scope of the problem, but only by an equally systemic solution. The health care quality movement is promoting public reporting of medical errors and has successfully pushed 35 states to pass “I’m sorry” laws, which prevent physicians’ apologies from being used against them in malpractice suits. Both innovations serve to foster a culture of openness where errors can be better understood, prevented, and resolved. Similarly, hospitals are increasingly adopting computerized monitoring systems to detect and prevent “adverse drug events” — instances where human error leads to dangerously incorrect drug combinations or dosages. And what is true for medicine and aviation is true in general. Embracing our fallibility is the only way to build effective backup systems to prevent or mitigate mistakes — whether those systems are as sophisticated as the cockpit of an airplane or (as surgeon and writer Atul Gawande has convincingly argued) as simple as a checklist in the operating room.

Moreover, as it turns out, paying attention to error pays. When the University of Michigan medical system implemented a systemwide policy of admitting medical errors, apologizing to those affected, and actively working to explain and compensate for the error, its annual legal fees dropped from $3 million to $1 million. Implementing computerized monitoring systems at every hospital in the country would not only prevent 200,000 adverse drug events each year, but save an estimated billion dollars, according to the RAND Corporation.

If it behooves companies in such material and moral ways to accept their fallibility and own up to their mistakes, surely the same goes for each of us as individuals — and for all of us as a community. Recognizing that error is an inevitable part of our lives frees us from despising ourselves — and forbids us from looking down on others — for getting things wrong. Once we recognize that we do not err out of laziness, stupidity, or evil intent, we can liberate ourselves from the impossible burden of trying to be permanently right. We can take seriously the proposition that we could be in error, without deeming ourselves idiotic or unworthy. We can respond to the mistakes (or putative mistakes) of those around us with empathy and generosity. We can demand that our business and political leaders acknowledge and redress their errors rather than ignoring or denying them. In short, a better relationship with wrongness can lead to better relationships in general — whether between family members, colleagues, neighbors, or nations.

Embracing fallibility to prevent catastrophic error, embracing fallibility to prevent conflict: These are two hugely worthy goals. But learning to do either one consistently is close to impossible as long as we insist that mistakes are made only by morons, and that an intelligent, principled, hard-working mind is the only backup we need. This is the deep meaning behind the pat cliché “to err is human.” Take away the ability of an intelligent, principled, hard-working mind to get it wrong, and you take away the whole thing.

Kathryn Schulz is the author of “Being Wrong: Adventures in the Margin of Error.” She will be reading from her book at 7 p.m. on June 18 at the Harvard Book Store in Cambridge.