What are the odds?
After centuries of dispute, a probability theory, rooted in common sense, wins out
What’s the plural of “statistician’’? A quarrel.
It’s a minor but telling point that, having read Sharon Bertsch McGrayne’s “The Theory That Would Not Die,’’ I find jokes like this kind of funny. McGrayne’s book transforms what many would consider a history of boredom — the philosophical and historical foundations of a debate between warring factions of statisticians — into an intellectual romp touching on, among other topics, military ingenuity, the origins of modern epidemiology, and the theological foundation of modern mathematics.
The book’s central debate involves competing epistemologies — in this case, rival philosophies of how science should make its decisions. The seemingly obvious idea at the heart of it is the brainchild of Thomas Bayes, an 18th-century renegade minister and amateur mathematician. “Bayes’s rule’’ seems banal in its simplicity. “[B]y updating our initial belief about something,’’ McGrayne writes, “with objective new information, we get a new and improved belief.’’ Isn’t this how everyone negotiates the world? But the implications for scientific inquiry were staggering, and they remained contentious into the beginning of our 21st century. Why? Notice that the rule commences with belief. Starting from informed belief — hunches, guesses, intuition — Bayes, or rather his more sophisticated successors, devised formal ways of measuring whether additional information added to or detracted from the probability that one’s initial notion was true.
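McGrayne keeps the mathematics offstage, but the updating she quotes can be sketched in a few lines. A minimal illustration, with every number invented for the example: suppose we start with a weak hunch, then observe evidence that is far more likely if the hunch is true.

```python
# Bayes's rule in miniature: posterior = prior * likelihood / evidence.
# All numbers below are invented purely for illustration.

prior = 0.01                 # initial belief: P(hypothesis)
p_evidence_if_true = 0.90    # P(evidence | hypothesis)
p_evidence_if_false = 0.05   # P(evidence | not hypothesis)

# Total probability of seeing the evidence at all
evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false

# The "new and improved belief" after seeing the evidence
posterior = prior * p_evidence_if_true / evidence
print(round(posterior, 3))   # → 0.154
```

A 1 percent hunch becomes a 15 percent belief — still far from certainty, but the evidence has done measurable work, which is the whole point of the rule.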
Bayes’s theoretical, epistemological opponent is “frequentism.’’ For frequentists, belief means nothing; objectivity and validity emerge only from repeated observation of a replicable phenomenon until enough data is amassed for a meaningful sample. The theoretical structure of frequentist statistics presumes that something that has never happened in the past can never happen in the future. This distorts reality. As McGrayne shows, for frequentists, statistically speaking, airplanes couldn’t collide in mid-air. Until, of course, they do, at which point it’s a bit late to calculate the probability of error or draw up an actuarial table.
Although Bayesian computations are devilishly complex, their results are commonplace. Consider the spam filter. An e-mail that comes from email@example.com including the word Viagra in the subject line isn’t likely to be legit. “Bayesian methods,’’ McGrayne writes, “attack spam by using words and phrases in the message to determine the probability that the message is unwanted.’’ It’s a highly evolved way of using context clues, and each time a piece of spam lands in the reject folder and stays there the algorithm has, in a way, that much more reason to believe its preprogrammed hunches.
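The word-counting McGrayne describes can be sketched as a toy naive Bayes classifier. The tiny “training data’’ and word lists below are invented; real filters learn from millions of messages, but the belief-updating arithmetic is the same.

```python
# A toy version of word-based Bayesian spam filtering.
# Training data is invented purely for illustration.
from collections import Counter

spam_msgs = [["viagra", "cheap", "offer"], ["cheap", "viagra", "now"]]
ham_msgs  = [["meeting", "tomorrow", "offer"], ["lunch", "now"]]

spam_words = Counter(w for m in spam_msgs for w in m)
ham_words  = Counter(w for m in ham_msgs for w in m)

def spam_probability(message, prior_spam=0.5):
    """Update the prior belief word by word (Laplace-smoothed)."""
    p_spam, p_ham = prior_spam, 1 - prior_spam
    n_spam, n_ham = sum(spam_words.values()), sum(ham_words.values())
    vocab = len(set(spam_words) | set(ham_words))
    for w in message:
        p_spam *= (spam_words[w] + 1) / (n_spam + vocab)
        p_ham  *= (ham_words[w] + 1) / (n_ham + vocab)
    return p_spam / (p_spam + p_ham)

print(spam_probability(["viagra", "offer"]) > 0.5)   # leans spam
print(spam_probability(["meeting", "lunch"]) < 0.5)  # leans ham
```

Each word nudges the running belief up or down, and the filter’s verdict is simply the belief left standing at the end — context clues, formalized.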
Since its initial formulation, Bayes’s rule has provoked strong reactions in academic mathematics, as conservative a realm as exists. McGrayne details the battles, but the book’s most engaging chapters focus on the rule’s practical applications.
In 1966, during a mid-air refueling, a US bomber burst into flames, scattering its four hydrogen bombs over the Spanish coast. Three of the weapons were quickly located, but the fourth was lost until searchers applied Bayes’s rule. After consulting with experts on ocean currents, weather patterns, and aeronautics, the search team still didn’t have a solid lead. The leader of the Navy crew, John Craven, decided to “imagineer’’ a plan. Using all of the data at hand, Craven mapped a search area and then guessed at what he felt was the most likely location of the bomb. Each time searchers failed to locate it, the probability that the weapon was elsewhere was adjusted and the plan revised, until the bomb was finally discovered.
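The logic of that search can be sketched in a few lines. The grid cells, prior beliefs, and detection probability below are invented for illustration; the point is how a failed search redistributes belief.

```python
# A sketch of Craven-style Bayesian search: a prior over grid cells,
# updated each time a search of a cell comes up empty.
# Cells, priors, and detection probability are invented.

priors = {"A": 0.5, "B": 0.3, "C": 0.2}   # belief the bomb lies in each cell
p_detect = 0.8                            # chance of finding it if we search the right cell

def update_after_failed_search(beliefs, searched_cell, p_detect):
    """Bayes's rule: a failed search lowers the searched cell's probability."""
    new = dict(beliefs)
    # P(not found | bomb here) = 1 - p_detect; elsewhere, "not found" was certain
    new[searched_cell] *= (1 - p_detect)
    total = sum(new.values())
    return {cell: p / total for cell, p in new.items()}

beliefs = update_after_failed_search(priors, "A", p_detect)
print(max(beliefs, key=beliefs.get))   # → B
```

Failing to find the bomb in the most promising cell doesn’t end the search; it shifts belief toward the remaining cells, and the team simply searches the new favorite next.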
That Bayes’s pragmatism lends itself most successfully to singular military, business, and personal obstacles, and not to pure research, offers one of the book’s subtler lessons. Bayes’s rule doesn’t spark innovation; it solves problems. McGrayne doesn’t pursue the implications of this, but her work gestures toward that important realization. Given the ever-increasing complexity of our world, scientific expertise offers itself as a refuge against uncertainty. But in the universities, pure research doesn’t care about you — the single instance means little to such researchers. And in the world of applied science, technicians concern themselves with solving problems often of their own making, or with suggesting your next impulse purchase. The result is an impasse, a lacuna in which the world as we live it is not the world that is researched, nor the world that is sold.
These darker considerations aside, “The Theory That Would Not Die’’ is a masterfully researched tale of human struggle and accomplishment, and it renders perplexing mathematical debates digestible and vivid for even the most lay of audiences. Acknowledging ignorance is the first step toward knowledge, yes, and when we wed our ignorance to our better instincts we often find the best possible second step.
Michael Washburn, a research associate at the Center for Place, Culture, and Politics, City University of New York, can be reached at www.michaelwashburn.org.