For one small subcommunity of America, the man who benefited the most from the country’s decisions at the polls on Tuesday was not Barack Obama - it was Nate Silver, statistician and creator of the FiveThirtyEight blog on The New York Times website.
Based on current election returns, Silver correctly predicted the outcome in every state that has been decided, with only Florida still pending. Given his track record - he got 49 of 50 states right in 2008 - Silver appears to have ushered in a new level of credibility for statistical analysis in politics.
But if Silver has a crystal ball, its surface is still somewhat clouded; in any sort of forecasting, there are elements of uncertainty and margins of error, something Silver notes constantly in his writing.
Still, near-perfect results two elections in a row suggest that Silver's model is particularly powerful, especially considering the confused pundit-blather in the weeks preceding Election Day. Just how unlikely was it that Silver would go 50-for-50?
The best place to turn is Silver's own projections.
Based on state polling data, Silver projected the probability that either Obama or Romney would carry each state. In one sense, much of the work was already done for him; the majority of states were so polarized as to be no-brainers. According to Silver, 38 states had more than a 99 percent chance of going to either Obama or Romney, and 44 states were more than 90 percent likely to be won by one candidate over the other.
Essentially, Silver was faced with the task of calling five or six states in which some significant uncertainty remained.
Now, finding the probability that Silver would go a perfect 50-for-50 isn't as simple as multiplying together the individual probabilities for each state. That would assume each state's polling error is independent of every other state's, which isn't realistic, especially since the same polling firms - YouGov, PPP, etc. - factor into Silver's analysis for many different states. In fact, Silver himself committed this error in a post after the conclusion of the 2011 MLB season, when he attempted to calculate just how unlikely the events of the season's final day were.
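A small simulation makes the independence problem concrete. The numbers below are invented for illustration - two hypothetical states, each roughly 80 percent likely to go to the same candidate, with part of their polling error shared (think of a uniform national polling miss). Multiplying the two probabilities understates the chance of sweeping both:

```python
import random

random.seed(0)

# Naive estimate of winning both states, assuming independence:
naive = 0.8 * 0.8  # 0.64

# Monte Carlo with a shared polling error: both states' margins move
# together, so their outcomes are positively correlated.
trials = 100_000
both = 0
for _ in range(trials):
    shared = random.gauss(0, 2)                    # common national error, in points
    margin_a = 2.5 + shared + random.gauss(0, 2)   # state-specific noise on top
    margin_b = 2.5 + shared + random.gauss(0, 2)
    if margin_a > 0 and margin_b > 0:
        both += 1

print(naive, both / trials)  # the correlated estimate exceeds the naive product
```

With the shared error term, calling both states correctly is noticeably more likely than the naive product suggests - which is exactly why a 50-state sweep is less miraculous than multiplying 50 probabilities would imply.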
However, we can look elsewhere in Silver's analysis for a better answer. On his blog, Silver also provides a histogram representing the probabilities of President Obama winning specific numbers of electoral votes. He lists the odds of Obama winning exactly 332 electoral votes - which, assuming Florida goes to the president, would match Silver's 50-for-50 prediction - at just over 20 percent. This suggests that Silver was the beneficiary of quite a bit of luck himself; his chances of perfectly predicting every state were only about one in five, or roughly four-to-one against.
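For readers who trip over probability-versus-odds phrasing, the conversion behind that "four-to-one" is just arithmetic (using an approximate 20 percent, per the histogram):

```python
# Silver's histogram puts the exact-332-electoral-vote outcome at ~20 percent.
# Converting that probability to betting odds against the event:
p = 0.20
odds_against = (1 - p) / p  # 4.0, i.e. "four-to-one against"
print(f"{odds_against:.0f}-to-1 against")
```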
But there may be a better way to evaluate Silver's predictions than a binary right-or-wrong tally. After all, the large number of sure-thing states makes it hard to gauge how impressive his accomplishment really was. To see how precise Silver's projections were, it is more instructive to compare the exact percentages he predicted for each state with the actual results from Election Day. Below, I've listed these numbers along with the margin of error Silver estimated for each state's projection and the amount by which his projections differed from Tuesday's returns - the actual error.
Using this methodology, Silver's record looks a lot less clean. The actual election results in 16 states fell outside the margin of error Silver allotted himself in his projections, reducing his total to 34-for-50, or 68 percent. He was furthest off in Mississippi, which wasn't nearly as lopsided as he predicted, and in West Virginia, which voted more Republican than expected. Of course, Silver was still within 2 percent in 19 states, an impressive feat in itself.
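The tally above follows a simple rule: a projection "hits" when the actual margin lands inside projection ± margin of error. A sketch with invented numbers (not the real state figures) shows the bookkeeping:

```python
# Hypothetical data for illustration only: (projected margin, margin of
# error, actual margin), all in percentage points, positive = Obama.
states = [
    ("State A",  3.0, 2.5,   4.1),   # off by 1.1, inside +/-2.5 -> hit
    ("State B", -8.0, 3.0, -12.5),   # off by 4.5, outside +/-3.0 -> miss
    ("State C",  1.5, 3.5,   0.9),   # off by 0.6, inside +/-3.5 -> hit
]

hits = sum(1 for _, proj, moe, actual in states if abs(actual - proj) <= moe)
print(f"{hits}/{len(states)} within the margin of error")  # 2/3 here
```

Applied to all 50 states with the real numbers, this count is what produces the 34-for-50 (and, after the correction below in the update, 48-for-50) figures.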
The takeaway here is that, while Silver’s work the last four years has been impressive, he is not a mysterious wizard - for example, both the Huffington Post and Princeton's Sam Wang had similarly accurate results. He is also not infallible, and he would be the first to admit it.
Forecasting is never an area where we should expect 100 percent accuracy, and though Silver's work is bringing a lot of positive attention to statistical analysis in general, it's important that people keep their expectations of its applications realistic.
UPDATE: The table above actually understates the projected margin of error Silver allowed himself by a factor of two. Here is the updated table.
Silver did much better than I gave him credit for initially. Forty-eight of the 50 states actually fell within his margin of error, a success rate of 96 percent. And assuming his projected margin-of-error figures represent 95 percent confidence intervals - as they likely do - Silver performed just about as well as he would expect to over 50 trials. Wizard, indeed.
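That last claim is easy to check with a binomial model. If each interval really is a 95 percent interval and we simplify by treating the 50 states as independent trials, misses follow a Binomial(50, 0.05) distribution, so about two or three misses is par for the course:

```python
from math import comb

# Misses under 50 independent 95% intervals: Binomial(n=50, p=0.05).
n, p = 50, 0.05
expected_misses = n * p  # 2.5 expected misses

# Probability of doing at least as well as the corrected tally (2 misses):
p_le_2 = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(3))
print(expected_misses, round(p_le_2, 3))
```

Two misses sits right at the expected value, and the cumulative probability of two or fewer misses is a bit over one half - an entirely unremarkable outcome for a well-calibrated forecaster, which is the point.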
He has also authored or contributed to many books, including Sports Illustrated's 100 Fenway: A Fascinating First Century.
Now living in Marblehead, he's focusing his attention on the Boston sports scene, specifically delving into the numbers affecting the Red Sox, Patriots, Celtics, and Bruins, with the goal of informing and entertaining real fans. You can follow him on Twitter at @SabinoSports.