What the 2016 Presidential Election taught us about polling, predictions


Nov 15, 2016 | Polls

It’s been nearly two weeks since Donald Trump won the 2016 U.S. Presidential Election, becoming the country’s 45th president-elect. Trump claimed victory with a majority of the electoral votes (290 to Hillary Clinton’s 232), while Clinton won the popular vote (61,318,162 votes to Trump’s 60,541,308).

As election night wore on and votes were counted on November 8, it became clear the tallies would not match earlier predictions (from the New York Times Upshot and FiveThirtyEight) of a win for Clinton. What began as a relatively quiet night for those sporting red Make America Great Again caps at a Trump party in Midtown Manhattan would turn into a boisterous celebration. Democrats, on the other hand, grew increasingly somber as the hours passed and they waited for their leading lady to address them at Manhattan’s glass-ceilinged Jacob Javits Center. She would never take the stage. Half an hour after the chair of Clinton’s campaign told everyone to “head home and get some sleep,” Clinton conceded the election to Trump by phone.

We’ve written on this site before about the pitfalls in polling, but how could the polls that night – and just days before – have been so off? It’s a question that’s left some scratching their heads.

What are the lessons we should learn about data journalism and polling from the mismatch between prediction and outcome in the 2016 Presidential Election?

We put this question to the statistics and math experts of the STATS.org advisory board. Here’s what they said:

Rebecca Goldin, Ph.D., Director of STATS.org: Pollsters always think about who might not be answering the phone: working women, for example, may be less likely to answer than women who stay home. Unenthusiastic voters may be less interested in talking to pollsters about their candidate than enthusiastic ones. In the 2016 Presidential Election, a large population of people who supported Donald Trump felt some distrust toward the media and toward pollsters; this distrust may well have been stoked by Mr. Trump, who consistently said that the polls were wrong—even calling one poll “dirty” back in June. Perhaps some Trump supporters were reluctant to participate in polls, despite their enthusiasm for their candidate. The models predicting the outcome used various methods to predict who would turn out; they simply may not have had the data to predict a strong turnout by people who weren’t, generally speaking, talking to pollsters.
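A toy simulation (all numbers invented, not drawn from any real poll) makes the mechanism concrete: if one candidate’s supporters are even modestly less likely to pick up the phone, the raw poll estimate drifts well away from true support.

```python
import random

random.seed(0)

# Hypothetical electorate: an even 50/50 split between candidates A and B.
TRUE_SUPPORT_A = 0.50

# Invented response rates: supporters of candidate A are assumed to be
# half as likely to answer the pollster as supporters of candidate B.
RESPONSE_RATE = {"A": 0.05, "B": 0.10}

def run_poll(n_dialed=100_000):
    """Dial voters at random; keep only those who agree to respond."""
    responses = []
    for _ in range(n_dialed):
        choice = "A" if random.random() < TRUE_SUPPORT_A else "B"
        if random.random() < RESPONSE_RATE[choice]:
            responses.append(choice)
    return responses.count("A") / len(responses)

print(f"True support for A:  {TRUE_SUPPORT_A:.1%}")
print(f"Raw poll estimate:   {run_poll():.1%}")  # roughly 33%, not 50%
```

Here a dead-even race reads as roughly 33 percent support in the raw responses. Pollsters weight their samples to correct for known nonresponse patterns, but that only works for the patterns they know to look for.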


Karla Ballman, Ph.D., Cornell Medical, STATS.org Adviser: The major lesson about data journalism and polling to be taken from the mismatch between prediction and outcome in the 2016 presidential election is the necessity of evaluating how representative the poll sample is of the population of people who will actually vote. This is difficult because the first step is determining the population of likely voters, and only then can one obtain a representative sample of those individuals. It may be that pollsters assumed the population of voters in this election would look like the previous election’s, and that assumption did not hold. If the polls obtained a representative sample of those who voted in the last election, but this was not the voter population for this election, the results would be biased. It could also be that pollsters have become complacent in selecting samples (opting for subjects who are more accessible), which would also introduce bias. The lesson is to understand how the polling sample was selected and then to evaluate critically whether it is likely to be representative of those who will vote—not an easy task.
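A back-of-the-envelope sketch (all figures invented for illustration) shows how even a perfectly measured sample, weighted to last election’s turnout mix, produces a biased estimate when the electorate changes:

```python
# Support for candidate A within two hypothetical voter groups
# (assumed to be measured exactly, to isolate the turnout-model error).
support_a = {"college": 0.60, "non_college": 0.40}

# Turnout mix the pollster assumes, e.g. carried over from the last election.
assumed_mix = {"college": 0.50, "non_college": 0.50}

# Turnout mix that actually shows up on election day.
actual_mix = {"college": 0.40, "non_college": 0.60}

def weighted_estimate(mix):
    return sum(mix[group] * support_a[group] for group in mix)

print(f"Poll estimate (old turnout model): {weighted_estimate(assumed_mix):.1%}")  # 50.0%
print(f"Actual result (new electorate):    {weighted_estimate(actual_mix):.1%}")  # 48.0%
```

Even with candidate preferences measured without error, a ten-point shift in the turnout mix moves the estimate by two points, enough to flip a close state.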


Giles Hooker, Ph.D., Cornell University, STATS.org Adviser: If there is one lesson for data journalism from the polling and prediction errors in this election it’s “pay attention to uncertainty.” This means all of: how you model variability in your data; how this feeds into uncertainty about your results; how you validate those models and how you interpret your results. FiveThirtyEight went into the election giving Trump about a 1/3 chance of winning, meaning we’d expect to see this sort of upset about 1 time in 3—it’s not all that surprising. Communicating this clearly has to be a goal of data journalism.

Nate Silver has considerably less egg on his landing pages than others, though, and that comes down to models for uncertainty and how they feed through to predictions. One of the keys is correlation: different polls tend to be off from the truth in the same way. This means that adding one poll result to another isn’t the same as having twice as much data. Polls also have systematic biases which their confidence intervals don’t capture, but which you can see by looking at differences between polling organizations. In outcomes, nearby states are correlated—if Trump beats his polls in Michigan, he’s likely to do the same in Wisconsin. That means he had much more of a chance of winning overall than if each state moved independently. This makes sense intuitively, and you can see it in the historical data; it also drastically changes your certainty about the outcome (in this case, from FiveThirtyEight’s 33 percent for Trump down to 0.1 percent in some naive models).
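A rough simulation (stylized numbers, not fitted to any 2016 polling) illustrates the point: letting a single shared error move every state together gives the trailing candidate a far better chance of winning a majority of states than treating the states as independent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized setup: the favorite leads by 2 points in each of 10 swing states,
# with a 3-point standard deviation of polling error per state.
n_states, n_sims = 10, 100_000
poll_margin = 0.02
state_sd = 0.03

def underdog_win_prob(shared_sd):
    """P(the underdog carries a majority of states) when part of the polling
    error is shared across all states (shared_sd = 0 means every state's
    error is independent)."""
    shared = rng.normal(0.0, shared_sd, size=(n_sims, 1))       # national error
    local = rng.normal(0.0, state_sd, size=(n_sims, n_states))  # state-level error
    favorite_margin = poll_margin + shared + local
    underdog_states = (favorite_margin < 0).sum(axis=1)
    return (underdog_states > n_states / 2).mean()

print(f"Independent state errors: {underdog_win_prob(0.00):.3f}")
print(f"Correlated state errors:  {underdog_win_prob(0.02):.3f}")
```

With independent errors the underdog almost never carries a majority of these hypothetical states; allowing a modest shared swing multiplies that probability several times over, which is the kind of correlation Hooker describes.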

There is another layer here: individual poll results are based on a whole suite of models and assumptions. One particular failure this election appears to have been likely-voter models. Those are almost impossible to verify, but you can again look at the historical record for how well polls have projected the final result—they are often off by more than the intervals they give.

What does this mean for a data journalist? You need to think about variability and how that translates to outcomes. You need to ask whether standard packages use assumptions that are realistic; this means translating mathematics into real-world meaning and asking if it’s reasonable. Where possible, try to find data to validate how far off you might expect to be. Then add some more uncertainty for everything we can’t measure and – with much more difficulty – find ways to communicate that we really don’t know as much as we think.
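As a minimal sketch of that last step, assume (purely for illustration) that historical comparisons of final polls to actual results suggest an extra systematic error of around two points on top of sampling error; folding that into the reported margin of error might look like this:

```python
import math

# A poll's reported margin of error reflects random sampling error only.
n = 1000   # respondents
p = 0.48   # reported support for a candidate
sampling_moe = 1.96 * math.sqrt(p * (1 - p) / n)

# Illustrative assumption: historical poll-versus-result comparisons suggest
# an additional systematic error of about 2 points (a made-up figure here;
# in practice, estimate it from real historical polling data).
systematic_sd = 0.02
total_moe = 1.96 * math.sqrt(p * (1 - p) / n + systematic_sd ** 2)

print(f"Reported margin of error:        +/- {sampling_moe:.1%}")  # about 3.1%
print(f"With historical error included:  +/- {total_moe:.1%}")     # about 5.0%
```

Combining the two error sources in quadrature widens the interval from about three points to about five, which is exactly the “add some more uncertainty” step described above.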


Jenna Krall, Ph.D., George Mason University, STATS.org Adviser: Predictions prior to the election generally had Hillary Clinton leading Donald Trump, though individual models assigned different probabilities to a Clinton win.  Donald Trump won the election, so are all these predictive models equally poor? The FiveThirtyEight model gave Clinton a 71.4 percent chance of winning, which means that we would expect a Trump win a little less than 1 out of 3 times. Under the FiveThirtyEight model, a Trump win is not that surprising. However, under the model presented by the Huffington Post, Clinton had a 98 percent probability of winning, so we would expect a Trump win 1 out of 50 times. Given the results of the election, the FiveThirtyEight model is more plausible than competing models that assigned a higher probability to a Clinton win. The FiveThirtyEight model included additional uncertainty to account for possible polling error, among other things. The thoughtful incorporation of uncertainty into election predictions may be one way that “good” models can be differentiated from “bad” models.
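One simple way to put that comparison in numbers (a sketch using only the two probabilities quoted above) is to score each model by the probability it assigned to the outcome that actually occurred, on the log scale:

```python
import math

# Probability each model assigned to the observed outcome (a Trump win),
# from the figures quoted above: FiveThirtyEight gave Clinton a 71.4 percent
# chance, the Huffington Post model gave her 98 percent.
models = {
    "FiveThirtyEight": 1 - 0.714,   # 28.6% chance of a Trump win
    "Huffington Post": 1 - 0.98,    # 2% chance of a Trump win
}

# The log score is the log of the probability a model placed on what
# actually happened; higher (less negative) is better.
for name, p_trump in models.items():
    print(f"{name:16s} P(Trump win) = {p_trump:.3f}  log score = {math.log(p_trump):6.2f}")
```

A single election is only one data point, so this cannot prove any model superior, but it quantifies how much less surprised the FiveThirtyEight forecast was by the result.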
