A Brief Post-Mortem and Path Forward for Reporting on Polls and Predictions

There was a clear narrative after the 2016 election: the polls were wrong. Poll aggregators in the media, which mathematically average election polls together, had a clear victor picked out on the morning of November 8, and it wasn’t Donald Trump. The Huffington Post gave Hillary Clinton a 98 percent chance of winning, the New York Times’ The Upshot gave Clinton an 85 percent chance of winning, and even the cautious FiveThirtyEight had Clinton at a 71 percent chance of winning.

The problem with these predictions is that they’re purely statistical, and embedded within any statistic is error. A lot of it.

One could argue that because Clinton won the popular vote, national polls, at least, performed well within their margins of error. And it is true that the roughly 2 percent margin by which Clinton captured the popular vote falls within the range predicted by national polls released in the days before the election. But as for state polls, especially in Pennsylvania, Michigan and Wisconsin, pollsters missed the mark.

But polls are allowed to be wrong. What was problematic was that the media presented the election as “already decided” heading into election night. That framing puts the whole apparatus of public opinion polling at risk. As RNC pollster Jonathan Barnett said, “The pollsters have lost a lot of credibility and won’t be believed on anything soon.” So, moving forward, how should journalists ensure that this kind of groupthink doesn’t warp our expectations again?

The first step is to build the concepts of error and doubt into our discussions about polls.

For example, if candidate A received 45 percent of the vote in a recent poll with a margin of error of +/-3 percentage points (a standard margin), we shouldn’t conceptualize the data as “possibly a bit higher or lower than 45 percent.” Assuming the polling company uses the standard 95 percent confidence level, the result really means that if the same poll were conducted many times, 95 percent of the results would fall within 3 points of candidate A’s true support, an interval running from 42 percent to 48 percent and centered on 45 percent.

If we want to dissect this further, we can cut the margin of error roughly in half (+/-1.5 percent) and get a narrower confidence interval that is still statistically sound: about 68 percent of the time, candidate A’s result will fall between 43.5 percent and 46.5 percent.
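To see the arithmetic behind both intervals, here is a minimal Python sketch using the hypothetical numbers above. It leans on the standard normal approximation behind margins of error, under which a 95 percent margin spans about 1.96 standard errors and a 68 percent margin spans about one:

```python
# Hypothetical numbers from the example above.
result = 45.0  # candidate A's share, in percent
moe_95 = 3.0   # reported margin of error at the 95 percent confidence level

# Under the usual normal approximation, a 95 percent margin of error
# spans about 1.96 standard errors.
standard_error = moe_95 / 1.96  # roughly 1.53 points
moe_68 = standard_error         # a 68 percent interval spans about 1 standard error

print(f"95% interval: {result - moe_95:.1f}% to {result + moe_95:.1f}%")
print(f"68% interval: {result - moe_68:.1f}% to {result + moe_68:.1f}%")
# -> 95% interval: 42.0% to 48.0%
# -> 68% interval: 43.5% to 46.5%
```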

In plain English, there’s roughly a 1-in-3 chance that candidate A’s result lands more than 1.5 points away from the central prediction. And in a tight race where candidate B is polling at about 43 percent, with their own +/-3 percent margin of error, the only certainty should be uncertainty. What looks like candidate A leading candidate B 45 percent to 43 percent could plausibly be candidate B leading candidate A 46 percent to 42 percent. Confusing, right?
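One way to make that uncertainty concrete is a quick simulation. The sketch below uses the hypothetical race above and assumes each poll reading is the true share plus roughly normal sampling noise; that independence assumption is a simplification, since in a real poll the two candidates’ shares move together:

```python
import random

TRIALS = 100_000
# A +/-3 point margin of error at 95 percent confidence implies roughly
# 3 / 1.96, or about 1.53, points of standard error per candidate.
SE = 3.0 / 1.96

# Hypothetical "true" standings: candidate A actually leads B, 45 to 43.
TRUE_A, TRUE_B = 45.0, 43.0

b_appears_ahead = 0
for _ in range(TRIALS):
    # Each simulated poll reads the true share plus normal sampling noise.
    polled_a = random.gauss(TRUE_A, SE)
    polled_b = random.gauss(TRUE_B, SE)
    if polled_b > polled_a:
        b_appears_ahead += 1

print(f"B appears ahead in about {b_appears_ahead / TRIALS:.0%} of simulated polls")
```

Under these assumptions, roughly one simulated poll in five shows candidate B ahead even though candidate A “truly” leads by 2 points, which is exactly the uncertainty the margin of error is trying to communicate.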

The moral of this exercise is that statistics deserve nuance.

“In statistics, nuance is honesty,” Dr. Kerric Harvey, a professor of research methods at The George Washington University’s School of Media and Public Affairs, said in an interview with MediaFile. “Things like margin of error, sample size, and methodology give the statistics depth and dimensions. Without these dimensions, you’re affecting the actual usefulness of the numbers you are reporting.”

When polls are presented in language that isn’t as cut and dried as the media usually makes it, and error is handled correctly, it becomes far easier for a voter to accept that Trump beat expectations by as little as 1 or 2 points in some battleground states.

Another change media companies should make is hybridizing their prediction models to include both quantitative and qualitative findings. This means mixing polls with on-the-ground reporting from the states those polls cover. The more granular the reporting, the better. Which candidate seems to have the energy in that state? What do the swing counties and precincts look like? What are the voter registration tallies and early voting results?

NBC, for one, has already signaled that it is prepared to make this shift. “We have to make some big changes in that area [polling],” Andy Lack, chairman of NBC News, said last month. “I think we have to get out in the country more, and live in these states more, and that will take a bigger commitment from us going in. I think we relied too much on polls, and not enough on our own good reporting inside Michigan and Pennsylvania and Ohio.”

But this type of research is easier said than done.

“Qualitative work takes time and experience,” Dr. Harvey said. “It requires sustained, long-term research in the field, and not just running up to election day.”

By reporting on polls incorrectly, journalists drain credibility and trust from the polls themselves. When the public no longer trusts public opinion polling, it becomes hard to use these important metrics to quantify politics in an understandable way. Now is the time for journalists to reevaluate their practices for predicting elections and covering polling, and to make meaningful changes. It’s always healthy, in any field of research, to assume that you can be wrong.
