Op-ed: Rethinking the Polls. Why many polling stories mislead readers and viewers – and why that is really important in a scary election.

This is the first in a series of op-eds about polling in the election. John E. Newhagen is an associate professor emeritus at the University of Maryland’s Philip Merrill College of Journalism.

Most polling stories go straight to the horserace – they focus on who is ahead and by how much. The problem is that those numbers are misleading, and may even be wrong, in a tight election if they fall within the poll’s margin of error or confidence interval (C.I.). This becomes a big issue in a scary election, such as the one we are currently witnessing.

For example, here is the way the September 11th ABC News/Washington Post poll was reported by aggregator Real Clear Politics:

Clinton has 46 percent support among likely voters in the latest ABC News/Washington Post poll, with 41 percent for Trump, 9 percent for Libertarian Gary Johnson and 2 percent for Jill Stein of the Green Party.

Clinton’s 5-point advantage is within this poll’s margin of error, but it bears up in the context of consistent results among likely voters all summer.

The first paragraph leads the reader to believe Clinton has a 5-point edge. But the second paragraph acknowledges a problem with a big C.I., implying it is okay to look at the mean values because earlier polls showed the same trend. Shame! The author should know better; there is no statistical foundation for the claim that past results, which probably suffered from the same problem, justify overlooking the C.I. dilemma here. As such, no inference about who is ahead can be drawn from this poll.

Now, it might be argued that this is academic nitpicking, and that while there is a chance the poll may be off “a little,” we “really” believe the results are pretty close to the “true values.” But in truth, there is no nitpicking here. Simply reporting the mean values from a poll with results this close and a C.I. this big (plus or minus 5 percentage points) is really risky and misrepresents the data.

The correct way to think about the mean values in survey data is to view them as anchors for the upper and lower boundaries of the C.I. A strong inference about who is ahead can only be made when the lower bound of one candidate’s C.I. is above the upper bound of the other candidate’s – and that is not the case here. Clinton’s “true” level of support could be as high as 51 percent or as low as 41 percent. Similarly, Trump’s support could be anywhere between 36 percent and 46 percent. The two C.I.s overlap by 5 points. The bar chart shows scenarios in which either Clinton or Trump could be winning by a comfortable margin, or in which the race is a dead heat. The problem is that the “true” values could lie anywhere within the C.I.’s boundaries, and the poll gives us no guidance about just where in that range they reside. Several polls reported only a day or two after this one showed Trump closing in on his opponent by 2 to 5 points, which only underlines the problem. If those results are “real,” they would wipe out the lead reported by ABC News/Washington Post and the election would be a dead heat.
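The inference rule described above – declare a leader only when one candidate’s lower bound clears the other’s upper bound – can be sketched in a few lines (a minimal illustration, not any pollster’s actual code):

```python
def ci(mean, moe):
    """Return (lower, upper) bounds for a point estimate and its margin of error."""
    return mean - moe, mean + moe

def who_leads(mean_a, mean_b, moe):
    """Declare a leader only when the two confidence intervals do not overlap."""
    low_a, high_a = ci(mean_a, moe)
    low_b, high_b = ci(mean_b, moe)
    if low_a > high_b:
        return "A"
    if low_b > high_a:
        return "B"
    return "no inference"  # the intervals overlap

# The poll above: Clinton 46, Trump 41, margin of error of about 5 points.
print(ci(46, 5))             # (41, 51)
print(ci(41, 5))             # (36, 46)
print(who_leads(46, 41, 5))  # no inference: Clinton's low (41) is below Trump's high (46)
```

With these numbers the function refuses to name a leader, which is exactly the point: the 5-point gap between the means is smaller than the overlap between the intervals.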

The fact is that in a race this tight and with a C.I. this large, polls are not sensitive enough to detect real differences between the candidates. The danger comes when a poll story leads with the mean values, because the vast majority of readers will come away thinking that Clinton has a five-point lead, regardless of any disclaimer about the C.I. that follows.  

This is worrisome because there are public opinion theories out there, such as the “spiral of silence” and the “bandwagon effect,” that suggest that the results of opinion polls can affect election outcomes.

Further, this is especially true in a scary election such as the one currently in progress. Now, it is not scary in the sense that the candidates may be frightening – although there is no dearth of pundits saying just that – but scary because patterns in the electoral “ecology” are substantially different from those in most elections. If things were “normal,” past voting patterns would be a safer guide. Support for Clinton does appear to come from a traditional Democratic liberal base that is predictable; but Trump – if nothing else – is not a traditional Republican, and using past elections to predict who will actually vote for him could be a scary exercise. Nowhere is this truer than in how pollsters define a “likely voter” when filtering data, as the ABC News/Washington Post example here does. If assumptions about turnout are off by just a percentage point or two, polls like this are bound for the dumpster. This will be the topic of the next essay in this series.
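The sensitivity to turnout assumptions can be made concrete with a toy example. The blocs and numbers below are invented for illustration – they are not from the ABC News/Washington Post poll – but they show how a two-point shift in the assumed turnout mix moves the topline margin:

```python
def topline_margin(groups):
    """Weighted topline margin from (turnout_share, candidate_margin_in_points) blocs."""
    return sum(share * margin for share, margin in groups)

# Hypothetical electorate: two equally sized blocs, each favoring
# a different candidate by 20 points.
print(round(topline_margin([(0.50, +20), (0.50, -20)]), 1))  # 0.0: a dead heat

# Nudge the assumed turnout mix by two points and the topline moves.
print(round(topline_margin([(0.52, +20), (0.48, -20)]), 1))  # 0.8
```

The more polarized the blocs, the larger the swing from the same small turnout error – which is why the likely-voter screen matters so much in an unusual election.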

Pollsters know about this problem, but they are in a bind when it comes to solving it. The only variable they really control that can reduce the C.I. is sample size, and the relationship between sample size and the C.I. is not linear. To achieve a C.I. (say, 2 percent) that would allow the kind of inference pollsters really want here would have required a sample nearly four times as large as the one they used, or about 2,400. That would have stretched data gathering from three days to a full week. If something scary happens halfway through the process, such as one candidate buckling while getting into a car due to pneumonia, the poll is in real trouble. A bigger sample also blows the budget, which is a real factor in the news business these days.
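The non-linear relationship is just the standard 95 percent margin-of-error formula for a proportion, MOE = 1.96 × √(p(1−p)/n), at the worst case p = 0.5. A short sketch (the 600-person sample is an illustrative assumption consistent with the “nearly four times as large” figure above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size_for(moe, p=0.5, z=1.96):
    """Smallest sample size that achieves the given margin of error."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

# An illustrative ~600-person sample gives roughly a 4-point margin of error...
print(round(100 * margin_of_error(600), 1))  # 4.0

# ...while a 2-point margin requires roughly four times as many respondents.
print(sample_size_for(0.02))  # 2401
```

Because the margin shrinks with the square root of n, halving the C.I. quadruples the required sample – which is where the time and budget problems come from.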

A further cautionary note should be added here about Internet studies, where the samples may be huge. Polls done by SurveyMonkey are among the main offenders. The small C.I.s associated with these polls bother many experts for both statistical and methodological reasons, which is yet another topic for future consideration.

So what are the alternatives for writing a story that is both technically correct and comprehensible to readers and viewers?

  • The current practice of making the difference between mean values prominent when the C.I.s overlap is misleading and should be avoided.

  • Describing the data in fine technical detail will only result in a story no one but experts can understand, which is not an option.

  • Describing a poll with a C.I. problem as a “dead heat” or “too close to call” is misleading as well. In the example given here, Clinton could enjoy a 15-point lead, while it is just as possible that Trump could be ahead by 5 points. So this ought not be an option either.
  • How about a more honest reflection of the sensitivity of the poll, written in plain English? For example:

“Results from a recent poll show that Clinton may lead by as much as 15 percent, but it is just as possible that Trump could be ahead by 5 percent.”

While accurate and fairly comprehensible, this is a lead that will never see the light of day in contemporary journalism.

So, in the absence of a practical alternative, the real conclusion may be that the use of telephone horserace polls in journalism is obsolete – a fact that many people in the industry know but are simply not willing to admit.

But there is hope! One alternative worth examining is the use of a novel methodology to generate what the New York Times calls “the odds of winning,” which will, alas, be the topic for a future post.


[Chart caption] As long as the top of the C.I. for Trump (45 percent) is above the bottom of the C.I. for Clinton (42 percent) – the “Red Zone” – no inference can be drawn from this poll about who is ahead. Trump could be ahead by 3 percent.
