Op-ed: Rethinking the Polls. Whistling Past the Graveyard – The Threat of Low Response Rates in Political Polls

This is the second in a series of op-eds about polling in the election. Read the previous piece here. John E. Newhagen is an associate professor emeritus at the University of Maryland’s Philip Merrill College of Journalism. 

“Rip” Smith, played by Jimmy Stewart, solved the problem of getting respondents to take part in a telephone poll in the 1947 film Magic Town. He found the perfectly representative Midwestern city and simply asked a few folks around town how they felt about the day’s burning issues.


Jimmy Stewart plays pollster Lawrence “Rip” Smith in the 1947 film Magic Town

Stewart’s character, just back from the war, had his work cut out for him. He wanted to break into the then-new science of public opinion polling. But he was broke, and his competition, George Stringer, was well established. Then he discovered Grandview, a town perfectly representative of the country as a whole. Of course, all that fell apart when citizens found out what he was up to and became self-absorbed with their own importance.

Residents of Grandview discuss whether they would vote for a female presidential candidate in the 1947 film Magic Town

Well, in the nearly 70 years since, drawing a random sample with an acceptable confidence interval in three days’ time has gone from difficult to nearly intractable. The problem is even worse when the election is “scary” in the sense that some of the candidates, such as Donald Trump, are not bred in the traditional mainstream of the political system.

Here is what the pollster faces in drawing a good sample. The first part of the task is largely technical.

First, calls to non-human telephone exchanges – computer modems, fax machines and other IT applications – have to be culled out. Most polls don’t report a number for that process, but in a big city like New York only one or two calls out of 10 may connect to a real human.

The next step is to filter out business exchanges, which is not as easy as it might sound.

Of course, pollsters now have the additional problem of including cell phones – a development that makes many old-school sampling techniques problematic. It used to be that the first three digits of a telephone number were linked to a specific neighborhood, and a random numbers table in the back of a statistics textbook could be used to generate the last four digits. This made it easy to ensure the poll covered a geographic area evenly, because census data could be used to check that the sample was representative. But the first three digits of a cell number offer no guidance in this regard, and generating a regionally balanced sample has become difficult.
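For readers who have never seen it done, here is a minimal sketch of that old-school random digit dialing approach, using a made-up area code and exchange purely for illustration. The point is that the fixed prefix carried the geography, and only the last four digits were randomized:

```python
import random

def rdd_landline(area_code: str, prefix: str, n: int) -> list[str]:
    """Classic random digit dialing: a landline prefix mapped to a known
    neighborhood, so appending four random digits yielded numbers whose
    geography could be cross-checked against census data."""
    return [f"{area_code}-{prefix}-{random.randint(0, 9999):04d}"
            for _ in range(n)]

# Hypothetical exchange, for illustration only.
sample = rdd_landline("301", "555", 100)
print(sample[:3])
```

A cell number’s prefix carries no such geographic meaning, so the census cross-check that anchored this technique no longer works.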

Add to that the number of calls that terminate in voicemail boxes or in an outright hang-up.

It is no wonder most commercial polling companies use computer systems that dial numbers automatically and rely on voice recognition software to detect human pickups. When one of these systems hears a human voice, it connects the call to an interviewer, who asks the questions as they appear on a computer monitor.

(If you receive a call and no one answers when you pick up, it may be a computer that did not connect. Similarly, if you receive a call from a suspicious number on your caller ID, pick up but don’t say anything. If no one responds, there is likely a computer waiting to hear your voice. If you do respond and there is a delay, followed by a lot of background noise, and then someone finally greets you – possibly mispronouncing your name – you have just arrived at a telemarketer’s or pollster’s boiler room.)

So, the consulting firms hired by mainstream media outlets make a lot of mainly technical decisions to get to a living, breathing human being. It is worth noting, however, that problems can occur even at this early stage. What if a person running a small business uses the same line for personal communication? What about the calls that get dumped into voicemail boxes? What about the calls that end in a rude and abrupt hang-up? Who are those people? Do folks who do not like pollsters also hold distinctive attitudes about politics? If they do, the poll could be in trouble, because that group will be underrepresented and the sample may be biased.

Whistling Past the Graveyard

The next step in reaching a 1,000+ respondent threshold is the really scary part for pollsters. All the sophisticated computer-assisted polling technology in the world does not guarantee the human on the other end will talk. The problem is that response rates have taken a precipitous drop in recent years: people increasingly do not want to take part in polls. This is such a worrisome problem that the American Association for Public Opinion Research (AAPOR) devoted an entire issue of its prestigious journal, Public Opinion Quarterly, to the topic. The Pew Research Center reports that response rates plunged from 36 percent to 9 percent between 1997 and 2012. Back in Jimmy Stewart’s day, response rates were above 60 percent; and in a recent study, Pew reported a paltry 2 percent response rate!

Response rates from Pew telephone polls, 1997 to 2012. In the days when Magic Town was produced, rates could have been above 60 percent; today they may sink as low as 2 percent.
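Some back-of-the-envelope arithmetic (mine, not Pew’s) shows what those falling rates mean for the workload behind a standard 1,000-respondent poll, if we also assume the rough one-in-ten human-connect figure mentioned above:

```python
# Rough illustration only: combines the response rates cited above with
# the one-in-ten human-connect guess from earlier in this piece.
target_completes = 1000
human_connect_rate = 0.10  # assumed share of dials that reach a person

for year, response_rate in (("1997", 0.36), ("2012", 0.09), ("recent", 0.02)):
    contacts = target_completes / response_rate   # humans who must be reached
    dials = contacts / human_connect_rate         # raw dials to reach them
    print(f"{year}: {response_rate:.0%} response -> "
          f"~{contacts:,.0f} contacts, ~{dials:,.0f} dials")
```

At a 2 percent response rate, under these assumptions, a single 1,000-person poll sits atop roughly half a million dials.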

Pollsters are quick to cite research claiming there is no evidence of systematic bias associated with low response rates, but they are whistling past the graveyard. If non-responses are systematic, the results may be biased. AAPOR’s concern about the problem is reflected in its ethical standards, which call for publishing response rates alongside a poll’s results. The margin of error, a statistical artifact calculated from the sample’s size, does usually get published (see the earlier post on this topic). However, an examination of polls reported by realclearpolitics.com found that no one – not even the big names in the field – reports response rates. Drilling down four links into the methodology of an ABC News/Washington Post poll, for instance, yielded only an essay by the contractor citing research assuring the reader that low response rates do not threaten the poll’s validity – not a response rate number.

Assurances that a response rate of 5 percent or less is not a problem rest on the idea that the people who chose not to take part did so randomly – that is, that they are not a group who systematically hold views critical to the central theme of the poll. If so, things are fine. But if they do represent a particular point of view, things are not fine. One such study adds a worrisome cautionary note, saying it is unclear under what circumstances – or for how long – this finding will hold. It ought not be a surprise that the polling industry embraces the conclusion that everything is fine.
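To see why publishing the margin of error alone papers over the problem, note that the standard calculation uses only the sample size; nothing in it reflects who refused. A minimal sketch:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n.
    The response rate appears nowhere in this calculation."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # ~3.1% for a typical 1,000-person poll
```

A poll built from a 2 percent response rate and a poll built from a 60 percent response rate publish exactly the same “plus or minus 3 points” so long as both reached 1,000 people.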

But it defies common sense to imagine that when only two out of 100 people opt in to a poll, nothing screwy is going on in the sample. And remember, this is after perhaps another 100 telephone exchanges have been eliminated for technical reasons. Further, there are ample examples of respondent bias in polls:

  • Exit polls are notoriously inaccurate. The Washington Post reported that the results of the 2004 exit poll were the most inaccurate of any in the past five presidential elections, saying that procedural problems, compounded by the refusal of large numbers of Republican voters to take part, led to inflated estimates of support for John F. Kerry. This, of course, resulted in ABC News having one of its worst election nights ever, flipping back and forth in its projection of a Kerry victory. While exit polls do not claim to be random samples, the fact that Republicans were reluctant to take part should be worrisome for polls that do, especially when an unconventional candidate like Trump is the party’s standard bearer.
  • In 1936, a poll commissioned by the prestigious weekly magazine Literary Digest predicted that Republican Alf Landon would beat FDR by a 57-to-43 percent margin. The problem was that the magazine used telephone exchanges and automobile registration data as its sampling frame. At the height of the Great Depression, of course, Republicans were more likely than Democrats to own cars and telephones – and FDR was reelected. This example suggests Clinton could be underrepresented if low-income minority voters who support her are not contacted, or refuse to take part when they are.
  • Another notable disaster occurred when Gallup predicted that Republican Thomas Dewey would defeat Harry Truman in the 1948 election. The problem here was that in the run-up to the election, many Southern Dixiecrats said they would vote for the segregationist candidate, Strom Thurmond, in protest of the Democratic Party’s embrace of the growing Civil Rights Movement. It turned out that enough disgruntled Southerners came home to the Democratic Party on Election Day one last time to keep Truman in office – something Gallup did not pick up on. Today, with disgust for both candidates registering record highs and third-party candidates polling up to 15 percent, enough people may be shifting around in the political ecology, and opting out of polls, to throw results off.

Harry Truman celebrates his 1948 victory with a copy of the Chicago Daily Tribune incorrectly giving the election to Thomas E. Dewey. This was the only election George Gallup’s polling firm has called wrong.

Two important points stand out here that may have bearing on the current election:

First, when “scary” things happen, such as the Great Depression or the emergence of an important social upheaval like the Civil Rights Movement, the time may be ripe for unexpected shifts in the electorate, and the assumptions pollsters make based on previous elections may not hold.

Second, the people affected by those upheavals may not be willing to talk to pollsters. This takes on palpable form in the 2016 election, because people are registering unprecedented distrust of and dislike for both of the major parties’ candidates, Congress, the mainstream media and perhaps even public opinion polls.

Thus, the climate is ripe for a number of scenarios that could throw the polls off. Many Trump supporters tend to be white males with no college education. If they are distrustful of the polls and reluctant to take part in them, Trump may have latent support that survey data are not picking up. Clinton’s support among the young, Hispanics and even some middle-class white voters is reportedly soft. But come Election Day, if enough of her supporters – many of whom may not have been taking part in polls – come out to vote, it could save the day for her.

Both propositions describe cases in which voters might turn out – or not – in unexpected numbers. This goes to the response rate problem: if, at some point, key groups make unexpected course changes and are reluctant to tell pollsters about them, current results may be wrong.
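A toy simulation, with made-up numbers, shows how this kind of differential nonresponse plays out. Suppose an electorate is split 50-50, but one side answers pollsters only a third as often as the other:

```python
import random

random.seed(42)

# Purely illustrative: 50-50 electorate, but candidate A's supporters
# respond to polls at 2% while candidate B's respond at 6%.
N, RATE_A, RATE_B = 100_000, 0.02, 0.06

respondents = []
for _ in range(N):
    supports_a = random.random() < 0.5
    if random.random() < (RATE_A if supports_a else RATE_B):
        respondents.append(supports_a)

share_a = sum(respondents) / len(respondents)
print(f"True support for A: 50.0%; the poll reads: {share_a:.1%}")
# Expected poll reading: about 25% -- a bias no margin of error captures.
```

Pollsters try to correct for this by weighting respondents to match census demographics, but that only works when the trait driving the nonresponse is one they actually measure.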

The late Stanford professor Steve Chaffee, who was considered a wizard in political polling research, taught me the value of something he called the “face value” of results, which boiled down to thinking through whether or not outcomes make common sense. Well, the fact that only one or two percent of all calls yield data, along with the turbulent nature of the current election, gives me pause when pollsters tell me not to worry about response rates.
