Why Did Most Polls Miss Trump?

Why did the political polls in the US fail to predict Donald Trump's win in the 2016 presidential election? Voice of America journalist Maia Kay Kvartskhava interviewed Courtney Kennedy, Director of Survey Research at Pew Research Center.

There is a great deal of speculation but no clear answer as to why and how the polls could have gone so wrong about the state of the 2016 presidential race. How do you explain these errors?

I think a couple of things happened. At the national level I think there is something of an overreaction because Hillary Clinton won the popular vote. And most polls suggested that would happen. At the state level the polling errors were really more substantial. In a number of states the poll results were vastly different from the actual election outcome. I think that there is a real problem there.

The polling community doesn't know for sure exactly what caused these errors, but it could be what we call systematic non-response bias.

What percentage are we talking about?

I'd just be guessing at this point. One could look at the discrepancy between poll results and the actual outcome and try to make an estimate, but all the polls were different and I can't give you a figure for that. From what I can tell, the most likely explanation may be that a lot of Trump supporters were not taking the surveys, which is troubling because that is a very hard thing for us to crack. I think there are things we can do to explore and try to address that, but it is a difficult problem.

Something else that’s been discussed as a possible contributor is that Trump supporters were in fact taking surveys but might not have admitted that they were going to vote for Donald Trump.

So-called "Shy Trumpers"?

Yes. I don’t doubt it happened but I don’t see any compelling evidence to suggest that there were enough people doing this to make it a sizable enough problem to really contribute to what happened.

The third explanation, which I do think probably played an important role, is turnout. In particular, pollsters who are trying to project an outcome have to try to anticipate, to model, not only the overall level of turnout but, more importantly, turnout at the key subgroup level: among blacks, among Latinos, young, old, Republicans, Democrats. What are the turnout rates going to be within those groups? And I think it looks like some of the actual turnout rates were different from what people were expecting based on 2008, based on 2012, based on historical voting patterns.

Nearly half of eligible voters, around 46.7%, did not cast a ballot this year on November 8. Did Pew do a survey on election participation?

No, that’s not something that we looked at.

Errors were across the board, not just at Pew. Could there also be common errors in survey administration?

Yes, absolutely! I think it could be the case… One of the things that pollsters will have to tease out this year is that there is tremendous variation right now in polling and in how pollsters conduct their surveys. I think we are in a period of historically unprecedented variation in polling. You have traditional telephone polls. Within that, you have some pollsters drawing their samples from voter rolls and others drawing their samples using random digit dialing, so you have variation there. You now also have this relatively new space of Internet pollsters, and within that space you have some that recruit their sample offline, either through the mail or on the phone, and get people to respond on the Internet. You have other pollsters who do all of that work on the Internet: they recruit people through websites, advertisements, and social media, and do the interviews on the Internet. You also have mixes: pollsters doing hybrids of those two things. Some polls do some telephone work, usually on landlines, and then combine that with a sample of people interviewed on the Internet, trying to get younger, more diverse voters, and weight these two things together.
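The weighting step Kennedy describes, adjusting a combined sample so that its demographics match known population targets, can be sketched roughly as follows. This is a minimal illustrative example with made-up numbers and a single weighting variable, not Pew's actual procedure (real surveys rake over many variables at once):

```python
# Illustrative sketch of post-stratification weighting (hypothetical
# numbers; not Pew's actual method). Respondents from two sources
# (phone and online) are pooled, then weighted so the combined
# sample's age distribution matches assumed population shares.
from collections import Counter

# Hypothetical pooled sample: (source, age_group) per respondent.
# The phone sample skews old; the online sample skews young.
sample = (
    [("phone", "65+")] * 40 + [("phone", "18-29")] * 10 +
    [("online", "18-29")] * 30 + [("online", "65+")] * 20
)

# Assumed population shares for each age group (made up for the sketch)
population_share = {"18-29": 0.5, "65+": 0.5}

counts = Counter(age for _, age in sample)
n = len(sample)

# Weight = population share / sample share for the respondent's group,
# so over-represented groups count less and under-represented ones more
weights = {age: population_share[age] / (counts[age] / n) for age in counts}

for age, w in sorted(weights.items()):
    print(age, round(w, 3))  # 18-29 -> 1.25, 65+ -> 0.833
```

The point of the sketch is simply that if a group makes up 40% of the pooled sample but 50% of the population, each of its respondents is up-weighted (here to 1.25), while the over-represented group is down-weighted.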

The November election in the US is often compared to Brexit. Traditional polls also got Brexit wrong, and after the June 23 referendum people woke up in a new reality: Great Britain leaving the EU. Is this a reasonable comparison?

I think there are a lot of parallels. Going into the election, like everyone in the country, I didn't think that our general election would look like a Brexit vote, but frankly it did. If you look not just at the fact that there was a relatively systematic miss in the polling, but at why there was a miss and at which groups seem to have been underrepresented in the polls, the UK and the US look very similar.

One thing I thought would have protected us in the US is that we have, as I said, a large number of polls using different methodologies. You'd think that this variation would help. We also have a number of polls that are conducted very scientifically, using what we call a probability-based sample, where everyone in the US had a chance of being selected for the poll. That's not necessarily the case in UK polling, which tends not to be probability-based; a lot of it is online and doesn't start with a nationally representative sample. So I thought some of those features of the polling industry in the US would protect us, but at the end of the day they really didn't.

There were several ".com"-type surveys, especially during the last weeks before the election, that showed Donald Trump way ahead of Hillary Clinton. They were clearly not scientific, yet they called the race better than mainstream outlets such as the Washington Post, CNN, and NBC. Why?

I honestly don't know. That's one of the things that we have to take a look at. It is true: I think the Investor's Business Daily/TIPP poll was one of the most accurate ones, and it was conducted entirely online. And the reaction from the mainstream media was understandable: if you look at how these polls were conducted and whom they cover, from a scientific standpoint there is not a strong foundation to believe they would be nationally representative… One of the reasons people look at online polls skeptically is that we know at least 10 to 20% of the population doesn't use the Internet. But at the end of the day we really want to understand why their results were more accurate. Maybe they were able to get a more representative sample? Maybe they were better able to tap into some of the Trump support base relative to the major national media? That's just a guess. We don't know at this point, and we have to dig in and figure it out.

Are you planning to investigate what happened, at Pew or at the national level? Many Americans don't seem to believe in polls anymore.

Well, personally, I'm doing two things. I'm leading an internal inquiry at Pew Research Center to see what the magnitude of the errors in our own polling potentially was and what we can do to try to fix that. I'm also leading a committee of pollsters across the United States on behalf of our professional association, the American Association for Public Opinion Research, known as AAPOR. We are doing analysis to support a full report looking at the performance of polls in 2016, at how it compares historically, and at the different methodologies used, to see whether some are more accurate than others and to try to explain why the polls were off this year.

By Maia Kay Kvartskhava

17 November 2016 17:13