Voting booths in Bangor, Maine, on Nov. 3. (Brianna Soukup/Portland Press Herald via Getty Images)

Taken in the aggregate, preelection polls in the United States pointed to the strong likelihood that Democrat Joe Biden would pick up several states that Hillary Clinton lost in 2016 and, in the process, win a popular and electoral vote majority over Republican President Donald Trump. That indeed came to pass. But the election was much closer than polls suggested in several battleground states (e.g., Wisconsin) and more decisive for Trump elsewhere (e.g., Ohio). Democrats also were disappointed at failing to pick up outright control of the U.S. Senate – though it remains a possibility – and at losing seats in the U.S. House and several state legislatures.

Many who follow public opinion polls are understandably asking how these outcomes could happen, especially after the fairly aggressive steps the polling community took to understand and address the problems that surfaced in 2016. We are asking ourselves the same thing. In this post, we’ll take a preliminary shot at answering that question, characterizing the nature and scope of the 2020 polling errors and suggesting some possible causes. We’ll also consider what this year’s errors might mean for issue-focused surveys, though it will be many months before the industry is able to collect all the data necessary to reach any solid conclusions.

Before talking about what went wrong, there are a couple of important caveats worth noting. First, given Democrats’ greater tendency to vote by mail this year and the fact that mail votes are counted later in many places, the size of the polling errors – especially at the national level – will likely end up smaller than it appeared on election night. Even this week, vote counting continues and estimates of polling errors have shrunk somewhat in many battleground states. It’s also important to recognize that not all states suffered a polling misfire. In many important states that Biden won (at least based on current vote totals), including Arizona, Colorado, Georgia, Minnesota, New Mexico, Nevada and Virginia, polls gave a solid read of the contest.

All that said, it’s clear that national and many state estimates were not just off, but off in the same direction: They favored the Democratic candidate. To measure by how much, we compared the actual vote margins between Republicans and Democrats – both nationally and at the state level – with the margins in a weighted average of polls from FiveThirtyEight.com. Looking across the 12 battleground states from the upper Midwest (where many polls missed the mark) to the Sun Belt and Southwest (where many were stronger), polls overestimated the Democratic advantage by an average of about 4 percentage points. When looking at national polls, the Democratic overstatement will end up being similar, about 4 points, depending on the final vote count. That means state polling errors are about the same as in 2016, while the national polling error is slightly larger, at least as of today. Even so, the national polling error of 2020 appears to be similar to the average errors for election polls over the past 12 presidential elections.
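For readers who want to replicate the arithmetic, here is the error metric in miniature: the signed difference between a poll’s margin and the certified vote margin, averaged across states. All of the figures in this sketch are hypothetical placeholders, not actual 2020 results.

```python
# Signed polling error on the margin: poll margin (Dem minus Rep)
# minus certified vote margin. Positive values mean the polls
# overstated the Democratic candidate's advantage.

def polling_error(poll_dem: float, poll_rep: float,
                  vote_dem: float, vote_rep: float) -> float:
    """Signed error on the margin, in percentage points."""
    return (poll_dem - poll_rep) - (vote_dem - vote_rep)

# Hypothetical example: polls show D+8, the certified count is D+4,
# so the polls overstated the Democratic margin by 4 points.
print(polling_error(poll_dem=52, poll_rep=44,
                    vote_dem=51, vote_rep=47))  # 4.0

# Averaging signed (not absolute) errors across states preserves the
# direction of each miss, which is what reveals a systematic bias.
state_errors = [5.1, 3.8, 2.9, 4.4]  # hypothetical signed state errors
print(sum(state_errors) / len(state_errors))  # about 4 points
```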

The fact that the polling errors were not random, and that they almost uniformly involved underestimates of Republican rather than Democratic performance, points to a systematic cause or set of causes. At this early point in the post-election period, the theories about what went wrong fall roughly into four categories, each of which has different ramifications for the polling industry.

Partisan nonresponse

The suggested problem

According to this theory, Democratic voters were more easily reachable and/or just more willing than Republican voters to respond to surveys, and routine statistical adjustments fell short in correcting for the problem. A variant of this: The overall share of Republicans in survey samples was roughly correct, but the samples underrepresented the most hard-core Trump supporters in the party. One possible corollary of this theory is that Republicans’ widespread lack of trust in institutions like the news media – which sponsors a great deal of polling – led some people to not want to participate in polls.

Is this mainly an election polling problem, or would this be of wider concern to issue pollsters as well?

Sadly, the latter. If polls are systematically underrepresenting some types of conservatives or Republicans, it has ramifications for surveys that measure all kinds of behaviors and issues, from views on the coronavirus pandemic to attitudes toward climate change. Issue polling doesn’t require the kind of 51%-49% precision of modern presidential election polling, of course, but no pollster wants a systematic skew to their data, even if it’s “only” 5 percentage points. 

What could we do to fix it?

A straightforward fix to the problem of underrepresenting Trump supporters would be to increase efforts to recruit conservatives and Republicans to polls; increase the statistical weight of those already in the survey to match their share of the population (a process known as “weighting”); or both. Many polls this year weighted on party registration, 2016 vote or self-identified partisanship, but still underestimated GOP support.

The challenge here is twofold. The first is in estimating the correct share of conservatives and Republicans in the population, since, unlike age, gender and other demographic characteristics, there are no timely, authoritative benchmarks on political orientation. Second, just getting the overall share of Republicans in the poll correct may be insufficient if those who are willing to be interviewed are bad proxies for those who are not willing (e.g., more strongly conservative) – in which case a weighting adjustment within partisan groups may be needed.
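To make the mechanics concrete, here is a minimal sketch of the weighting adjustment described above: post-stratifying a sample so each partisan group matches a population benchmark. The target shares used here are illustrative assumptions only – as just noted, no authoritative benchmark for partisanship exists, which is the core difficulty.

```python
# Post-stratification weighting on party, with assumed targets.
from collections import Counter

respondents = ["Dem", "Dem", "Dem", "Rep", "Rep", "Ind"]  # toy sample
target = {"Dem": 0.33, "Rep": 0.33, "Ind": 0.34}          # assumed benchmark

n = len(respondents)
sample_share = {g: c / n for g, c in Counter(respondents).items()}

# Each respondent's weight is the ratio of target share to sample share.
weights = [target[g] / sample_share[g] for g in respondents]
for g, w in zip(respondents, weights):
    print(g, round(w, 2))
# Dems (overrepresented at 50% of the sample) are weighted down to 0.66;
# Reps (33%) get a weight near 1.0; Inds (17%) are weighted up to ~2.04.

# The second challenge in the text: if the Republicans who respond are
# poor proxies for those who don't, matching the overall Republican
# share still leaves a bias that weighting on party alone cannot fix.
```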

‘Shy Trump’ voters

The suggested problem

According to this theory, not all poll respondents who supported Trump may have been honest about their support for him, either out of some sort of concern about being criticized for backing the president or simply a desire to mislead. Considerable research, including by Pew Research Center, has failed to turn up much evidence for this idea, but it remains plausible.

The fact that polls this year underestimated support for other, less controversial Republican candidates – sometimes by more than they underestimated support for Trump – suggests that the “shy Trump” hypothesis may not explain very much of the problem.

Is this mainly an election polling problem, or would this be of wider concern to issue pollsters as well?

This would pose a challenge for measuring attitudes about the president in any venue. But if it were limited to the current president, it would not have a lasting impact. Polls on issues that are less sensitive might be less affected.

What could we do to fix it?

In the electoral context, this is a difficult problem to fix. Pollsters have experimented with approaches to doing so, such as asking respondents how their friends and neighbors planned to vote (in addition to asking respondents how they themselves planned to vote) and then using answers to these questions to adjust their forecasts. But the efficacy of these methods is still uncertain.
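As a rough illustration of how such an adjustment might work, the sketch below blends a respondent’s own stated intention with their report about friends and neighbors. The blend weight is an arbitrary assumption chosen for illustration; as noted above, the efficacy of these methods is still uncertain.

```python
# Blend self-reported support with reported social-circle support.
# The mixing parameter alpha is an illustrative assumption, not a
# published or validated value.

def blended_estimate(own_support: float,
                     social_circle_support: float,
                     alpha: float = 0.7) -> float:
    """Weighted mix of self-reported and social-circle support shares."""
    return alpha * own_support + (1 - alpha) * social_circle_support

# Hypothetical numbers: 46% of respondents say they back the candidate,
# but they report 50% support among their friends and neighbors.
print(blended_estimate(own_support=46.0,
                       social_circle_support=50.0))  # 47.2
```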

Turnout error A: Underestimating enthusiasm for Trump

The suggested problem

Election polls, as opposed to issue polling, have an extra hurdle to clear in their attempt to be accurate: They have to predict which respondents are actually going to cast a ballot and then measure the race only among this subset of “likely voters.” Under this theory, it’s possible that the traditional “likely voter screens” that pollsters use just didn’t work as a way to measure Trump voters’ enthusiasm to turn out for their candidate. In this case, surveys may have had enough Trump voters in their samples, but not counted enough of them as likely voters.
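For readers unfamiliar with how such screens work, here is a minimal, hypothetical sketch of a cutoff-style likely voter screen: respondents earn points for self-reported engagement items, and only those above a threshold are counted as likely voters. The items and cutoff shown are illustrative assumptions, not any pollster’s actual scale.

```python
# A toy cutoff-style likely voter screen. In Python, True counts as 1
# when added, so the score is simply the number of affirmative items.

def likely_voter(resp: dict, cutoff: int = 3) -> bool:
    score = 0
    score += resp.get("intends_to_vote", False)      # plans to vote
    score += resp.get("voted_last_election", False)  # past turnout
    score += resp.get("knows_polling_place", False)  # knowledge item
    score += resp.get("high_interest", False)        # campaign interest
    return score >= cutoff

sample = [
    {"intends_to_vote": True, "voted_last_election": True,
     "knows_polling_place": True, "high_interest": False},  # counted
    {"intends_to_vote": True, "voted_last_election": False,
     "knows_polling_place": False, "high_interest": True},  # screened out
]
print([likely_voter(r) for r in sample])  # [True, False]

# The failure mode suggested above: if enthusiastic Trump voters scored
# low on items like these, they were in the sample but screened out.
```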

Is this mainly an election polling problem, or would this be of wider concern to issue pollsters as well?

If the main problem this year was a failure to anticipate the size of Republican turnout, the accuracy of issue polls would be much less affected. It would suggest that survey samples may already adequately represent Americans of all political persuasions but still struggle to properly anticipate who will actually turn out to vote, which we know is quite difficult. Fortunately, the eventual availability of state voter records matched to many election surveys will make it possible to assess the extent to which turnout differences between Trump and Biden supporters explain the errors.
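Once matched voter files are available, the validation analysis itself is straightforward. The sketch below, using purely hypothetical records and field names, compares validated turnout rates by candidate preference.

```python
# Compare validated turnout rates for supporters of each candidate,
# using hypothetical survey records matched to a state voter file.

respondents = [
    {"preference": "Biden", "validated_voted": True},
    {"preference": "Biden", "validated_voted": False},
    {"preference": "Trump", "validated_voted": True},
    {"preference": "Trump", "validated_voted": True},
]

def turnout_rate(rows: list, candidate: str) -> float:
    group = [r for r in rows if r["preference"] == candidate]
    return sum(r["validated_voted"] for r in group) / len(group)

for cand in ("Biden", "Trump"):
    print(cand, turnout_rate(respondents, cand))
# If Trump supporters turned out at a higher rate than the likely voter
# model assumed, that gap would explain part of the polling error.
```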

What could we do to fix it?

Back to the mines on reinventing likely voter scales – that is, rethinking which questions best separate actual voters from nonvoters and how the answers are combined into a turnout prediction.

Turnout error B: The pandemic effect

The suggested problem

The once-in-a-generation coronavirus pandemic dramatically altered how people intended to vote, with Democrats disproportionately concerned about the virus and using early voting (either by mail or in person) and Republicans more likely to vote in person on Election Day itself. In such an unusual year – with so many people voting early for the first time and some states changing their procedures – it’s possible that some Democrats who thought they had, or would, cast a ballot did not successfully do so. A related point is that Trump and the Republican Party conducted a more traditional get-out-the-vote effort in the campaign’s final weeks, with large rallies and door-to-door canvassing. These may have further confounded likely voter models.

Is this mainly an election polling problem, or would this be of wider concern to issue pollsters as well? 

To the extent that polls were distorted by the pandemic, the problems may be confined to this moment in time and this specific election. Issue polling would be unaffected. The pandemic may have created greater obstacles to voting for Democrats than Republicans, a possibility that polls would have a hard time assessing. These are not problems we typically confront with issue polling.

What could we do to fix it?

It’s possible that researchers could develop questions – on knowledge of the voting process, for example – that help predict whether the drop-off between intending to vote and successfully casting a ballot is higher for some voters than others, such as those whose mailed ballots may be rejected for some reason. Treating all early voters as definite voters and all Election Day voters as merely possible voters is a potential mistake that can be avoided.
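One way to avoid that mistake is to assign each vote mode its own probability that a stated intention becomes a counted ballot, rather than an all-or-nothing classification. The sketch below does this with purely illustrative probabilities.

```python
# Mode-specific turnout probabilities. All values are illustrative
# assumptions, not empirical rejection or drop-off rates.
P_COUNTED = {
    "already_voted_in_person": 1.00,
    "mailed_ballot": 0.97,    # small assumed rejection/loss rate
    "plans_election_day": 0.85,
}

def expected_votes(respondents: list) -> float:
    """Sum of per-respondent probabilities of casting a counted ballot."""
    return sum(P_COUNTED[r["mode"]] for r in respondents)

sample = [
    {"mode": "mailed_ballot"},
    {"mode": "already_voted_in_person"},
    {"mode": "plans_election_day"},
]
print(expected_votes(sample))  # 2.82 expected counted ballots
```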

Conclusion

As we begin to study the performance of 2020 election polling in more detail, it’s entirely possible that all of these factors contributed in some way – a “perfect storm” that blew the polls off course.

Pew Research Center and other polling organizations will devote a great deal of effort to understanding what happened. Indeed, we have already begun to do so. We’ll conduct a review of our own polling, as well as a broader analysis of the polls, and we’ll participate in a task force established at the beginning of this year by the American Association for Public Opinion Research (AAPOR) to review election poll performance, as was done after the 2016 election. This effort will take time. Relevant data on voter turnout will take months to compile. But make no mistake: We are committed to understanding the sources of the problem, fixing them and being transparent along the way.

Scott Keeter is a senior survey advisor at Pew Research Center.
Courtney Kennedy is Vice President of Methods and Innovation at Pew Research Center.
Claudia Deane is Executive Vice President at Pew Research Center.