Preelection polls in 2020 correctly forecast that Joe Biden would win a solid popular vote and Electoral College majority over Donald Trump, but the size of Biden’s victory fell short of the expectations generated by much of the polling. Similarly, many Republican candidates for other offices did better than the polling indicated.

This week, the major professional association of U.S. pollsters, the American Association for Public Opinion Research (AAPOR), released a comprehensive report about the polling errors of 2020. In this Q&A, we discuss the report’s findings and their implications for polling with Josh Clinton, the chair of the AAPOR task force that produced the study. Clinton is also the Abby and Jon Winkelried chair and a professor of political science at Vanderbilt University. (Note: The questioner in this interview, Pew Research Center Senior Survey Advisor Scott Keeter, was also a member of the AAPOR task force.)

Josh, could you start out by giving us a brief overview of what you see as the most important findings of the task force?

Josh Clinton, chair of the AAPOR task force on 2020 preelection polling and political science professor at Vanderbilt University.

Despite the polls accurately pointing to a Biden victory, the polling error was the highest in 40 years for the national popular vote and the highest in at least 20 years for state-level estimates of the vote in presidential, senatorial and gubernatorial contests. And the error mostly went in the direction of overstating support for Biden and Democratic candidates relative to Trump and Republican candidates. These errors affected all types and modes of polling – whether conducted by phone or online – and all kinds of samples.

The errors did not appear to be caused by the same factors that AAPOR’s 2016 task force identified as potential reasons for polling errors in that year’s elections, such as late-deciding voters or a failure of polls to weight their data by educational level. But in at least one respect, the conclusions of the 2020 task force echoed those of the 2016 task force: We found no evidence that Trump supporters hid their support in the polls when asked who they intended to vote for.

One of the more interesting challenges of the election for pollsters last year was the shift to early and absentee voting brought on by concerns about the coronavirus pandemic. But we found that pollsters generally were able to accurately account for how people were going to vote – that is, they didn’t end up with too few Election Day voters or too many early voters.

The task force was able to rule out a lot of possible causes for last year’s polling error but wasn’t able to identify a culprit. Why was it difficult to find a definitive cause?

It seemed clear to us that at least some of the polling error in 2020 was caused by nonresponse, meaning a systematic difference between those who did and did not participate in the polls. That could have been a result of too many Democrats or too few Republicans, differences in the kinds of Republicans and Democrats who did and did not respond, the number and kinds of new voters who took part, or some combination of these factors. Without better knowledge of who isn’t taking part in polls, it’s hard to identify why we are not getting an accurate view of the electorate.

“At least some of the polling error in 2020 was caused by nonresponse, meaning a systematic difference between those who did and did not participate in the polls.”

Is it possible to study or learn about those who don’t respond to surveys?

It is, but it is complicated. You either need to intentionally try to interview those who declined to respond to surveys – getting them to answer a new survey, for example – or else you need to know the composition of the electorate so you can determine whether surveys that resembled the actual electorate were more accurate. Only recently have organizations like Pew Research Center started to describe the 2020 electorate in ways that help us get a better handle on who was over- and underrepresented in preelection surveys.

The report mentions that “public statements by Trump could have transformed survey participation into a political act whereby his strongest supporters choose not to respond to polls.” If that’s the case, isn’t it a major problem for political polling going forward, considering you can’t force people to take polls?

That is an excellent question because of the important implications it raises. The truth of the matter is that we do not know. In 2018, the preelection polls were much more accurate than they were in 2016. The fact that the 2020 preelection polls were even worse than those in 2016 – despite how well the 2018 polls performed – suggests that the problem may have been unique to the candidacy of President Trump.

It is possible that the former president appealed to a set of voters who were voting specifically for him and who may choose not to vote when he is not on the ticket. If so, it’s possible that things return to normal levels of polling error in the future. But if participation in preelection surveys is now politicized, it is a serious problem and will require work to convince these voters to participate, even if Trump is not on the ballot. The results of the 2022 elections will help us understand how serious an issue preelection polling faces going forward.

Voter turnout in 2020 was the highest in more than a century, meaning that a lot of new and irregular voters showed up. Do we have a sense of how that might have affected the accuracy of the polls?

The task force did not find evidence that errors in likely voter models – which are used to predict who in a survey will actually vote – were responsible for the errors in the polls’ results. But we do know that there were many new voters in 2020, and it is unclear whether the proportion of new voters in the polls matched the proportion of actual new voters. It is also unclear whether the new voters who responded to polls had similar opinions to those who did not respond.
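
To make the idea of a likely voter model a bit more concrete, here is a minimal, purely hypothetical Python sketch. The respondent data, screening rule and cutoffs are invented for illustration; they do not reflect any actual pollster’s model or the task force’s analysis.

```python
# A toy "likely voter" screen. All data and cutoffs below are invented
# for illustration; real likely-voter models differ across pollsters.

respondents = [
    # (stated chance of voting on a 0-10 scale, voted in the last election, preference)
    (10, True,  "Candidate A"),
    (9,  True,  "Candidate B"),
    (8,  False, "Candidate A"),
    (5,  False, "Candidate A"),
    (2,  False, "Candidate B"),
]

def is_likely_voter(stated_chance, voted_before):
    # Count someone as a likely voter if they say they are almost certain
    # to vote, or fairly likely to vote and have a record of past voting.
    return stated_chance >= 9 or (stated_chance >= 7 and voted_before)

likely = [r for r in respondents if is_likely_voter(r[0], r[1])]
share_a = sum(r[2] == "Candidate A" for r in likely) / len(likely)
print(f"Candidate A support among modeled likely voters: {share_a:.0%}")
```

The point of the sketch is simply that which respondents get counted – and therefore the horse-race number that gets reported – depends on modeling choices that consumers of a poll rarely see.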

Like AAPOR’s 2016 post-mortem study, this report references some steps that were taken to correct for possible error and yet did not prevent some significant problems. Does this suggest a kind of whack-a-mole dynamic, where constant fine-tuning does not guarantee success? Or was 2020 perhaps the perfect storm, with a pandemic, different ways of voting, high turnout, plus Trump?

“I would like to say that 2020 was pretty much a perfect storm for preelection polling.”

I would like to say that 2020 was pretty much a perfect storm for preelection polling. Between record-breaking turnout, high levels of self-reported enthusiasm among Democratic voters, a pandemic that not only changed how elections were done in many states but also forced many to stay home, and a presidential candidate who actively discouraged his supporters from trusting and taking polls, there were lots of ways for errors to occur. When you add the fact that so many elections were close and the Electoral College was ultimately decided by a margin of only around 45,000 votes, it is perhaps unsurprising that polling errors occurred. Unfortunately, we could not prove that the errors were due to these factors, and the honest answer is that we simply don’t yet know what our confidence should be in preelection polling given the changing political, social and technological conditions.

Given what we’ve learned, where does the industry go from here? What can pollsters learn from 2020?

There are a couple of clear takeaways, some of which were noted in AAPOR’s 2016 report and deserve further emphasis. First, it is very important to be transparent about how polling numbers are being produced. In many cases, it was impossible for us to know how polls were being done and how the results were being adjusted. As Pew Research Center’s own work has shown, only a small fraction of people choose to take surveys when they are asked. As a result, the way pollsters adjust their results has an extremely large impact on the overall numbers that are reported. But it is nearly impossible for consumers of polls to know how the pollster has chosen to adjust their poll and what the effects of those decisions might be. For example, is the pollster making an assumption about how many Democrats and Republicans are going to vote? If so, why is that an appropriate assumption and how much do the results change if the pollster is wrong in that assumption? Topline numbers are not enough; it is important to communicate what was done to produce those results.
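
As one way to see why those adjustment decisions matter, here is a minimal Python sketch. The candidate preferences and turnout assumptions are made-up numbers, not taken from any 2020 poll; the point is only that the same raw interviews can yield noticeably different toplines under different assumptions about the electorate.

```python
# How an assumed electorate composition drives the weighted topline.
# All shares below are invented for illustration.

# Candidate support by party group among the poll's respondents.
support_for_dem = {"Democrat": 0.95, "Republican": 0.05, "Independent": 0.50}

def weighted_topline(assumed_electorate):
    # Weight each group to its assumed share of the electorate and
    # return the Democratic candidate's overall estimated support.
    return sum(share * support_for_dem[group]
               for group, share in assumed_electorate.items())

# Two plausible-looking but different turnout assumptions:
dem_leaning = {"Democrat": 0.36, "Republican": 0.30, "Independent": 0.34}
rep_leaning = {"Democrat": 0.33, "Republican": 0.36, "Independent": 0.31}

print(f"{weighted_topline(dem_leaning):.1%}")  # roughly 52.7%
print(f"{weighted_topline(rep_leaning):.1%}")  # roughly 48.6%
```

A swing of several points here comes entirely from the turnout assumption – exactly the kind of decision that is hard for poll consumers to evaluate when it is not disclosed.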

Second, it is important to keep the precision of preelection polls in proper context, and this is not always made clear to those who consume polls. Polls typically report a margin of error alongside their results, but we found many polls whose results were reported without one. Even when a margin of error was reported, it was often not discussed or interpreted in ways that made clear that the margin of error is not the same as the amount of overall polling error. There are several other sources of potential error in preelection polling, too.

This is important because the presentation and discussion of preelection poll results can create the impression that the results are far more precise than they actually are. While experienced pollsters know how to use a margin of error to determine whether a polling margin is statistically distinguishable from a tie, this is not often made explicit to readers and consumers – likely prompting many to think that the margin of error describes the total amount of possible polling error.
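
For readers who want to see the arithmetic, here is a small Python sketch of the conventional sampling margin of error and of the roughly doubled margin that applies to the lead between two candidates. The sample size and vote shares are hypothetical, and the calculation covers sampling error only – none of the nonresponse or other errors discussed above.

```python
import math

# Hypothetical poll: 1,000 respondents, Candidate A at 51%, Candidate B at 47%.
n = 1000
p_a, p_b = 0.51, 0.47

# Conventional 95% margin of error for a single candidate's share
# (using the maximum variance, p = 0.5, as it is often reported).
moe_share = 1.96 * math.sqrt(0.25 / n)   # about +/- 3.1 points

# The uncertainty on the *lead* (A minus B) is roughly twice as large,
# because errors in the two shares tend to move in opposite directions.
moe_lead = 2 * moe_share                  # about +/- 6.2 points

lead = p_a - p_b
print(f"MOE on a share: +/-{moe_share:.1%}  Lead: {lead:.1%}  MOE on the lead: +/-{moe_lead:.1%}")
# A 4-point lead with a roughly 6-point margin of error on the lead is not
# statistically distinguishable from a tie, even before non-sampling error.
```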

This is the second presidential election in a row where polling’s accuracy has been called into question. Do you have suggestions for how we in the industry can reassure the public that polling is not broken?

Great question. I think it is important to emphasize that preelection polling is only one type of polling and that the problems faced by this particular kind of survey work are not necessarily the same as those facing other types of polling. Preelection polling is uniquely difficult: Not only do pollsters need to contact voters, but they also need to determine (or estimate) who is actually going to vote. Moreover, many of the most high-profile elections are closely contested, so a few points of error could result in mischaracterizing the race. In many respects, the large polling errors we found in 2020 seem remarkably small given the difficulty of the task that preelection pollsters faced.

“Preelection polling is uniquely difficult: Not only do pollsters need to contact voters, but they also need to determine (or estimate) who is actually going to vote.”

Politically oriented polls that are not preelection polls face fewer challenges. If a pollster is doing a poll of all adults in a state, for example, the pollster knows what the population of all adults should look like demographically. Unlike preelection polls, where the demographics of the electorate are unknown and need to be estimated by the pollster, the task is easier for general public polls about important political issues because the characteristics of the population in a state or country are known.

In addition, as an excellent Pew Research Center report recently made clear, the stakes of polling about political issues are different. A 2 percentage point error in the Democratic-Republican margin can have a huge consequence for characterizing the outcome of an election, but it matters less for characterizing public support for a policy. The difference between 51% and 49% in a two-candidate race is the difference between winning and losing; the same difference on a question about support for a particular policy is less consequential, because such polls are often most interested in whether public support is divided or united, not in the precise levels of support.

At the end of the day, it is important to place preelection results in the proper context. It is easy to confuse quantification with precision – especially nowadays, given the prominence of data and data-driven stories. But it is always important to understand what was done to collect and produce the results so poll consumers can be better informed.

Scott Keeter is a senior survey advisor at Pew Research Center.