Even as the polling industry tries to recover from real and perceived misses in U.S. and European elections in recent years, new studies have provided reassuring news for survey practitioners about the health of polling methodology.

In this Q&A, Michael Dimock, president of Pew Research Center, talks about recent developments in public opinion polling and what lies ahead.

There’s a widespread feeling that polling failed to predict the 2016 election results. Do you agree?

President Trump’s victory certainly caught many people by surprise, and I faced more than one Hillary Clinton supporter who felt personally betrayed by polling. But a big part of this question is the extent to which the expectation of a Clinton victory rested on flawed polling data – or on a flawed interpretation of that data.

Polling’s professional organization, the American Association for Public Opinion Research (AAPOR), spent the last several months looking at the raw data behind the pre-election polls in an attempt to answer this question. (Note: The leader of the AAPOR committee tasked with this inquiry is Pew Research Center’s director of survey research, Courtney Kennedy.) While it might surprise some people, the expert analysis found that national polling in 2016 was very accurate by historical standards. When the national polls were aggregated, or pulled together, they showed Clinton winning among likely voters by an average of 3.2 percentage points. She ended up winning the popular vote by 2.1 points – a relatively small error by the standards of past presidential polling, and closer to the final outcome than the national polls were in 2012.
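For readers who want to see the arithmetic behind that comparison, here is a minimal sketch. The individual poll margins below are hypothetical placeholders chosen only so that they average to the 3.2-point figure; the 2.1-point actual result is the one cited above.

```python
# Hypothetical national poll margins (Clinton lead, in percentage points),
# chosen only so that they average to the 3.2-point figure cited above.
hypothetical_poll_margins = [4.0, 3.0, 2.5, 3.5, 3.0]

aggregated_margin = sum(hypothetical_poll_margins) / len(hypothetical_poll_margins)
actual_margin = 2.1  # Clinton's final popular-vote margin, in points

print(f"Aggregated poll margin:     Clinton +{aggregated_margin:.1f}")
print(f"Actual popular-vote margin: Clinton +{actual_margin:.1f}")
print(f"Error in the aggregate:     {aggregated_margin - actual_margin:.1f} points")
```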

Of course, as we all well know, the president isn’t chosen by the national popular vote, but by the Electoral College – so it is the state poll results, rather than the nationwide surveys, that were particularly relevant for those trying to project election results. And the state polls, according to the report, “had a historically bad year.” In particular, in several key Midwestern states with a history of voting Democratic at the presidential level – including Wisconsin, Michigan and Pennsylvania – Clinton was narrowly ahead in pre-election polls, only to be edged out on Election Day. So what happened?

The AAPOR committee report points to at least two factors that were at play. First, the data suggest that a number of voters made up their minds in the final days of the campaign. And those late deciders broke for Trump by a significant margin. In the battleground of Wisconsin, for example, 14% of voters told exit pollsters that they had made up their minds only in the final week; they ultimately favored Trump by nearly two-to-one. Yet nearly all of the polling that drove expectations of a Clinton victory in Wisconsin was conducted before the final week of the campaign, missing this late swing of support.
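As a back-of-the-envelope illustration of why late deciders matter, the sketch below shows how a bloc of that size can flip a narrow lead. Only the 14% share and the roughly two-to-one break come from the figures above; the split among earlier deciders is a hypothetical chosen to mirror a small pre-election Clinton lead, and third-party votes are ignored for simplicity.

```python
# Rough illustration (not the AAPOR analysis itself) of a late swing flipping a lead.
late_share = 0.14                        # voters who decided in the final week
late_trump, late_clinton = 2 / 3, 1 / 3  # "nearly two-to-one" break toward Trump

# Hypothetical: the earlier-deciding 86% split 50.5 / 49.5 toward Clinton,
# consistent with the narrow lead the pre-election polls captured.
early_share = 1 - late_share
early_clinton, early_trump = 0.505, 0.495

clinton_total = early_share * early_clinton + late_share * late_clinton
trump_total = early_share * early_trump + late_share * late_trump

print(f"Clinton: {clinton_total:.1%}   Trump: {trump_total:.1%}")
# The late break alone turns a small Clinton edge into a Trump win.
```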

Secondly, unlike in other recent elections, there was a stark education divide in candidate support that some state-level polls missed. Pollsters have long talked about the importance of gender, religion, race and ethnicity as strong correlates of voter preference. Last year, education was also a strong correlate. A number of pre-election polls didn’t account for this by adjusting, or “weighting,” their samples to better reflect the full population; those polls ended up with too many highly educated voters, who tended to vote for Clinton, and too few less-educated voters, who tended to vote for Trump – and their estimates skewed toward Clinton as a result.
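To make the weighting idea concrete, here is a minimal sketch of adjusting a sample by education, using entirely hypothetical shares. The point is simply that re-balancing an education-skewed sample to the population’s education mix pulls the estimate toward the group the raw sample underrepresents.

```python
# Hypothetical electorate and sample composition by education.
population_share = {"college_grad": 0.40, "non_college": 0.60}
sample_share     = {"college_grad": 0.55, "non_college": 0.45}  # too many graduates

# Hypothetical candidate support within each education group of the sample.
clinton_support = {"college_grad": 0.56, "non_college": 0.44}

# Unweighted estimate reflects the sample's (skewed) education mix.
unweighted = sum(sample_share[g] * clinton_support[g] for g in sample_share)

# Each respondent is weighted by population share / sample share for their group,
# so every group counts in proportion to its share of the full population.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * clinton_support[g] for g in sample_share)

print(f"Unweighted Clinton support: {unweighted:.1%}")  # about 50.6%
print(f"Weighted Clinton support:   {weighted:.1%}")    # about 48.8%
```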

What does that all mean? Well, polls may have been an accurate representation of voter preferences at the time they were taken. But preferences can change, particularly in a fast-moving campaign. Combine that with an education gap that wasn’t apparent in other recent elections – and wasn’t reflected in some state-level surveys – and you can see why some of those state polls did a poor job of projecting the ultimate outcome. The key for survey practitioners is that both these types of errors can be addressed by known methods.

If polls can get election outcomes wrong, doesn’t that mean polling in general is unreliable?

No. There are important differences between election polling and other kinds of survey work.

Forecasting elections doesn’t just involve asking people whether they support candidate A or candidate B. It also involves trying to determine whether respondents will act on their preferences by casting a ballot at all. And that extra step of identifying “likely voters” has long been one of the most challenging things for pollsters to do. Survey respondents generally do a better job of telling you what they think than what they are going to do, especially when it comes to voting. It is this extra step – where a lot of assumptions about factors associated with turnout need to be made – that is quite distinct from the principles of random sampling and good question design that make survey research valid and reliable.
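A toy example of that extra step – not any pollster’s actual model – is sketched below: a simple screen classifies hypothetical respondents as “likely voters” based on stated intention and past turnout, and the candidate estimate shifts with that assumption.

```python
# Hypothetical respondents: a candidate preference plus two turnout clues.
respondents = [
    {"preference": "A", "says_will_vote": True,  "voted_last_time": True},
    {"preference": "B", "says_will_vote": True,  "voted_last_time": False},
    {"preference": "A", "says_will_vote": False, "voted_last_time": True},
    {"preference": "B", "says_will_vote": True,  "voted_last_time": True},
    {"preference": "A", "says_will_vote": False, "voted_last_time": False},
]

def is_likely_voter(r):
    # One possible (and debatable) screen: intends to vote AND voted last time.
    return r["says_will_vote"] and r["voted_last_time"]

def support(sample, candidate):
    return sum(r["preference"] == candidate for r in sample) / len(sample)

likely = [r for r in respondents if is_likely_voter(r)]

print(f"Candidate A among all respondents: {support(respondents, 'A'):.0%}")  # 60%
print(f"Candidate A among likely voters:   {support(likely, 'A'):.0%}")       # 50%
```

Change the screen and the estimate changes with it – which is exactly the layer of assumptions that separates election forecasting from ordinary survey measurement.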

Another wrinkle when it comes to election polling is that we’re now in an era when the aggregation of polls emphasizes their use specifically as a forecasting tool, and attaches degrees of certainty to those forecasts. This is much like the weatherman using a variety of tools to forecast the weather. But I don’t think this is the primary, or even a good, use of polling.

In fact, most survey work is not engaged in election forecasting. Instead, it’s meant to get beyond the surface and into people’s heads – to truly understand and explain their values, beliefs, priorities and concerns on the major issues of the day. These kinds of surveys are aimed at representing all citizens, including those who might not vote, write to their member of Congress or otherwise participate in the political process.

After each election, there is a tendency for the winning candidate to claim a mandate and point to the results as evidence of the public’s will. But given that so many citizens don’t vote, and that many who do vote don’t like either of the options before them, elections aren’t necessarily reflective of the will of all people. A deep, thoughtful survey can help address this disconnect by presenting the voice of the public on any number of issues.

Let’s talk about response rates. In recent decades, fewer people have been responding to telephone surveys. Does that mean poll results are becoming less accurate?

We spend a lot of time worrying about declining response rates. There is no doubt that the share of Americans who respond to randomly sampled telephone surveys is low and has fallen over time – from 36% of those called in 1997 to 9% in 2016. A low response rate does signal that poll consumers should be aware of the potential for “nonresponse bias” – that is, the possibility that those who didn’t respond may be significantly different from those who did participate in the survey.
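A stylized illustration of that potential, using entirely hypothetical opinion shares and response propensities: the same small gap in willingness to respond between people who hold a view and people who don’t produces a much larger bias at a roughly 9% response rate than at the roughly 36% rate of 1997.

```python
TRUE_SHARE = 0.55  # hypothetical share of all adults holding some view

def survey_estimate(rate_if_agree, rate_if_disagree):
    """Share of responders holding the view, given group-specific response rates."""
    responders_agree = TRUE_SHARE * rate_if_agree
    responders_disagree = (1 - TRUE_SHARE) * rate_if_disagree
    overall_rate = responders_agree + responders_disagree
    return responders_agree / overall_rate, overall_rate

# In both scenarios, people holding the view are 2 points likelier to respond.
for label, agree, disagree in [("1997-like", 0.37, 0.35), ("2016-like", 0.10, 0.08)]:
    estimate, overall = survey_estimate(agree, disagree)
    print(f"{label}: response rate {overall:.1%}, estimate {estimate:.1%} vs. true {TRUE_SHARE:.0%}")
# If both groups responded at identical rates, the estimate would match the true
# share no matter how low the overall response rate fell.
```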

But a Pew Research Center report released last month shows that survey participation rates have stabilized over the past several years and that the negative consequences of low response rates for telephone survey accuracy and reliability are limited. In particular, there’s no evidence that Democrats or Republicans are systematically more or less likely to participate in telephone surveys.

There’s also a misperception that the rise of mobile phones is a problem for survey accuracy, but it’s not. To be sure, mobile phones are a factor: It’s now estimated that roughly 95% of U.S. adults own a mobile phone, and more than half live in households that have no landline phone at all.

To meet this reality, the Center conducts the majority of interviews – 75% – via mobile phones. And we’ve actually found that doing so improves the representativeness of our surveys by making it easier to reach lower-income, younger and city-dwelling people – all of whom are more likely to be mobile-only.

Has “big data” reduced the relevance of polling, or will it in the future?

Calling a sample of 1,000 to 1,500 adults may seem quaint in the new world of big data. Why even collect survey data when so much information already exists in the digital traces we leave behind as part of our daily lives?

It’s possible that some of the more straightforward tasks polling has traditionally handled – like tracking candidate popularity, consumer confidence or even specific brand images – might eventually be taken over by algorithms crunching massive public databases. But for the foreseeable future, getting beyond the “what” to the “why” of human behavior and beliefs requires asking people questions – through surveys – to understand what they are thinking. Big data can only tell you so much.

And the existence of big data doesn’t equal access to data. While researchers could potentially learn a lot about Americans’ online, travel, financial or media consumption behaviors, much of this data is private or proprietary, as well as fragmented, and we don’t yet have the norms or structure to access it or to make datasets “talk” to other sources.

So rather than feeling threatened by big data, we see it as a huge opportunity and have made some big investments in learning more. We’re particularly interested in work that tries to marry survey research to big data analytics to improve samples, reach important subpopulations, augment survey questions with concrete behaviors and track changes in survey respondents over time. The future of polling will certainly be shaped by – but probably not replaced by – the big data revolution.

What is the future of polling at Pew Research Center?

We were founded by one of the giants of the field – Andy Kohut – and have built much of our reputation on the quality of our telephone polls. But we are also an organization that has never rested on its laurels. We spend a lot of time and money taking a hard look at our methods to make sure they remain accurate and meaningful.

I remain confident that telephone surveys still work as a methodology, and the Center will keep using them as a key part of our data collection tool kit. And I’m particularly proud that the Center is at the forefront of efforts to provide the data to test their reliability and validity, with no punches pulled.

But we aren’t stopping there. For example, we now have a probability-based online poll that accounts for about 40% of our domestic surveys. We tap into lots of government databases in the U.S. and internationally for demographic research. We’ve got a Data Labs team that is experimenting with web scraping and machine learning. We’ve used Google Trends, Gnip and Twitter, ComScore, Parse.ly, and our own custom data aggregations to ask different kinds of questions about public behaviors and communication streams than we ever could with polls alone.

All in all, this is an interesting time to be a social scientist. There are lots of big changes in American politics, global relationships, media and technology. Our methods will continue to change and evolve in response. At the end of the day, though, our obligation remains the same: to gather the public’s opinions reliably and respectfully, to analyze and assess what people tell us with the utmost care, and to share what we learn with both transparency and humility.

John Gramlich is an associate director at Pew Research Center.