With the 2014 election in the rearview mirror, public opinion researchers are taking stock of what was learned from new methodologies employed during the election season.

In July, The New York Times and CBS News announced they would begin including online survey panels from UK-based research firm YouGov as part of their election polling efforts.

Efforts like these are becoming more common. Other research organizations, including the Pew Research Center, are working to broaden experiments aimed at dealing with the problems confronting traditional probability-based polls, such as the growing difficulty of contacting respondents and then gaining their cooperation. Another goal is to explore ways of taking advantage of opportunities provided by new technologies.

We checked in with Scott Keeter, Pew Research Center’s director of survey research, about these experiments and asked him to explain what they mean for the future of the field of survey research.

In addition to The New York Times/CBS News/YouGov effort, what other interesting methodological approaches did you see during the 2014 election season?

The New York Times/CBS News/YouGov collaboration was the most visible, but other organizations were quietly testing similar approaches. Another trend was an increase in the number of pollsters drawing their samples from registered voter lists rather than through traditional random digit dialing. Although voter list sampling has been around for a long time, the quality of the voter databases has improved in the past few years, making them more attractive as sample sources. Nevertheless, regardless of methodology, many polls underestimated the size of the Republican victory this year, in contrast to 2012, when polls had the opposite problem: they tended to underestimate the performance of Democratic candidates.

In July you told us the center has embarked on a program of research and experimentation in survey methods. Can you tell us more about that?

For the better part of a year, we’ve been conducting a multi-faceted investigation of old and new methods to better understand how we can respond to the growing challenges and opportunities facing the survey world. This investigation covers three broad areas. The first is how we can keep our core methodology – principally random digit dial telephone surveys – up to date and accurate. The second is how a nationally representative research panel of adults can serve our research needs. The third is whether, and under what conditions, surveys based on “non-probability” samples – those gathered without using random sampling – can provide accurate data on the topics we want to study. In addition, we’ve been developing our capabilities with an assortment of data collection tools and techniques, such as mail surveys, apps for mobile devices and the use of text messaging as a means of communicating with respondents.

What is “non-probability” sampling?

Non-probability samples are those selected in such a way that we cannot estimate the chance (or probability) that any given member of the population would be included in the sample. An example would be a sample made up of people who volunteered to participate in surveys in exchange for airline miles or other rewards. We don’t know if everyone in the population of interest had a chance to volunteer, or would even be interested in volunteering if they had the chance. Such samples have important limitations, including the fact that we can’t compute a traditional margin of sampling error in the same way we do with probability samples. Understanding how accurate a given finding might be is essential to assessing its importance.
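For a concrete sense of what a margin of sampling error is, here is a minimal sketch of the standard textbook calculation for a proportion estimated from a simple random sample. It illustrates the general concept only; it is not a description of Pew Research Center’s actual estimation procedures, and the function name and figures are chosen purely for this example.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of sampling error for a proportion p
    estimated from a simple random sample of size n (95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50% estimate from a 1,000-person probability sample carries roughly
# a +/- 3 percentage point margin of sampling error.
print(round(margin_of_error(0.5, 1000) * 100, 1))  # about 3.1
```

With a non-probability sample, the selection probabilities are unknown, so the sample size alone does not support this kind of statement about uncertainty; that is the limitation described above.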

With this kind of limitation, why would we use these kinds of samples?

The principal reason is cost. Non-probability samples are much cheaper than probability samples. Of course, if the quality of the data is very low, the fact that it’s cheap doesn’t do us much good. But a growing number of respected researchers believe that there are ways to improve the quality of the data, and that there are circumstances under which these kinds of samples can provide useful information. For example, they are being used in conjunction with probability samples to help provide larger numbers of respondents from certain rare populations, such as pregnant women or people in unusual occupations.

How are you going about this research?

We’re working with a range of research organizations in the survey community to experiment with new kinds of methods and data that can augment our work. The biggest effort currently underway is a collaboration with SurveyMonkey and Westat.

Westat is one of the preeminent survey organizations in the world. It conducts numerous large-scale, complex, policy-related studies for the federal government and other clients. Although it is strongly committed to the use of probability sampling methods, it believes the time is right to examine what non-probability methods can offer.

SurveyMonkey is the world’s most widely used online survey software, with more than 2.7 million surveys taken each day. At the end of selected surveys, respondents are invited to take additional surveys. Their respondent pool is well suited to our initial experimentation because of the diversity of people who take surveys on their platform. Participants drawn from that pool aren’t paid to take these surveys; instead they are offered the opportunity to donate a small amount of money to a charity they choose from a list. Although SurveyMonkey uses non-probability sampling methods, they are interested in understanding and reducing potential biases and are willing to share the fruits of this effort with the research community.

In our collaboration, each of the three organizations involved in this work has conducted a survey drawing on a common core of measures: SurveyMonkey conducted a survey of its non-probability panel; Westat conducted a survey with a probability sample of U.S. households selected using address-based sampling; and the Pew Research Center conducted a survey with its probability-based American Trends Panel. Additional comparisons are available from Pew Research Center telephone surveys.

How – and when – will you draw your conclusions?

The first step in this work is to compare estimates across the different sample sources. A second is to examine the effectiveness of different statistical approaches used to improve the estimates from the non-probability sample.
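The interview does not name the specific statistical approaches being tested, but one common family of adjustments for non-probability samples is demographic weighting, such as post-stratification. The sketch below is a hedged illustration of that general idea, not the collaboration’s actual method: it reweights a toy sample so its age mix matches assumed population benchmarks, with all groups, shares and responses hypothetical.

```python
# Hypothetical post-stratification: reweight a non-probability sample so its
# age distribution matches assumed population benchmarks, then re-estimate.
population_share = {"18-29": 0.22, "30-49": 0.34, "50-64": 0.26, "65+": 0.18}

respondents = [
    {"age_group": "18-29", "supports": True},
    {"age_group": "18-29", "supports": True},
    {"age_group": "30-49", "supports": False},
    {"age_group": "50-64", "supports": True},
    {"age_group": "65+", "supports": False},
]

# Each group's share of the unweighted sample
n = len(respondents)
sample_share = {}
for r in respondents:
    sample_share[r["age_group"]] = sample_share.get(r["age_group"], 0) + 1 / n

# Weight = population share / sample share for the respondent's group
for r in respondents:
    r["weight"] = population_share[r["age_group"]] / sample_share[r["age_group"]]

# Unweighted vs. weighted estimate of the survey measure
unweighted = sum(r["supports"] for r in respondents) / n
weighted = (sum(r["weight"] for r in respondents if r["supports"])
            / sum(r["weight"] for r in respondents))
print(round(unweighted, 3), round(weighted, 3))
```

In practice such adjustments are applied across many variables at once (for example, via raking), and the adjusted estimates are then compared with benchmarks from probability surveys, which is the kind of comparison this collaboration is set up to make.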

We don’t yet have a timetable, but I expect the first reports to be issued this winter or early spring, and we hope to present some of the findings at the American Association for Public Opinion Research conference in May 2015.

What does success look like in experimental efforts like these?

Our short-term goal is to better understand the variation in results we get from alternative methods and what features of those methods affect the results. As we gain this understanding, we may be able to identify ways for these methods to supplement our current approaches. In the longer term, success would mean that we find ways to use these alternative methodologies interchangeably with the standard methods we currently rely on. But it is worth stressing that while we hope that can happen, we will let the evidence speak for itself.

Drew DeSilver is a senior writer at Pew Research Center.