U.S. Survey Research

Collecting survey data

Survey researchers employ a variety of techniques to collect survey data. People can be contacted and surveyed through several different modes: by an interviewer in person or on the telephone (either a landline or cellphone), via the internet, or by paper questionnaires (delivered in person or by mail).

The choice of mode can affect who can be interviewed in the survey, the availability of an effective way to sample people in the population, how people can be contacted and selected to be respondents, and who responds to the survey. In addition, factors related to the mode, such as the presence of an interviewer and whether information is communicated aurally or visually, can influence how people respond. Surveyors are increasingly conducting mixed-mode surveys where respondents are contacted and interviewed using a variety of modes.

Survey response rates can vary for each mode and are affected by aspects of the survey design (e.g., number of calls/contacts, length of field period, use of incentives and survey length). In recent years surveyors have faced declining response rates for most surveys, which we discuss in more detail in the section on the problem of declining response rates.

In addition to landline and cellphone surveys, Pew Research Center conducts web surveys and mixed-mode surveys, in which people can be surveyed by more than one mode. We discuss these types of surveys in the following sections and provide examples from polls that used each method. Some of our surveys also involve reinterviewing people we have previously surveyed to see whether their attitudes or behaviors have changed. For example, in presidential election years we often recontact voters who were first surveyed earlier in the fall and interview them again after the election to understand how their opinions may have changed.

Cellphone surveys

Telephone surveys have traditionally been conducted only by landline telephone. However, now that almost one-in-four Americans have a cellphone but no landline telephone service, more surveys are including interviews with people on their cellphones. For certain subgroups, such as young adults, Hispanics and African Americans, the cell-only rate is even higher. Research has shown that as the number of cell-only adults has grown, so has the potential for bias in landline surveys that do not include cellphone interviews.

Growth in the Cell-Only Population by Demographics

Cellphone surveys are conducted in conjunction with a landline survey to improve coverage. The data are then combined for analysis. In addition to the issues associated with sampling cellphones, there are also unique challenges that arise when interviewing people on their cellphones.

One of the most important considerations when conducting cellphone surveys is that the costs are substantially higher than for a traditional landline survey. The cost of a completed cellphone interview is one-and-a-half to two times that of a completed landline interview. Although some of the fixed costs associated with landline surveys are not duplicated when a cellphone sample is added (such as programming the questionnaire), other costs are higher (data processing and weighting are more complex in dual-frame surveys).

Cellphone surveys are more expensive because of the additional effort needed to screen for eligible respondents. A significant number of people reached on a cellphone are under the age of 18 and thus are not eligible for most of our surveys of adults. Cellphone surveys also cost more because federal regulations require cellphone numbers to be dialed manually (whereas auto-dialers can be used to dial landline numbers before calls are transferred to interviewers). In addition, respondents (including those to Pew Research surveys) are often offered small cash reimbursements to help offset any costs they might incur for completing the survey on their cellphone. These payments, as well as the additional time necessary for interviewers to collect contact information in order to reimburse respondents, add to the cost of conducting cellphone surveys.

Most cellphones also have caller identification or other screening features that allow people to see the number that is calling before deciding to answer. People also differ considerably in how they use their cellphones (e.g., whether they are turned on all the time or used only during work hours or for emergencies). The respondents’ environment also can have a greater influence in cellphone surveys: Although people responding to landline surveys are generally at home, cellphone respondents can be virtually anywhere when receiving the call. Legal restrictions on cellphone use while driving, as well as concerns about safety, have also raised the question of whether people should be responding to surveys at all when they are behind the wheel. In addition, people often talk on their cellphones in more open places where they may have less privacy; this may affect how they respond to survey questions, especially those that cover more sensitive topics. These concerns have led some surveyors (including Pew Research Center) to ask cellphone respondents whether they are in a safe place and whether they can speak freely before continuing with the interview. Lastly, the quality of the connection may influence whether an interview can be completed at that time, and interruptions may be more common on cellphones.

Response rates are typically lower for cellphone surveys than for landline surveys. In terms of data quality, some researchers have suggested that respondents may be more distracted during a cellphone interview, but our research has not found substantive differences in the quality of responses between landline and cellphone interviews. Interviewer ratings of respondent cooperation and levels of distraction have been similar in the cell and landline samples, with cellphone respondents sometimes demonstrating even slightly greater cooperation and less distraction than landline respondents.

Related publications

Internet surveys

The number of surveys being conducted over the internet has increased dramatically in the last 10 years, driven by a steep rise in internet penetration and the relatively low cost of conducting web surveys compared with other methods. Web surveys have a number of advantages over other modes of interviewing. They are convenient for respondents, who can take them on their own time and at their own pace. The absence of an interviewer means web surveys tend to suffer less from social desirability bias than interviewer-administered modes. Web surveys also allow researchers to use a host of multimedia elements, such as having respondents view videos or listen to audio clips, which are not available in other survey modes.

Although more surveys are being conducted via the web, internet surveys are not without their drawbacks. Surveys of the general population that rely only on the internet can be subject to significant biases resulting from undercoverage and nonresponse. Not everyone in the U.S. has access to the internet, and there are significant demographic differences between those who do have access and those who do not. People with lower incomes or less education, those living in rural areas, and those ages 65 and older are underrepresented among internet users and among those with high-speed internet access (see our internet research for the latest trends).

There also is no systematic way to collect a traditional probability sample of the general population using the internet. There is no national list of email addresses from which people could be sampled, and there is no standard convention for email addresses, as there is for phone numbers, that would allow random sampling. Internet surveys of the general public must thus first contact people by another method, such as through the mail or by phone, and ask them to complete the survey online.

Because of these limitations, researchers use two main strategies for surveying the general population via the internet. One strategy is to randomly sample and contact people using another mode (mail, telephone or face-to-face) and ask them to complete a survey on the web. Some of these surveys allow respondents to complete the questionnaire in a choice of modes, which helps avoid the undercoverage created by the fact that not everyone has access to the web. This method is used for one-time surveys and for creating survey panels where all or a portion of the panelists take surveys via the web (such as the GfK KnowledgePanel and, more recently, Pew Research Center’s American Trends Panel). Contacting respondents using probability-based sampling via another mode allows surveyors to estimate a margin of error for the survey (see probability and non-probability sampling for more information).
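
As a rough illustration of what a margin of error represents, the sketch below computes the 95% margin of error for an estimated proportion under simple random sampling. The function name and figures are illustrative only; real surveys would also account for design effects from weighting, which inflate the margin of error.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion under simple random sampling.
    Illustrative sketch: weighting and clustering would inflate this in practice."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 with an estimate of 50% yields roughly +/- 3.1 percentage points.
print(f"+/- {margin_of_error(0.5, 1000):.1%}")
```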

Pew Research Center also has conducted internet surveys of random samples of elite and special populations, where a list of the population exists and can be used to draw a random sample. Then, the sampled persons are asked to complete the survey online or by other modes. For example, see the scientist survey reported in “Public and Scientists’ Views on Science and Society.”

Another internet survey strategy relies on convenience samples of internet users. Researchers use one-time surveys that invite participation from whoever sees the survey invitation online, or rely on panels of respondents who opt in, or volunteer, to participate in the panel. These surveys are subject to the same limitations facing other surveys that use non-probability samples: The relationship between the sample and the population is unknown, so there is no theoretical basis for computing or reporting a margin of sampling error and thus for estimating how representative the sample is of the population as a whole. (Also see the American Association for Public Opinion Research’s (AAPOR) Non-Probability Sampling Task Force Report and the AAPOR report on Opt-In Surveys and Margin of Error.) Many organizations are now experimenting with non-probability sampling in hopes of overcoming some of the traditional limitations these methods have faced. One example is sample matching, where a non-probability sample is drawn to have characteristics similar to those of a target probability-based sample, and the selection probabilities of the latter are used to weight the final data. Another is sample blending, whereby probability-based samples are combined with non-probability samples using specialized weighting techniques. Here at Pew Research Center we are closely following experiments with these methodologies, and conducting some of our own, to better understand the strengths and weaknesses of varying approaches.
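
To make the sample-matching idea more concrete, here is a loose, hypothetical sketch of the matching (drawing) step only: each case in a probability-based target sample is paired with a demographically similar, not-yet-used case from an opt-in pool. The data frames, column names and greedy matching rule are all assumptions for illustration; production implementations use far more sophisticated distance measures and also carry over weights, as described above.

```python
import pandas as pd

def match_sample(target: pd.DataFrame, pool: pd.DataFrame, match_vars: list) -> pd.DataFrame:
    """Hypothetical sample-matching sketch: for each row of the probability-based
    target sample, pick an unused opt-in case that agrees on as many of the
    matching variables as possible (assumes the pool is larger than the target)."""
    used, picks = set(), []
    for _, row in target.iterrows():
        candidates = pool.loc[~pool.index.isin(used)]
        for v in match_vars:                       # narrow by each variable in turn
            narrowed = candidates[candidates[v] == row[v]]
            if not narrowed.empty:
                candidates = narrowed
        pick = candidates.index[0]                 # best remaining match
        used.add(pick)
        picks.append(pick)
    return pool.loc[picks]
```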

Related publications

The problem of declining response rates

As Americans are now faced with more demands on their time, they are exercising more choice over when and how they can be contacted. The growth in the number of unsolicited telephone calls has also resulted in people employing more sophisticated technology for screening their calls (e.g., voice mail, caller identification, call blocking and privacy managers). This has resulted in fewer people participating in telephone polls than was the case when telephone surveys first became prevalent. As a consequence, response rates have continued to decline over the past decade.

Pew Research Center has conducted several survey experiments to gauge the effects of declining respondent cooperation on the validity of the results. These experiments compare responses from a standard survey, conducted with commonly used polling procedures over a five-day field period, with a survey conducted over a much longer period that employed more rigorous techniques aimed at obtaining a higher response rate and interviewing harder-to-reach respondents.

Findings from the 2012 study “Assessing the Representativeness of Public Opinion Surveys,” the 2003 study “Polls Face Growing Resistance, But Still Representative” and the 1997 study “Conservative Opinions Not Underestimated, But Racial Hostility Missed” indicate that carefully conducted polls continue to obtain representative samples of the public and provide accurate data about the views and experiences of Americans. These results are also reported in Public Opinion Quarterly.

Related publications

Mixed-mode surveys

Over the past decade, there has been a rise in mixed-mode surveys where multiple modes are used to contact and survey respondents. The increase in mixed-mode surveys has been driven by several factors, including declining response rates, coverage problems in single-mode surveys and the development of web surveys. Because there are now a variety of methods available, survey researchers can determine the best mode or combination of modes to fit the needs of each particular study and the population to be surveyed. However, when multiple modes are used for data collection, factors related to each mode, such as the presence of an interviewer and whether information is communicated aurally or visually, may affect how people respond.

Although Pew Research Center primarily conducts telephone surveys, we also occasionally conduct mixed-mode surveys, where people are surveyed by more than one mode. For example, we have conducted mixed-mode surveys of foreign policy experts and journalists, where respondents can complete the survey via the web or by telephone.

Related publications

Reinterviews

Reinterviews are typically used to examine whether individuals have changed their opinions, behaviors or circumstances (such as employment, health status or income) over time. Survey designs that include reinterviews are sometimes called panel surveys. The key feature of this survey design is that the same individuals who were interviewed at the time of the first survey are interviewed again at a later date. Pew Research Center sometimes conducts reinterviews, especially to learn more about whether and how voters’ opinions change during the course of a presidential election campaign. For an example from the 2012 presidential campaign, see “Low Marks for the 2012 Campaign.” For an example comparing foreign policy opinions before and after the events of Sept. 11, 2001, see “America’s New Internationalist Point of View.”

Some of the reports listed below used reinterviews primarily to ask follow-up questions about respondents’ opinions rather than to analyze opinion change on the same issues. Survey reports of this sort include “Beyond Red vs. Blue” and “Voters Like Campaign 2004, But Too Much ‘Mud-Slinging.'”

Related publications

Panel surveys

A survey panel is a sample of respondents who have agreed to take part in multiple surveys over time. Pew Research Center has used panels on a number of occasions and now has its own nationally representative survey panel known as the American Trends Panel.

Panels have several advantages over alternative methods of collecting survey data. Perhaps the most familiar use of panels is to track change in attitudes or behaviors of the same individuals over time. Whereas independent samples can yield evidence about change, it is more difficult to estimate exactly how much change is occurring – and among whom it is occurring – without being able to track the same individuals at two or more points in time.
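
A small, entirely hypothetical example of why this matters: in the two-wave panel sketched below, the overall (net) level of approval is identical in both waves, yet a fifth of the individual respondents changed their answer, something only a panel design could reveal.

```python
import pandas as pd

# Hypothetical two-wave panel of 1,000 respondents asked the same
# approve/disapprove question in wave 1 and again in wave 2.
wave1 = ["Approve"] * 500 + ["Disapprove"] * 500
wave2 = (["Approve"] * 400 + ["Disapprove"] * 100 +   # 100 moved away from approval
         ["Approve"] * 100 + ["Disapprove"] * 400)     # 100 moved toward approval
panel = pd.DataFrame({"wave1": wave1, "wave2": wave2})

print(pd.crosstab(panel["wave1"], panel["wave2"]))
# Net change is zero (50% approve in each wave), but 200 of 1,000 respondents
# (20%) switched answers -- individual-level change that independent samples would miss.
```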

A second advantage of a panel is that considerable information about the panelists can be accumulated over time. Because panelists may respond to multiple surveys on different topics, it is possible to build a much richer portrait of the respondents than is feasible in a single survey interview, which must be limited in length to prevent respondent fatigue.

Related to this is another advantage. Additional identifying information about respondents (such as an address) is often obtained for panelists, and this information can be used to help match externally available data, such as voting history, to the respondents. The information necessary to make an accurate match is often somewhat sensitive and difficult to obtain from respondents in a one-time interview.

A fourth advantage is that panels can provide a relatively efficient method of data collection compared with fresh samples because the participants have already agreed to take part in more surveys. The major effort required with a fresh sample – making an initial contact, persuading respondents to take part and gathering the necessary demographic information for weighting – is not needed once a respondent has joined a panel.

A fifth advantage is that it may be possible to survey members of a panel using different interviewing modes at different points in time. Contact information can be gathered from panelists (e.g., mailing addresses or email addresses) and used to facilitate a different interview mode than the original one, or to contact respondents in different ways to encourage participation.

But panels have some limitations as well. They can be expensive to create and maintain, requiring more extensive technical skill and oversight than a single-shot survey. A second concern is that repeated questioning of the same individuals may yield different results than would be obtained with independent or “fresh” samples. If the same questions are asked repeatedly, respondents may remember their answers and feel some pressure to be consistent over time. Respondents also might change their behavior because of the questions they have been asked; for example, questions about voting might spur them to register to vote. And respondents may become more skilled at answering particular kinds of questions. This can be beneficial in some instances, but to the extent it occurs, the panel results may differ from what would have been obtained from independent samples of people who have not had the practice in responding to surveys. A final disadvantage is that panelists may drop out over time, making the panel less representative of the target population if the kinds of people who drop out differ from those who tend to remain. For example, young people move more frequently and thus may be more likely to be lost to the panel when they move.

Probability and non-probability panels

Survey panels comprise many different types of samples. A fundamental distinction is between panels built with probability samples and those built with non-probability, or “opt-in,” samples (see probability and non-probability sampling for a discussion of what makes a probability sample).

Among both types of survey panels, the samples may be intended to represent the entire population or only a portion of it. Pew Research Center’s American Trends Panel (described below) is a nationally representative probability sample of U.S. adults. Another nationally representative panel is GfK’s KnowledgePanel. An example of a panel representing a subgroup of the population is the National Longitudinal Survey of Youth 1979. It is a nationally representative sample of young men and women who were 14-22 years old when they were first surveyed in 1979. This panel is a product of the U.S. Bureau of Labor Statistics.

There are numerous non-probability or opt-in panels in operation. The methods used to build the samples for these panels differ, but in most cases the panelists have volunteered to join the panel and take surveys in exchange for some type of modest reward, either for themselves or for a charity. These panels tend to be used more for market research than for opinion and policy research, but the distinction is not a sharp one. Two well-known opt-in panels are YouGov and SurveyMonkey Audience. Beyond the broader debate surrounding non-probability samples, opt-in panels are typically limited to people who already have internet access and thus do not represent the entire U.S. population.

The American Trends Panel

The American Trends Panel (ATP), created by Pew Research Center, is a nationally representative panel of randomly selected U.S. adults living in households. Respondents who self-identify as internet users (representing 89% of U.S. adults) participate in the panel via monthly self-administered web surveys, and those who do not use the internet participate via telephone or mail. The panel is being managed by Abt SRBI.

All current members of the American Trends Panel were originally recruited from the 2014 political polarization and typology survey, a large (n=10,013) national landline and cellphone random digit dial (RDD) survey conducted Jan. 23 to March 16, 2014, in English and Spanish. At the end of that survey, respondents were invited to join the panel. The invitation was extended to all respondents who use the internet (from any location) and a random subsample of respondents who do not use the internet.

Of the 10,013 adults interviewed, 9,809 were invited to take part in the panel. A total of 5,338 agreed to participate and provided either a mailing address or an email address to which a welcome packet, a monetary incentive and future survey invitations could be sent. Panelists also receive a small monetary incentive after participating in each wave of the survey.

Eight waves of interviews were conducted with the panel during 2014, with 4,265 panelists taking part in at least one survey. The average sample size for a wave during 2014 was about 3,200. Response rates across the waves varied within a relatively narrow range. Across the first six waves, between 60% and 65% of web-enabled panelists took part in each wave, and between 60% and 70% of the non-web group did so (by phone or by mail). After panelists who had never participated were purged, web response rates for the last two surveys were somewhat higher. Taking into account the response rate for the 2014 survey of political polarization from which the panelists were recruited (10.6%), the agreement rate to join the panel (54.4%) and the response rate to a given wave, the cumulative response rate for a typical wave is about 3.5%.
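
To make the arithmetic behind that cumulative figure explicit, here is a short worked calculation using the rates reported above; the 60% wave-level rate is an illustrative value drawn from the range given.

```python
# Cumulative response rate for a typical ATP wave, using figures from the text above.
recruitment_rr  = 0.106   # response rate of the 2014 polarization RDD survey
panel_agreement = 0.544   # share of invited respondents who agreed to join the panel
wave_rr         = 0.60    # illustrative wave-level response rate (60%-65% range)

cumulative_rr = recruitment_rr * panel_agreement * wave_rr
print(f"{cumulative_rr:.1%}")  # roughly 3.5%
```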

The ATP data are weighted in a multi-step process that begins with a base weight incorporating the respondents’ original survey selection probability and the fact that some panelists were subsampled for invitation to the panel. Next, an adjustment is made for the fact that the propensity to join the panel varied across different groups in the sample. The final step in the weighting uses an iterative technique that matches gender, age, education, race, Hispanic origin, telephone service, population density and region to parameters from the National Health Interview Survey, the 2010 decennial census and the 2012 American Community Survey. It also adjusts for party affiliation, using an average of the three most recent Pew Research Center general public telephone surveys, and for internet use, using a measure from the 2014 political polarization survey as the parameter. Sampling errors and statistical tests of significance take into account the effect of weighting. The Hispanic sample in the American Trends Panel is predominantly native born and English speaking. In addition to sampling error, one should bear in mind that question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.
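
The final, iterative stage of that weighting is commonly known as raking (iterative proportional fitting). The sketch below is a bare-bones, hypothetical version of that general idea, not Pew Research Center’s production code: it repeatedly adjusts the weights so that the weighted distribution of each variable matches its target, cycling through the variables until the adjustments become negligible. In practice this step would follow the base-weight and propensity adjustments described above, with targets drawn from sources such as the American Community Survey.

```python
import numpy as np
import pandas as pd

def rake(df, base_weights, targets, max_iter=100, tol=1e-6):
    """Minimal raking (iterative proportional fitting) sketch.

    targets maps a column of df to a Series of target population proportions
    indexed by that column's categories (all categories assumed present in df)."""
    w = base_weights.astype(float).copy()
    for _ in range(max_iter):
        biggest_adjustment = 0.0
        for col, target in targets.items():
            current = w.groupby(df[col]).sum() / w.sum()        # weighted shares now
            factors = (target / current).reindex(df[col]).to_numpy()
            w = w * factors                                      # pull toward target
            biggest_adjustment = max(biggest_adjustment, np.abs(factors - 1).max())
        if biggest_adjustment < tol:                             # all margins matched
            break
    return w * len(w) / w.sum()                                  # rescale to mean weight of 1
```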

Related publications