U.S. Survey Research
For most of our national surveys of the general public, we conduct telephone surveys using a random digit sample of landline and cellphone numbers in the continental United States. Some of our surveys include additional, larger samples of subgroups, such as African Americans or young people (these are called “oversamples”). We also occasionally conduct surveys of people in particular states or regions, where our sample is limited to residents of these areas. Pew Research Center also conducts international surveys that involve sampling and interviewing people in multiple countries. Lastly, we sometimes survey special populations, such as foreign policy experts, scientists or journalists. In all of our surveys, we use probability sampling to help ensure adequate representation of the groups we survey.
Probability and non-probability sampling
A sample is sometimes described as a model of the population – a smaller version of the larger whole. The goal in sampling is to use a small number of objects, usually people, to represent the larger group from which they are drawn. Most of the surveys at the Pew Research Center entail samples designed to represent the entire adult population of the U.S. or another country.
There are two broad ways to draw a sample for a survey: probability sampling and non-probability sampling. Most samples used at Pew Research Center are probability (also called random) samples, so-called because nearly every person in the population of interest has a known, non-zero chance of being selected for the sample. By contrast, non-probability samples, by definition, are drawn in such a way that it is impossible to assign a probability of selection to the members of the population.
At the heart of this difference lies the critical advantage of probability sampling: It permits us to calculate how likely it is that a given sample differs from the population on any question of interest, and by how much. These calculations yield the margin of sampling error and the confidence level. For example, on a typical telephone survey of 1,500 members of the U.S. adult population, the margin of sampling error is plus or minus 2.9 percentage points at the 95 percent level of confidence.
Non-probability sampling does not permit the computation of a margin of sampling error in the same way that probability sampling does. As a result, there is much greater uncertainty about the accuracy of results from such samples. But they may have some advantages over probability samples for some purposes that offset this weakness. This section describes how both kinds of samples work and how they may be used.
Probability sampling in action
A majority of Pew Research Center surveys are conducted among the U.S. general public by telephone using a sampling method known as random digit dialing or “RDD.” This method ensures that all telephone numbers in the U.S. – whether landline or cellphone – have a known chance of being included. As a result, samples based on RDD should be unbiased, and a margin of sampling error and a confidence level can be computed for them.
Other populations of interest are sampled using different methods. For many elite populations, such as scientists or foreign policy experts, a list with email addresses may be available. A random sample can be selected from such a list and invitations to participate sent via email. To the extent that the list adequately covers the population of interest, this method also theoretically results in an unbiased sample for which a margin of sampling error can be computed.
Many of our international surveys are conducted face-to-face in people’s homes. The samples are selected using a version of area probability sampling, in which every dwelling unit in a country has a known chance of inclusion. (Read more in the international section.)
The margin of error for a sample is dependent on four main factors: the size of the sample; the variability of the item being measured; the effect of weighting and the sample design (captured by the design effect); and the proportion of the total population being sampled. Of these, the size of the sample is by far the most important. The margin of error declines as the size of the sample gets larger, but the relationship is not linear. Because of how the margin of sampling error is calculated, decreases in the margin of error get smaller as the sample grows. This is why so many polls have samples of around 1,000 or so. Adding another 1,000 interviews for a total of 2,000 reduces the margin of error by only about 1 percentage point.
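The relationship between sample size and the margin of error can be sketched with the standard formula. This is an illustrative sketch, not code from any actual survey system; the default design effect of 1.0 and the design effect of about 1.3 used to approximate the 2.9-point figure above are assumptions.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    """95% margin of sampling error, in percentage points.

    n: number of completed interviews
    p: assumed proportion (0.5 is the most conservative choice)
    z: critical value (1.96 for 95% confidence)
    design_effect: inflation from weighting and sample design (assumed)
    """
    return 100 * z * math.sqrt(design_effect * p * (1 - p) / n)

# The decline is not linear: doubling the sample from 1,000 to 2,000
# shrinks the margin by only about 1 percentage point.
print(round(margin_of_error(1000), 1))  # ~3.1
print(round(margin_of_error(2000), 1))  # ~2.2
```

With an assumed design effect of roughly 1.3, a 1,500-interview survey lands near the plus-or-minus 2.9 points cited above; the exact figure for any real survey depends on its weighting and design.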
We report a margin of sampling error for the total sample for each survey and sometimes for key subgroups (e.g., registered voters, Democrats, Republicans, etc.). For example, the sampling error for a typical Pew Research Center national survey of 1,500 completed interviews is plus or minus approximately 3 percentage points at the 95% confidence level. This means that in 95 out of every 100 samples of the same size and type, the results we obtain would vary by no more than plus or minus 3 percentage points from the result we would get if we could interview every member of the population. Thus, the chances are very high (95 out of 100) that any sample we draw will be within 3 points of the true population value.
It is important to keep in mind that sampling error is only one of many potential sources of error in surveys.
A non-probability sample is one in which it is impossible to determine the chance that any individual in the population was selected. Lacking this information, we are uncertain as to how well the sample represents the population, and thus how important a given finding based on such a sample actually is. But the more convenient and less costly nature of such samples may nevertheless be useful in cases where the ability to generalize to the population with a known degree of accuracy may be less important. Survey research always entails a balance of considerations among costs, speed and data quality.
There are many kinds of non-probability samples. As the AAPOR Task Force on Non-Probability Sampling noted in its 2013 report, “Unlike probability sampling, there is no single framework that adequately encompasses all of non-probability sampling. Non-probability sampling is a collection of methods and it is difficult if not impossible to ascribe properties that apply to all non-probability sampling methodologies.”
Non-probability methods are used extensively in marketing research studies (mall intercept studies, opt-in panels), certain kinds of health research (e.g., “case-control” studies and clinical trials), policy evaluation studies and a growing number of political polls. In 2014, The New York Times and CBS News used online non-probability survey panels from UK-based research firm YouGov as part of their election polling efforts, and NBC News and SurveyMonkey have been conducting occasional joint polls of this variety.
In addition, nearly all survey researchers employ non-probability samples in certain circumstances. Most qualitative research relies on convenience samples (so-called for the ease of finding and recruiting participants) or purposive samples (where researchers attempt to structure the sample to reflect certain characteristics in the population, such as a balance of age or racial groups). One type of qualitative research commonly used in survey research is the focus group, in which a small number of individuals are brought together to discuss selected topics. These usually employ some type of convenience or purposive sampling rather than probability sampling because the precision and accuracy of the latter is not needed.
Non-probability samples are also commonly used in experiments, where the ability to generalize to the population with a known degree of accuracy is less important than the ability to measure the impact of an experimental treatment or condition (such as the use of certain words in a survey question, or the effect of a video on respondents compared with no video). Detecting the effects of an experimental treatment can be easier with larger samples, and non-probability panels provide a relatively inexpensive source of respondents who can be assigned to different experimental conditions.
Much of the controversy about non-probability sampling focuses on opt-in panels. The term “opt-in” refers to the fact that participants can volunteer to be a part of the panel, or are recruited from a variety of sources that collectively do not constitute the entire population of interest. Panelists are incentivized to join and participate using points, prizes, cash or contributions to charity. Once recruited, participants typically complete one or more surveys that collect demographic and other information about them, which can be used in subsequent surveys to select them for inclusion or to weight the results.
Nearly all opt-in panels use some type of statistical modeling to try to correct for biases in the underlying samples (see chapter 6 in the AAPOR Task Force report).
Despite its widespread adoption in marketing research, the use of non-probability sampling to make generalizations to the population is highly controversial among many people in the survey research community. The announcement last year that The New York Times and CBS News would partner with YouGov for some election polling spurred a wide range of reactions, including some that argued the news organizations were abandoning their high standards. Critics argue that opt-in panels cannot be used to make inferences about the characteristics, behaviors and attitudes of the general public because there is no theoretical basis for estimating the accuracy of such samples. A key concern is that the process of adjusting such samples to match the population on key variables of interest requires subjective judgments about the underlying models or the variables to be used for adjustment. The AAPOR Task Force report on non-probability sampling and the commentary it elicited is a good place to find a discussion about the controversy.
Pew Research Center study of non-probability sampling
Pew Research Center is engaged in an ongoing program of research on non-probability sampling to determine what its potential may be for the types of research that we conduct. Our largest effort in this arena is a collaboration with SurveyMonkey and Westat.
In our collaboration, each of the three organizations involved in this work has conducted a survey drawing on a common core of measures. SurveyMonkey conducted a survey of its non-probability panel, Westat conducted a survey with a probability sample of U.S. households selected using address-based sampling, and Pew Research Center conducted a survey with its probability-based American Trends Panel. Additional comparisons are available from Pew Research Center telephone surveys.
Preliminary findings from this collaboration will be presented at the annual conference of the American Association for Public Opinion Research in May 2015 and posted on our website.
Random digit dialing
The typical Pew Research Center survey selects a random digit sample of both landline and cellphone numbers in all 50 U.S. states and the District of Columbia. As the proportion of Americans who rely solely or mostly on cellphones for their telephone service continues to grow, sampling both landline and cellphone numbers helps to ensure that our surveys represent all adults who have access to either (only about 3% of households do not have access to any phone). We sample landline and cellphone numbers to yield a combined sample with approximately 25% of the interviews conducted by landline and 75% by cellphone. This ratio is based on an analysis that attempts to balance cost and fieldwork considerations as well as to improve the overall demographic composition of the sample (in terms of age, race/ethnicity and education). This ratio also ensures a minimum number of cell-only respondents in each survey.
The design of the landline sample ensures representation of both listed and unlisted numbers (including those not yet listed) by using random digit dialing. This method randomly generates the last two digits of telephone numbers selected on the basis of their area code, telephone exchange and bank number. A bank is defined as 100 contiguous telephone numbers, for example 800-555-1200 to 800-555-1299. The sample is proportionally stratified by county and by telephone exchange within each county. That is, the number of telephone numbers randomly sampled from within a given county is proportional to that county’s share of telephone numbers in the U.S. Only banks of telephone numbers containing three or more listed residential numbers are selected.
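The last-two-digits step can be sketched as follows, assuming a frame of eligible eight-digit bank prefixes has already been stratified and selected as described; the prefixes shown are hypothetical.

```python
import random

def rdd_from_banks(bank_prefixes, n, seed=None):
    """Random digit dialing within 100-number banks.

    Each prefix is the first eight digits of a number (area code +
    exchange + bank); e.g. '21255512' covers 212-555-1200 to -1299.
    Randomizing the last two digits gives listed and unlisted numbers
    in a selected bank the same chance of being dialed.
    """
    rng = random.Random(seed)
    return [f"{rng.choice(bank_prefixes)}{rng.randint(0, 99):02d}"
            for _ in range(n)]

# Hypothetical banks; a real frame would come from the stratified
# county/exchange selection described above.
sample = rdd_from_banks(["21255512", "31555598"], n=5)
```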
The cellphone sample is drawn through systematic sampling from dedicated wireless banks of 100 contiguous numbers and shared service banks with no directory-listed landline numbers (to ensure that the cellphone sample does not include banks that are also included in the landline sample). The sample is designed to be representative both geographically and by large and small wireless carriers.
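Systematic sampling from an ordered frame can be sketched as below; the geographically and carrier-sorted frame is an assumption for illustration, standing in for the actual frame of eligible wireless banks.

```python
import random

def systematic_sample(frame, n, seed=None):
    """Systematic sampling: pick a random start, then take every k-th
    element of the ordered frame. Sorting the frame first (e.g. by
    geography and carrier) spreads the selections across those
    dimensions, which is what makes the sample representative of
    large and small carriers alike."""
    rng = random.Random(seed)
    k = len(frame) / n            # sampling interval
    start = rng.random() * k      # random start in [0, k)
    return [frame[int(start + i * k)] for i in range(n)]
```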
Both the landline and cell samples are released for interviewing in replicates, which are small random samples of the larger sample. Using replicates to control the release of telephone numbers ensures that the complete call procedures are followed for the entire sample. The use of replicates also ensures that the regional distribution of numbers called is appropriate. This also works to increase the representativeness of the sample.
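The release of a sample in replicates can be sketched as follows; the field-management details around each batch are assumptions for illustration.

```python
import random

def make_replicates(numbers, replicate_size, seed=None):
    """Shuffle the full sample, then split it into small random
    subsamples (replicates). Releasing one replicate at a time lets
    interviewers apply the complete call procedures to every number
    before fresh sample is opened."""
    rng = random.Random(seed)
    shuffled = list(numbers)
    rng.shuffle(shuffled)
    return [shuffled[i:i + replicate_size]
            for i in range(0, len(shuffled), replicate_size)]
```

Because each replicate is itself a random subsample of the full sample, the regional mix of numbers dialed stays close to that of the whole sample at every stage of fieldwork.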
When interviewers reach someone on a landline phone, they randomly ask half the sample if they could speak with “the youngest male, 18 years of age or older, who is now at home” and the other half of the sample to speak with “the youngest female, 18 years of age or older, who is now at home.” If there is no person of the requested gender at home, interviewers ask to speak with the youngest adult of the opposite gender. This method of selecting respondents within each household improves participation among young people, who are often more difficult to interview than older people because of their lifestyles.
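The youngest-male/youngest-female procedure can be sketched as below; the household data structure is a hypothetical stand-in for what an interviewer would learn on the phone.

```python
import random

def select_respondent(household, seed=None):
    """Select the respondent in a landline household.

    Half the sample is asked for the youngest adult male now at home,
    the other half for the youngest adult female; if no one of the
    requested gender is home, fall back to the youngest adult of the
    other gender. Each person is a dict with 'sex', 'age' and
    'at_home' keys (a hypothetical representation).
    """
    rng = random.Random(seed)
    requested = rng.choice(["male", "female"])
    other = "female" if requested == "male" else "male"
    at_home = [p for p in household if p["age"] >= 18 and p["at_home"]]
    pool = ([p for p in at_home if p["sex"] == requested]
            or [p for p in at_home if p["sex"] == other])
    return min(pool, key=lambda p: p["age"]) if pool else None
```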
Unlike a landline phone, a cellphone is assumed in Pew Research polls to be a personal device. Interviewers ask if the person who answers the cellphone is 18 years of age or older to determine if he or she is eligible to complete the survey. This means that, for those in the cell sample, no effort is made to give other household members a chance to be interviewed. Although some people share cellphones, it is still uncertain whether the benefits of sampling among the users of a shared cellphone outweigh the disadvantages.
Currently, nearly half of Americans have only a cellphone. Because many people can no longer be reached by landline telephone, the representativeness of telephone surveys based only on a random sample of households with landline telephone service has come under increased scrutiny. Many pollsters and survey methodologists, including those at Pew Research Center, are studying how cellphones impact telephone surveying. Public Opinion Quarterly dedicated a special issue to the topic of cellphones in 2007: Cell Phone Numbers and Telephone Surveying in the U.S. Pew Research Center began routinely including a cellphone sample in nearly all of its surveys in 2008.
One of the main challenges of surveying cellphone users is drawing a representative sample of this group. Drawing samples for all telephone surveys is now more complicated because of the introduction of cellphone numbers and number portability (i.e., where people can keep their numbers when they move or change service providers and can port a landline number to a cellphone). Telephone numbers are assigned different prefixes, which can be used to identify whether the number is for a landline or cellphone, but there are also mixed or shared prefixes that include both landline and cell numbers. In addition, people who forward their calls (e.g., from their landline number at home or work to their cell) may appear as a landline number even when they are actually talking on their cellphone.
Most telephone surveys use the household as the sampling unit because landline telephone numbers have typically been shared among all members living in a household. Once a sample of landline telephone numbers is drawn, a separate selection procedure is used to give all adults living in a given household a chance of selection (such as asking for the youngest adult male or female).
However, the situation is more complicated for cellphone users because cellphones are often considered individual rather than shared devices, so the person who answers the phone usually becomes the respondent, whether he or she is the primary user of the phone or shares the cellphone with others. Although some surveyors have experimented with selecting among the users of a shared cellphone, it is still uncertain whether the benefits of this approach outweigh the disadvantages, such as potentially lower response rates. In addition, many people under the age of 18 (and thus not eligible for most national surveys) have cellphones. Substantial time and costs are incurred screening out these ineligible respondents.
Several additional issues arise when identifying the geographic location of a cellphone number. The geographic information that can be derived from cellphone numbers is not as precise as it is for landline telephone numbers. The boundaries of wireless service areas are often larger than landline service areas. The geographic information is based on the rate center where the phone was purchased, rather than where the person lives. And many people move without changing their cellphone numbers.
Based on a comparison of geographic information provided with the sample to that derived from respondents’ self-reported zip codes, as many as 10% of cellphone respondents live in a different state, and nearly 40% in a different county, than was indicated by the sample. This issue is of particular concern for sampling cellphones within a geographic area. Although respondents who do not live in the area may be identified by a screener question, people who do live in the area – but have cellphones from elsewhere – are likely to be excluded from the survey. In addition, although estimates of the cell-only proportion are now available for most states, it is still unclear how reliable these estimates are. Further, cellphone penetration rates are still not available for smaller geographic areas, such as counties. Because of this, it is difficult to sample cellphone numbers within these smaller geographic areas accurately and in proportion to their populations.
In addition to the different procedures necessary for sampling cellphone numbers, there are also substantial challenges with interviewing people on their cellphones. Challenges that arise in conducting cellphone interviews are discussed in more detail in the cellphone surveys section.
For some surveys, it is important to ensure that there are enough members of a certain subgroup in the population so that more reliable estimates can be reported for that group. To do this, we oversample members of the subgroup by selecting more people from this group than would typically be done if everyone in the sample had an equal chance of being selected. Because the margin of sampling error is related to the size of the sample, increasing the sample size for a particular subgroup through the use of oversampling allows for estimates to be made with a smaller margin of error. A survey that includes an oversample weights the results so that members in the oversampled group are weighted to their actual proportion in the population; this allows for the overall survey results to represent both the national population and the oversampled subgroup.
For example, African Americans make up 13.6% of the total U.S. population, according to the U.S. Census. A survey with a sample size of 1,000 would only include approximately 136 African Americans. The margin of sampling error for African Americans then would be around 10.5 percentage points, resulting in estimates that could fall within a 21-point range, which is often too imprecise for many detailed analyses surveyors want to perform. In contrast, oversampling African Americans so that there are roughly 500 interviews completed with people in this group reduces the margin of sampling error to about 5.5 percentage points and improves the reliability of estimates that can be made. Unless a listed sample is available or people can be selected from prior surveys, oversampling a particular group usually involves incurring the additional costs associated with screening for eligible respondents.
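Weighting an oversampled group back to its population share can be sketched as below; the 500-of-1,500 split is a hypothetical illustration consistent with the numbers above, not an actual survey design.

```python
def oversample_weights(n_total, group_counts, pop_shares):
    """Per-group weights that restore population proportions.

    weight_g = pop_share_g / sample_share_g, so after weighting each
    group's share of the sample matches its share of the population.
    """
    return {g: pop_shares[g] / (count / n_total)
            for g, count in group_counts.items()}

# Hypothetical survey: 500 of 1,500 interviews with an oversampled
# group that is 13.6% of the population; the rest of the sample
# covers the remaining 86.4%.
w = oversample_weights(1500, {"oversampled": 500, "rest": 1000},
                       {"oversampled": 0.136, "rest": 0.864})
```

After weighting, the oversampled respondents count for less than one interview each (and the rest for slightly more), so the weighted totals represent the national population while the unweighted 500 cases still support reliable subgroup estimates.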
An alternative to oversampling certain groups is to increase the overall sample size for the survey. This option is especially desirable if there are multiple groups of interest that would need to be oversampled. However, this approach often increases costs because the overall number of completed interviews needs to be increased substantially to improve the representation of the subgroup(s) of interest.
Many surveys conducted in the U.S. are not national in scope but instead are designed to represent residents of a single community, city, county, state or region. These surveys tend to have similar sampling procedures to many national surveys, but the people sampled are limited to the geographic area of interest. For example, residents of states can be sampled using the random digit dialing procedures described for our national surveys. However, sampling cellphones in a particular geographic area can introduce some error, as discussed in the section on sampling cellphones.
Pew Research Center has conducted a number of state surveys, especially in the context of upcoming presidential primaries. And, we have conducted some special surveys of metropolitan area residents of Philadelphia, New York and Washington.
Primary poll publications
- Iowa, NH Voters Heavily Courted, Dems Have Edge in Personal Contact Dec. 7, 2007
- GOP Race Unsettled in Politically Diverse Early States Dec. 4, 2007
- Democratic Primary Preview: Iowa, New Hampshire, South Carolina Dec. 3, 2007
- Primary Preview: Surveys in Iowa, New Hampshire and South Carolina Dec. 8, 2003
- New Hampshire Voters Fault Candidates, Media And TV Ads Feb. 2, 1996
- Forbes Draws Even With Dole In New Hampshire Jan. 29, 1996
- New Hampshire and The Nation Jan. 22, 1992
Other regional publications
- One Year Later: New Yorkers More Troubled, Washingtonians More On Edge Sept. 5, 2002
- Screening Likely Voters: A Survey Experiment May 18, 2001
- Voters Not So Angry, Not So Interested June 15, 1998
- Trust And Citizen Engagement in Metropolitan Philadelphia: A Case Study April 18, 1997
Elites and other special populations
Representative surveys can be conducted with almost any population imaginable. It is common for surveyors to want to collect information from experts or elites in particular fields (such as policymakers, elected officials, scientists or news editors) and other special populations (such as special interest groups, people working in particular sectors, etc.). The principles of drawing a representative sample are the same whether the sample is of the general population or some other group. Decisions must be made about the size of the sample and the level of precision desired so that the survey can provide accurate estimates for the population of interest and any subgroups within the population that will be analyzed.
Some special challenges arise when sampling these populations. In particular, it may be difficult to find a sampling frame or list for the population of interest and this may influence how the population is defined. In addition, information may be available for only some methods of contacting potential respondents (e.g., email addresses but not phone numbers) and may vary for people within the sample. If most members in the population of interest have internet access and email addresses are available for contacting them, the web often provides a convenient and inexpensive way to survey experts or other special populations.
Pew Research Center occasionally conducts surveys of opinion leaders, especially those in public policy roles. The opinion of elites is often compared with that of the general public to better determine whether these groups have similar or different opinions. In addition, Pew Research Center has conducted several surveys designed to be representative of a special population, including surveying scientists, journalists, Muslim Americans, Howard Dean’s campaign supporters during the 2004 presidential primary campaign, political campaign consultants and constituent groups from a sample of federal agencies.
- Public and Scientists’ Views on Science and Society Jan. 29, 2015 (scientists)
- Public Sees U.S. Power Declining as Support for Global Engagement Slips Dec. 3, 2013 (foreign policy experts)
- A Portrait of Jewish Americans Oct. 1, 2013 (Jewish Americans)
- A Survey of LGBT Americans June 13, 2013 (LGBT adults)
- The Rise of Asian Americans June 19, 2012 (Asian Americans)
- War and Sacrifice in the Post-9/11 Era Oct. 5, 2011 (veterans)