U.S. Survey Research
One of the most prominent applications of survey research is election polling. In election years, much of the polling by Pew Research Center focuses on people’s engagement in the election, opinions about the candidates, views of the campaign and voter preferences. Even in the so-called “off years,” many of our polls include questions about party identification, past voting behavior or voter reactions to events.
Pre-election polling provides one of the few times when pollsters can assess the validity of their work by measuring how well their polls match election outcomes. But polls designed to measure voter intentions pose special challenges. How do you identify which respondents will actually vote? Are respondents honest when they tell us for whom they intend to vote? How will undecided voters make their final decisions?
Although election polls attract a great deal of attention for their ability to predict the outcome of elections, their most important function is to help journalists and citizens understand the meaning of the campaign and the election. Polls help to explain, among other things, what issues are important, how candidate qualities may affect voters’ decisions, and how much support there is for particular policy changes.
Identifying likely voters
One of the most difficult aspects of conducting election polls is determining whether a respondent will actually vote in the election. More respondents say they intend to vote than actually will cast a ballot. As a consequence, pollsters do not rely solely upon a respondent’s stated intention when classifying a person as likely to vote or not. Most pollsters use a combination of questions that measure intention to vote, interest in the campaign and past voting behavior. Different pollsters use different sets of questions to help identify likely voters.
Pew Research Center’s likely voter questions from 2012 can be found in our election questions in red type. We use nine questions to assign each respondent a score on the likely voter scale in our final pre-election poll. Earlier in the campaign, we use a somewhat shorter version of the scale to identify likely voters. For more extensive detail about likely voters in the 2012 presidential election and the scale used to determine who was most likely to vote see “Understanding Likely Voters.”
Pew Research Center’s likely voter scale is very similar to the method developed by election polling pioneer Paul Perry at the Gallup Organization and has been used successfully in many elections. To help assess which questions or combination of questions from the scale were most accurate in predicting voting, we conducted an experiment in a closely contested mayoral race in Philadelphia in 1999. The results from the experiment are summarized in “Screening Likely Voters: A Survey Experiment” and a more extensive discussion of how the likely voter scale is created and used is available in “Screening for Likely Voters in Pre-Election Polls: A Voter Validation Experiment.”
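Pew's actual nine-question instrument is not reproduced here, but the general Perry-style approach — score each respondent on turnout-related questions, then keep the highest scorers up to the expected turnout rate — can be sketched as follows. The specific questions, weights and field names below are illustrative assumptions, not the center's scale:

```python
# A minimal sketch of a Perry-style likely voter scale. The items and
# one-point weights are hypothetical stand-ins for the kinds of
# questions described above (intention, interest, past behavior).

def likely_voter_score(answers):
    """Sum one point for each turnout-predictive response."""
    points = 0
    if answers.get("intends_to_vote"):       # stated intention to vote
        points += 1
    if answers.get("interest") == "high":    # interest in the campaign
        points += 1
    if answers.get("voted_last_election"):   # past voting behavior
        points += 1
    if answers.get("knows_polling_place"):   # practical engagement
        points += 1
    return points

def classify_likely_voters(respondents, expected_turnout):
    """Rank respondents by score and keep the share of the sample
    matching the expected turnout rate (a Perry-style cutoff)."""
    ranked = sorted(respondents, key=likely_voter_score, reverse=True)
    cutoff = round(len(ranked) * expected_turnout)
    return ranked[:cutoff]
```

The cutoff step is what distinguishes this approach from simply trusting stated intentions: because more people say they will vote than actually do, only the top of the scale is treated as the likely electorate.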
These publications contain the results of the Pew Research Center’s final presidential election forecast polls:
- Obama Gains Edge in Campaign’s Final Days Nov. 4, 2012
- Obama Leads McCain 52% to 46% in Campaign’s Final Days Nov. 2, 2008
- Slight Bush Margin in Final Days of Campaign Oct. 31, 2004
- Popular Vote A Tossup: Bush 49%, Gore 47%, Nader 4% Nov. 6, 2000
- Slight Bush Margin Holding with Days To Go Nov. 5, 2000
- Final Pew Center Survey – Clinton 52%, Dole 38%, Perot 9% Nov. 3, 1996
- Voters Still Paying More Attention to Perot Oct. 30, 1992
Determining voter preference
Determining voter preference among the candidates running for office would appear to be a relatively simple task: just ask voters whom they plan to support on Election Day. In fact, differences in how this question is asked and where it is placed in the questionnaire can affect the results. While most voters have made up their minds and are not likely to be affected by how the question is posed, many people have given less thought to the campaign or are genuinely ambivalent about the choices. For these voters, certain features of the question can make a difference.
The questions in Pew Research Center’s election questions in blue type were used by the center in its final poll of the 2012 presidential election to measure voter preference. The particular features of these questions reflect several choices:
- Both the presidential and vice presidential candidates are included in the questions.
- The party affiliation of each ticket is mentioned explicitly.
- In states where Gary Johnson and/or Jill Stein were on the ballot, they and their running mates are included in the choices read to respondents. (Other minor party candidates were not named in the questionnaire.)
- The order of presentation of the Democratic and Republican tickets is randomized so that some respondents hear the Democratic ticket first and others hear the Republican ticket first. The Johnson and Stein tickets always follow the major party candidates. In states where both are on the ballot, their order is also randomized.
These features are an effort to make the presentation of the options as similar as possible to what voters would actually experience when casting their ballots. Because Johnson and Stein were not on the ballot in all states, respondents were asked whether they favored these candidates only in the states where they were listed as an option. And the randomization of the order of the two major party tickets reflects the fact that ballot order may vary in different locations. However, this effort is not perfect, since not all state ballots list the party affiliation of the tickets and not all states randomize ballot order. In addition, there are often other candidates on the ballot. Pew Research Center and most other national polling organizations make a judgment as to which third-party tickets should be included in their survey questions.
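As a rough illustration, the rotation described above can be sketched as follows. The helper function and its parameters are hypothetical; only the 2012 tickets themselves come from the text:

```python
import random

# Illustrative sketch of how the response options for a trial-heat
# question might be assembled: major-party tickets rotated, minor-party
# tickets included only where on the ballot and always read last.

def build_ticket_order(johnson_on_ballot, stein_on_ballot, rng=random):
    major = ["Obama/Biden (Democrat)", "Romney/Ryan (Republican)"]
    rng.shuffle(major)                 # rotate the major-party tickets
    minor = []
    if johnson_on_ballot:
        minor.append("Johnson/Gray (Libertarian)")
    if stein_on_ballot:
        minor.append("Stein/Honkala (Green)")
    if len(minor) > 1:
        rng.shuffle(minor)             # rotate minor tickets where both appear
    return major + minor               # minor tickets always follow the majors
```

Averaged across many interviews, the rotation cancels out any advantage a ticket might gain from being read first.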
Pollsters face an even more difficult challenge in primary elections, where the number of candidates is often very large and many include names that may be unfamiliar to voters. Long lists of candidates can create difficulties in a telephone survey, and the effect of a candidate’s position on the ballot can be more consequential than in a general election contest where fewer candidates are listed. In general, our practice in primary elections is to read all of the candidates that remain in the race, randomizing the order of the names.
Two other choices in the Pew Research Center’s election questions are important to note:
- The trial heat questions are asked very early in the questionnaire, prior to any substantive questions about politics other than voter registration, political engagement and past voting history. This is done to avoid the possibility of affecting the voter’s choice by raising considerations such as issues, candidate personalities or other factors. While all of these may ultimately be relevant to the voter’s choice, there is no guarantee that the things we mention will be the ones most important when a voter finally makes a choice among the candidates. Thus, it is important to make the choice as “clean” as possible.
- Some voters will not express an initial choice so they are asked a follow-up “leaner” question (in the center’s election questions) in an effort to obtain a preference. People who lean toward one of the candidates are typically included in the tabulation of voter preference.
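The leaner follow-up can be illustrated with a small tabulation sketch: when a respondent gives no initial choice, the answer to the leaner question is counted instead. The field names here are assumptions, not Pew's actual variable names:

```python
from collections import Counter

# Illustrative tabulation that folds "leaners" into the preference
# totals, as described above. "choice" holds the initial answer (None
# if undecided); "lean" holds the hypothetical leaner follow-up.

def tabulate_preference(respondents):
    counts = Counter()
    for r in respondents:
        answer = r.get("choice") or r.get("lean")  # fall back to the leaner item
        counts[answer or "undecided"] += 1
    return counts
```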
Pew Research Center also reports the size of the so-called “swing vote” – defined as voters who are either undecided, only leaning to a candidate, or who say they might change their mind before Election Day. In addition to discussing the swing vote in many of our election reports, “Swing Voters Slow to Decide, Still Cross-Pressured” describes a more extensive analysis, conducted in the late stages of the 2004 campaign, of the size of the swing vote and how swing voters who were identified in earlier surveys responded when re-contacted in mid-October.
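Under the definition above, identifying swing voters reduces to a three-part test. This sketch uses hypothetical field names for the three criteria:

```python
# Illustrative classification of the "swing vote" as defined above:
# undecided, only leaning to a candidate, or might change their mind.
# Field names are assumptions, not Pew's actual variables.

def is_swing_voter(resp):
    return (
        resp.get("choice") is None             # undecided
        or resp.get("leaning_only", False)     # expressed only a lean
        or resp.get("may_change_mind", False)  # could still switch
    )

def swing_vote_share(respondents):
    """Fraction of the sample classified as swing voters."""
    swing = sum(1 for r in respondents if is_swing_voter(r))
    return swing / len(respondents)
```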
One final issue in determining voter preference is the question of whether respondents will always answer honestly when asked for their choice in an election. For the most part, this has not proven to be a problem, as most election polling has been very accurate. However, a small percentage of people – typically less than 5% – will refuse to answer the vote choice question.
A pattern of polling errors during the 1980s and 1990s in elections involving African-American candidates raised the question of whether some people are reluctant to say that they are voting against a black candidate. Alternatively, there is the possibility that members of some demographic groups that are more likely to be racially conservative are also disproportionately likely to refuse to participate in surveys (see “Perils of Polling in Election ’08” for more information). If so, this could potentially produce a bias in the poll’s estimate of the outcome of the election. Regardless of what caused it, polls in many of these elections tended to understate the level of support for the white candidate.
This phenomenon is sometimes called the “Bradley effect” because it was first observed in the 1982 California gubernatorial election between Tom Bradley, a black Democrat, and George Deukmejian, a white Republican. Pew Research Center has examined the question of whether polling in elections continues to understate support for white candidates when they are running against black candidates. While the pattern was clear in the 1980s and earlier in the 1990s, more recent elections in 2006 showed little sign of the so-called “Bradley effect” (see “Can You Trust What Polls Say about Obama’s Electoral Prospects?” for more information).
Concerns about the Bradley effect had obvious relevance for the 2008 presidential election, both in the Democratic primaries and in the general election contest between Barack Obama and John McCain. There was, however, no evidence of systematic polling errors consistent with the Bradley effect in either the primaries or the general election (see “Perils of Polling in Election ’08” for more information).
Gauging the accuracy of election polls
Although polls have occasionally failed to predict who will win an election, polling’s track record is actually very good. The National Council on Public Polls has conducted analyses of presidential election polling accuracy from 1936 to the present and provides reports summarizing these results on its website. You can see Pew Research Center’s record on predicting the popular vote in presidential elections here.
The good track record of final pre-election polls does not mean that all pre-election polls are reliable. Polls conducted early in an election season should be taken as snapshots in time, and obviously cannot capture the impact of the campaign and events to come. This publication examines presidential election polls conducted well in advance of the election and attempts to gauge how predictive they are:
- How Reliable Are the Early Presidential Polls? Feb. 14, 2007
These publications provide a few tips to help in reading polls and deciding how much weight to give them:
- The Bounce Effect Sept. 11, 2008
- Another Bush Lead Vanishes? July 14, 2000
- So Who’s Ahead? April 14, 2000
National polls sometimes attempt to gauge how voters will vote in elections for the U.S. House of Representatives. Of course, there is no national election for the House; instead there are elections in each of the 435 congressional districts. But pollsters have found that the so-called “generic ballot test,” which asks whether respondents intend to vote for the Republican or the Democratic candidate in their local race for the House, can provide an accurate estimate of the vote on which projections about party gains and losses in seats can be based. The following publications illustrate the use of the generic ballot test and some of the issues involved in using it:
- Why The Generic Ballot Test? Oct. 1, 2002
- The Generic House Ballot and Committed Views on Clinton Oct. 7, 1998
- Voters Not So Angry, Not So Interested June 15, 1998
- Generic Congressional Measures Less Accurate In Presidential Years Sept. 18, 1996
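One common way to turn a generic-ballot estimate into seat projections — assumed here for illustration, not documented as Pew's method — is a uniform-swing calculation: shift every district's previous two-party share by the national swing implied by the poll and count the districts that cross 50%:

```python
# A hedged uniform-swing sketch. All inputs are illustrative; real
# projections involve many refinements (incumbency, open seats, etc.).

def project_seats(district_dem_shares, prior_national_dem, polled_national_dem):
    """Apply the national swing implied by the generic ballot to each
    district's previous Democratic two-party share, then count the
    districts where the shifted share exceeds 50%."""
    swing = polled_national_dem - prior_national_dem
    return sum(1 for share in district_dem_shares if share + swing > 0.5)
```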
Additional topics on election polling
A number of Pew Research Center publications provide additional examples of how election polls are conducted and used. The following publications illustrate in-depth pre-election polling where the goal is to analyze and classify voters, and to track changes in fundamental political attitudes and values over time.
In-depth voter analysis
- Beyond Red vs. Blue: The Political Typology (2014) June 26, 2014
- Political Polarization in the American Public June 12, 2014
- Beyond Red vs. Blue: The Political Typology (2011) May 4, 2011
- Beyond Red vs. Blue: The Political Typology (2005) May 10, 2005
- The 2004 Political Landscape Nov. 5, 2003
- Retro-Politics Nov. 11, 1999
- The People, the Press & Politics Sept. 21, 1994
- Electability: Bush, Clinton & Congress April 3, 1992
- Campaign ’92 Survey III Feb. 28, 1992
Presidential election primary state polling
Primary elections present special challenges. These reports are based on polls in early primary states or describe the issues involved in polling these contests.
- Iowa, NH Voters Heavily Courted, Dems Have Edge in Personal Contact Dec. 7, 2007
- GOP Race Unsettled in Politically Diverse Early States Dec. 4, 2007
- Democratic Primary Preview: Iowa, New Hampshire, South Carolina Dec. 3, 2007
- A Good Day for the Pollsters Jan. 29, 2004
- Poll Focuses on Democratic Primary Voters Dec. 9, 2003
- Primary Preview: Surveys in Iowa, New Hampshire and South Carolina Dec. 8, 2003
- Bradley Deficit Daunting; Bush Closer Jan. 28, 2000
- Independents Drive New Hampshire Poll Shifts Dec. 10, 1999
- Volatility Still to Come in New Hampshire Feb. 13, 1996
- New Hampshire Voters Fault Candidates, Media And TV Ads Feb. 2, 1996
- Forbes Draws Even With Dole In New Hampshire Jan. 29, 1996
- The Iowa Echo, And Playing For Second In New Hampshire Jan. 3, 1996
Voter reactions following presidential elections
Pew Research Center has regularly polled voters after presidential elections to gather reactions to the campaign and the outcome of the election. Many of these involve re-interviews with respondents from pre-election polls. These publications describe the findings from these surveys.
- Low Marks for the 2012 Election Nov. 15, 2012
- High Marks for the Campaign, a High Bar for Obama Nov. 13, 2008
- Voters Liked Campaign 2004, But Too Much ‘Mud-Slinging’ Nov. 11, 2004
- Some Final Observations on Voter Opinions Dec. 21, 2000
- Campaign 2000 Highly Rated Nov. 16, 2000
- Voters Side with Bush for Now Nov. 14, 2000
- Campaign ’96 Gets Lower Grades from Voters Nov. 15, 1996
- Voters Say ‘Thumbs Up’ To Campaign, Process & Coverage Nov. 15, 1992