Pre-election polling is tricky work. A number of unknown factors can stand in the way of accurate predictions — problems with identifying registered and likely voters, uncertainties about voter turnout, and last-minute shifts in candidate preference. But estimating voter preferences in biracial elections has been especially difficult. Pre-election surveys, even those taken just days before voters go to the polls, often substantially underestimate support for white candidates in races where the other candidate is African-American.
This phenomenon, which some pollsters call “racial slippage,” was a factor in at least four highly competitive biracial contests during the 1980s and 1990s. In three of the four elections, independent media polls consistently over-predicted the margin of victory for the black candidates. And in the 1990 Helms-Gantt Senate race, the polls under-predicted the margin of victory for the white candidate. The main cause of these errors seems to have been the difficulty of measuring support for the white candidates. Two separate polls of likely voters taken in the final week of the 1989 campaign for governor in Virginia, for example, showed Democrat L. Douglas Wilder leading by 9 to 11 percentage points. Days later, Wilder won the election by less than 7,000 votes — a margin of four-tenths of a percentage point.
Pollsters acknowledge that estimating voter preferences in biracial elections is especially difficult, in part because some white respondents may be reluctant to say they will not support a black candidate.1
In this paper, we present another piece of the puzzle, based on insights that come from a unique survey experiment conducted by the Pew Research Center. Those who are reluctant to participate in telephone surveys — and therefore most likely to be missed in quick-turnaround polls that do not include callbacks and refusal conversions — are noticeably less sympathetic toward African-Americans. This suggests that pre-election surveys, which are bound by the time constraints of the election cycle, may underestimate support for white candidates in biracial elections in part because those who are less likely to participate in the polls are also more likely to oppose the black candidate.
Non-Response Bias in Survey Research
Survey non-response is widely recognized as a potential source of error that can reduce the accuracy of all types of polls. Most non-response in telephone surveys is attributable to two factors. First, some of the people (or households) in a sample are never reached, most likely because they are not at home or do not answer the telephone during the period when a poll is being conducted. Second, other people are reached but refuse to participate in the poll. All surveys are hampered by non-response. Even the National Election Studies and the General Social Survey — academic surveys that are based on in-person interviewing — tend to have non-response rates of 25 to 35 percent, and non-response in telephone surveys can, by several estimates, be at least 10 percentage points higher (Brehm 1993, pp. 16-17).2
There are several ways to lower the non-response rates in surveys. Polls can be conducted over a longer time period, which provides more opportunities to place calls to hard-to-reach people. In addition, survey organizations can attempt refusal conversions by calling back people who initially declined to participate in a poll and trying to gain their cooperation. Both of these measures increase the cost of conducting a survey, and both are especially difficult to implement in polls conducted over the course of only a few days, as many pre-election polls are.3
Non-response can bias survey estimates if those who do not participate in a survey hold substantially different attitudes than those who do participate.4 Since those who are truly “non-respondents” are never interviewed, it is difficult to measure the extent to which the opinions of respondents and non-respondents actually differ. It is possible, however, to compare those who readily agreed to participate in a poll with those who at first refused — people who are most likely to be left out in surveys that do not have either the time or resources required to attempt refusal conversions. That is the approach taken here.
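The mechanics of this concern can be sketched in a few lines. In the standard decomposition, the bias of a respondents-only estimate is approximately the non-response rate multiplied by the gap between respondents and non-respondents. The numbers below are purely hypothetical and are used only to illustrate the formula:

```python
# Illustrative sketch (hypothetical numbers): how non-response can bias a
# survey estimate. The bias of the respondent-only estimate equals the
# non-response rate times the respondent/non-respondent gap.

def nonresponse_bias(p_respondents, p_nonrespondents, nonresponse_rate):
    """Approximate bias of a respondents-only estimate of a proportion.

    bias = nonresponse_rate * (p_respondents - p_nonrespondents)
    """
    return nonresponse_rate * (p_respondents - p_nonrespondents)

# Suppose 55% of respondents but only 45% of non-respondents support a
# candidate, and 30% of the sample never responds (all numbers hypothetical):
bias = nonresponse_bias(0.55, 0.45, 0.30)
print(round(bias, 3))  # the poll would overstate support by 3 points
```

The formula makes clear why non-response is harmless when respondents and non-respondents hold similar views (the gap term is near zero) but matters when, as suggested below, the two groups differ systematically.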
The following analysis is based on polling conducted in the summer of 1997 by the Pew Research Center as part of a comparison of various survey methodologies. One component of the experiment was an extended refusal-conversion effort. All interview breakoffs and refusals were contacted again — and in many cases twice, if necessary — to attempt to complete the interview. In addition, many of those who refused to be interviewed after two calls were sent a conversion letter by priority mail before they were called a third time.
The results presented here offer new insights into a challenge that confronts all survey research — especially quickly conducted pre-election polls that may not have either the time or financial resources required to gain the cooperation of those who at first refuse to participate in telephone surveys. We compare the attitudes of two groups of respondents: “amenable respondents” who agreed to participate in the poll the first time they were contacted, and “reluctant respondents” who initially refused to participate and cooperated only after one or more callbacks.5 Because the largest differences between the two groups emerge on racial attitudes, the following analysis is restricted to white respondents only.
Comparing Amenable and Reluctant Respondents
In most respects, amenable respondents and reluctant respondents are remarkably similar to one another.6 The group of reluctant respondents does not contain disproportionately more or fewer men, minorities, or younger people (see Table 1). There were also no notable differences in level of education between the two groups, and responses to three knowledge questions do not offer consistent evidence that reluctant respondents are significantly less informed about current events. A slightly larger share of amenable respondents knew that former Senator Bob Dole had recently loaned Newt Gingrich money to pay off the House Speaker’s ethics fines (39% among amenable respondents, compared to 32% among reluctant respondents). But two other knowledge questions — concerning majority control of the House of Representatives and identification of Microsoft CEO Bill Gates — did not reveal any statistically significant differences between the two groups.
Amenable and reluctant respondents did differ on one demographic measure: income. Nearly one-third (31%) of the reluctant respondents had family incomes of $50,000 or more, compared to 24 percent of amenable respondents.
Reluctant respondents do not appear to be more suspicious than amenable respondents in how they view other people. There are no significant differences between the proportion in each group who agree that people can be trusted, are likely to take advantage of others, or are likely to be helpful. Nor do amenable and reluctant respondents differ significantly in their views toward public opinion polls. Roughly two-thirds in each group said that polls work for — rather than against — the “best interests of the general public” (66% among amenable respondents compared to 65% among reluctant respondents), although as many in each group (65% and 68%, respectively) doubted that a random sample of 1,500 people can “accurately reflect the views” of the American public.
Critics of media polls have argued that surveys overstate support for Democratic candidates and underestimate conservative opinions — possibly because conservatives are more likely to refuse to participate in polls.7 But a number of measures give no indication that reluctant respondents are significantly more conservative than amenable respondents. Both groups of respondents include comparable percentages of Democrats and Republicans, and of self-described liberals and conservatives. Questions on a range of political values also revealed no differences between amenable and reluctant respondents.
Sharp Differences on Racial Attitudes
The two groups hold strikingly different views, however, on several race-related questions, with reluctant respondents significantly less sympathetic than amenable respondents toward African-Americans. Three of four questions measuring racial attitudes revealed statistically significant differences of nine percentage points or more between the two groups. Just 15% of reluctant respondents said they hold a “very favorable” opinion of blacks, for example, compared to 24% of amenable respondents. Similarly, fully 70% of reluctant respondents agreed with the statement that blacks who “can’t get ahead in this country are mostly responsible for their own condition,” while just 21% agreed that racial discrimination is the “main reason why many black people can’t get ahead”. This compares with a much narrower 54% to 33% margin among amenable respondents.
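Whether a gap of this size is statistically significant depends on the sizes of the two groups, which the discussion above does not report. A standard two-sample z-test for a difference in proportions makes the calculation concrete; the group sizes in the example below (800 amenable, 300 reluctant) are hypothetical:

```python
# A two-sample z-test for a difference in proportions, the conventional way
# to test whether gaps like 24% vs. 15% are statistically significant.
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-sample z-test for p1 - p2; returns (z, two-sided p-value)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 24% "very favorable" among amenable vs. 15% among reluctant respondents;
# the group sizes (800 and 300) are hypothetical, not from the paper.
z, p = two_prop_z(0.24, 800, 0.15, 300)
print(round(z, 2), p < 0.01)
```

With groups of roughly these sizes, a nine-point gap comfortably clears the conventional thresholds for significance.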
The differences between amenable and reluctant respondents are equally large on a proposed national apology for slavery, an idea floated by President Clinton in the summer of 1997. Fully 68% of reluctant respondents said they opposed a national apology, compared to just 53% of amenable respondents.8
Race-of-interviewer effects seem to explain some — but not all — of the differences between amenable and reluctant respondents. Most of those who initially agreed to participate in the survey were called and interviewed by African-American interviewers (69%). In contrast, most of the reluctant respondents (66%), who were called back one or more times for a refusal conversion attempt, were called and ultimately interviewed by a non-black interviewer. The way some white respondents answer questions about racial issues may vary depending on the race of the person conducting the interview. Even in telephone surveys, white respondents have been found to be much less likely to reveal racially-biased attitudes when being interviewed by a black person (Cotter, Cohen, and Coulter 1982; Hatchett and Schuman 1975-76). Consequently, the differences in racial attitudes between amenable and reluctant respondents might reasonably be explained by the differences in the race of the interviewers between the two groups.
There are substantial race-of-interviewer effects on questions concerning racial issues, and these effects can be seen among both amenable and reluctant respondents. Amenable respondents who were interviewed by a non-black interviewer were more likely than those interviewed by a black interviewer to blame blacks for their own condition, and less likely to favor a national apology for slavery. Similarly, reluctant respondents who were interviewed by a non-black interviewer expressed less favorable views of blacks and were more strongly opposed to a slavery apology than those interviewed by a black interviewer.
Nonetheless, when comparing only those respondents who were interviewed by a non-black interviewer, thus controlling for any interviewer effects, reluctant respondents remain consistently less sympathetic toward blacks. The largest gap can be seen on the issue of a national apology for slavery. Reluctant respondents who were interviewed by a non-black interviewer opposed an apology by a margin of 74% to 21%, while amenable respondents interviewed by a non-black interviewer opposed it by a much narrower 59% to 33% margin. Statistically significant gaps are also apparent on two other race measures. On favorability toward blacks, 12% of reluctant respondents characterize their opinion as “very favorable” compared to 23% of amenable respondents. Fully 72% of reluctant respondents say blacks are responsible for their own condition, compared to 61% of amenable respondents.
Remarkably, on this same measure, there is a significant difference in opinion even between respondents who were interviewed by black interviewers. Two-thirds (66%) of the reluctant respondents blame blacks for their own circumstances compared to 51% of amenable respondents.
In fact, the differences between amenable and reluctant respondents on race questions are statistically significant even when a number of attitudinal and methodological factors are taken into account. The evidence for this is in Table 2, which presents the results of two multiple regression equations. Respondents’ overall opinion toward blacks is the dependent variable in one equation, and their views concerning why “many black people can’t get ahead” is the dependent variable in the other.
Both equations include variables controlling for a range of differences across respondents. As noted above, the survey data used in this analysis were collected as part of a broader comparison of methodologies. The “amenable respondents” analyzed here come from the standard, five-day survey which used a systematic but non-random selection procedure within households, while roughly 40% of the “reluctant respondents” analyzed here come from the more rigorous survey, which used a random-selection procedure. Therefore, the estimations include a dichotomous variable controlling for whether respondents were polled as part of the standard or rigorous survey. In addition, another variable is included to account for any race-of-interviewer effects. The estimations include several other controls, including variables for sex, age, education, income, region (a dummy variable for respondents from Southern states), and a measure of political ideology. Finally, a dummy variable is included to estimate the differences between amenable and reluctant respondents.
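The estimation strategy just described can be sketched in a few lines of code. The example below simulates data and fits one such equation by ordinary least squares; the variable names, coefficients, and data are all hypothetical, and only the structure — a reluctant-respondent dummy alongside controls — mirrors the specification in Table 2:

```python
# A minimal sketch (with simulated data) of the estimation described above:
# OLS regression of a racial-attitude measure on a dummy for reluctant
# respondents plus controls. All variables and effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

reluctant = rng.integers(0, 2, n)          # 1 = initially refused, later converted
black_interviewer = rng.integers(0, 2, n)  # 1 = interviewed by a black interviewer
ideology = rng.normal(0, 1, n)             # conservative (+) / liberal (-) scale

# Simulated outcome: attitude score shifted upward for reluctant respondents,
# mirroring the direction of the pattern reported in the paper.
y = (0.5 + 0.15 * reluctant - 0.10 * black_interviewer
     + 0.05 * ideology + rng.normal(0, 0.5, n))

# Design matrix with an intercept; the coefficient on `reluctant` estimates
# the amenable-vs-reluctant difference net of the other controls.
X = np.column_stack([np.ones(n), reluctant, black_interviewer, ideology])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("reluctant-respondent coefficient:", round(beta[1], 2))
```

The dummy-variable coefficient recovered here plays the same role as the amenable/reluctant term in the paper's equations: it isolates the group difference after the other covariates have been accounted for.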
Though the regression models have little predictive power, they provide further evidence for the main conclusions drawn here: as a group, reluctant respondents are significantly less sympathetic than amenable respondents toward blacks, even when political ideology, level of education, race of interviewer, and other factors are taken into account. On the question of why many blacks can’t get ahead, being a reluctant respondent is strongly and significantly (p < .01) related to seeing blacks themselves, rather than racial discrimination, as responsible for their current situation. This pattern is evident even when controlling for a number of other statistically significant predictors, including education, region, ideology, and race of interviewer. Similarly, reluctant respondents are on average less likely to hold a favorable opinion of blacks, although the results based on this question are somewhat weaker. In this estimation fewer variables are significantly related to favorability toward blacks, but the coefficient for those who initially refused to participate in the poll remains statistically significant (p < .01).
The sharp differences between amenable and reluctant respondents on race-related questions may offer new insights into the difficulties involved in pre-election polling in biracial elections. In a number of competitive biracial contests in recent decades, surveys conducted even a few days before voters went to the polls have substantially underestimated support for the white candidate. The results presented here suggest that this phenomenon, sometimes called “racial slippage,” may be due in part to the inability of quickly conducted pre-election polls to reach reluctant respondents — people who are less likely to participate in polls and, just as important, much less sympathetic toward African-Americans. Significant differences between amenable and reluctant respondents are evident on three of four questions involving race relations, even when race-of-interviewer effects and a number of other attitudinal factors are taken into account.
Nonetheless, the evidence presented here is only suggestive. The surveys used for this analysis were not themselves pre-election polls — rather, they were conducted during the summer of 1997 as part of a broader comparison of survey methodologies. Consequently, there is no direct evidence that the differences between amenable and reluctant respondents on racial issues would translate into similar differences in the voting behavior of these two groups in biracial elections. At the same time, the significant gaps between amenable and reluctant respondents on race-related questions are consistent with the pattern of underestimating support for white candidates in biracial contests. This suggests non-response may be an especially important concern for pre-election polling in such contests.
- Brehm, John. 1993. The Phantom Respondents: Opinion Surveys and Political Representation. Ann Arbor: University of Michigan Press.
- Cotter, Patrick, Jeffrey Cohen, and Philip B. Coulter. 1982. Race-of-Interviewer Effects in Telephone Interviews. Public Opinion Quarterly 46:278-284.
- Crespi, Irving. 1988. Pre-Election Polling: Sources of Accuracy and Error. New York: Russell Sage Foundation.