Voter Joe Cieslo fills out an exit poll for the AP, CNN, Fox, NBC, CBS and ABC after voting at the Community Arts Center in Johnstown, Pennsylvania, on April 26, 2016. (Todd Berkey/The Tribune-Democrat via AP)

On Nov. 6, millions of Americans will hunker down in front of their TVs, boot up their computers or curl up with their mobile devices for a long evening of election-watching. Besides the results of hundreds of House, Senate and gubernatorial contests, these people will get plenty of analysis and commentary about what the voting patterns tell us about the state of the nation.

For more than a quarter-century, no matter which channel you were watching, much of that punditry ultimately derived from the same source: a nationwide survey of voters as they left their local polling places. That exit poll will occur this year too, sponsored by four major news networks and conducted by Edison Research. (In recent years, the polling-place interviews have been supplemented with pre-election phone interviews in states where a sizable share of the vote is cast via early, absentee or mail voting.)

But on election night 2018, there will be an additional source of data on who voted and why, developed by The Associated Press, Fox News and NORC at the University of Chicago and based on a very different methodology. That means that depending on where you go for election news, you may get a somewhat different portrait of this year’s electorate.

Those competing election-night efforts won’t be the last word on explaining the midterms. The Current Population Survey (conducted by the Census Bureau for the Bureau of Labor Statistics) will look at Americans’ self-reported registration and voting activity, and post-election surveys from various research groups will delve more deeply into voters’ thoughts and motives as they made their choices. Though such reports tend to get more attention from political scientists and other researchers than from the news media (perhaps because they come out months or even years after the election), they may provide a fuller, more accurate account of the who, how, what and why of the 2018 midterms.

Exit polls have a long history

By design, voting is private and anonymous, so the only way to find out who voted which way and why is to ask them. Nationwide exit polls have been around since 1972, when CBS conducted the first such survey. The other TV networks soon followed with their own exit polls, but rising expenses led them to pool their efforts beginning in 1989, when they formed Voter Research & Surveys (renamed Voter News Service, or VNS, in 1993 after AP joined; Fox came on board a few years later).

VNS conducted the exit poll and provided the results to the networks and AP throughout the 1990s. But after an error-strewn performance in the 2000 presidential election and a computer meltdown on election night 2002, VNS was dissolved and replaced by the National Election Pool, or NEP. The NEP members (CBS, NBC, ABC, CNN, Fox and AP) contracted with Edison Research (and, initially, Mitofsky International) to conduct the exit poll.

The core of the NEP exit poll, as the name suggests, involves surveying voters (via written questionnaires) as they leave their polling places. It’s a massive operation: In 2016, Edison interviewed about 85,000 people at nearly 1,000 locations across the country (along with about 16,000 phone interviews of early, absentee and mail voters). Voters at about 350 polling places answered a uniform national questionnaire; state-specific forms were fielded at the remaining 650 or so locations (between 15 and 50 per state).

By late afternoon on Election Day, teams from NEP’s member networks will start poring over the exit poll data. As more and more actual votes are tallied, Edison will update, or “re-weight,” the exit poll data to align with the actual vote. These updated data will be pushed out to the NEP members in several “waves” throughout the night. (Though the NEP member networks share the same exit poll data, each has its own “decision desk” to analyze the numbers and call races, combining them with other factors such as historical turnout patterns and partisan divisions.)
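
To illustrate the kind of adjustment involved, here is a deliberately simplified sketch, not Edison’s actual procedure (which is far more elaborate and not public at this level of detail): it rescales each respondent’s weight so that the weighted candidate shares in the sample match the counted vote.

```python
# Hypothetical illustration of "re-weighting" exit poll responses so that
# weighted candidate shares match the actual tallied vote. This is NOT
# Edison's real procedure, just a minimal post-stratification-style sketch.
from collections import defaultdict

def reweight(respondents, actual_shares):
    """respondents: list of dicts with 'candidate' and 'weight' keys.
    actual_shares: dict mapping candidate -> share of the counted vote.
    Returns a new list with weights scaled so weighted shares match."""
    weighted = defaultdict(float)
    for r in respondents:
        weighted[r["candidate"]] += r["weight"]
    total = sum(weighted.values())

    adjusted = []
    for r in respondents:
        current_share = weighted[r["candidate"]] / total
        factor = actual_shares[r["candidate"]] / current_share
        adjusted.append({**r, "weight": r["weight"] * factor})
    return adjusted

# Toy example: the raw sample leans too heavily toward candidate A.
sample = [{"candidate": "A", "weight": 1.0}] * 6 + \
         [{"candidate": "B", "weight": 1.0}] * 4
rebalanced = reweight(sample, {"A": 0.52, "B": 0.48})
print(sum(r["weight"] for r in rebalanced if r["candidate"] == "A"))  # ~5.2
```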

But the exit poll has been criticized for overestimating how big a share certain groups make up of the total electorate – particularly nonwhites, younger people and the college-educated. Since those groups tend to tilt liberal and Democratic, analyses that use historical exit poll data to predict an upcoming election can overstate Democrats’ chances of winning. (Exit poll veteran Murray Edelman has suggested that nonresponse bias – specifically, difficulty in getting older voters to fill out the questionnaires – is at the root of many of the observed problems.)
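
To see why this matters, consider a back-of-the-envelope calculation (the numbers below are invented purely for illustration): even a modest overstatement of a Democratic-leaning group’s share of the electorate nudges the implied topline toward Democrats.

```python
# Invented numbers, for illustration only: how overstating the size of a
# Democratic-leaning group shifts the implied overall Democratic vote share.
def dem_share(group_share, dem_in_group, dem_outside_group):
    """Overall Democratic share implied by a group's size and vote splits."""
    return group_share * dem_in_group + (1 - group_share) * dem_outside_group

# Suppose the group votes 60% Democratic and everyone else votes 47%.
true_estimate = dem_share(0.13, 0.60, 0.47)      # group is really 13% of voters
inflated_estimate = dem_share(0.19, 0.60, 0.47)  # poll pegs it at 19%
print(round(true_estimate, 3), round(inflated_estimate, 3))  # 0.487 0.495
```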

Another issue is whether any poll can adequately capture the rising share of voters who don’t cast ballots in person at traditional polling places on Election Day. Absentee voting, early voting and all-mail elections in a few states (Oregon, Washington and Colorado) have posed challenges to the traditional exit-poll model. In 2016, only 60% of voters – a new low – reported voting in person on Election Day, according to the Census Bureau; 21% said they had cast ballots by mail, and 19% said they had voted early in person. This year, according to Edison Research Executive Vice President Joe Lenski, the exit poll will include in-person interviews outside early-voting centers in Nevada and Tennessee, two states where early voting is particularly significant. (In 2016, more than 60% of all votes in those two states were cast early.)

A new model

AP and Fox News left the NEP last year, and this past May they announced that, in conjunction with NORC, they would field their own voter survey for the 2018 midterms. This new survey, which AP has dubbed VoteCast, will have three components: a phone and online survey of a random sample of registered voters drawn from state voter files; a larger survey of self-identified registered voters drawn from NORC’s probability-based panel; and an even larger nonprobability, opt-in survey conducted online. In total, AP expects to conduct more than 120,000 interviews, beginning four days before Election Day and continuing through the close of polls.

AP has been experimenting with alternatives to the traditional exit poll for several years. Last year, AP, Fox and NORC field-tested their new methodology in three statewide elections: regular elections for governor in Virginia and New Jersey, and a special U.S. Senate election in Alabama. Not only did the combined surveys predict the correct winner in all three races, but their estimates of the Democratic and Republican vote shares were within 4 percentage points of the actual results in all six instances (and within 2 points in four of them). For most demographic subgroups in the three states, the new survey produced results similar to those of the NEP exit polls. Where there were discrepancies, they tended to involve race, age and education – the dimensions on which the NEP exit poll has received the most criticism. The experimental methodology had shortcomings of its own, however: it underestimated the share of black voters in Alabama’s election, putting it at 23%, versus the 29% reported in state voter files.

Fox will use the VoteCast data in its election night analysis, and The Washington Post has signed up to receive VoteCast results in several states. ABC, CBS, NBC and CNN are sticking with the traditional exit poll.

Looking ahead

Well after all the races are called and the “decision desks” are put back into storage for another two years, researchers from such organizations as Pew Research Center, the Democracy Fund Voter Study Group and the Cooperative Congressional Election Study will be exploring in greater detail who voted and why. These efforts will take various forms: traditional general-population surveys immediately after the elections; panel studies; and analyses of voter file data after states make their official registration and voting data available (though that likely won’t happen for several months). All will help fill in the picture of the 2018 midterms.

One particularly useful tool is the Census Bureau’s Voting and Registration Supplement. Since 1964, the bureau has collected voting and registration data via the November Current Population Survey (the same survey that yields the monthly unemployment report). Those data typically are published in late spring or early summer of the year following the election. While the CPS lacks the timeliness of an exit poll, researchers have long considered it a more authoritative source of the electorate’s demographic information (race and ethnicity, education, income, marital status and so on).

Pew Research Center contributes to the understanding of what happened on Election Day in several ways. In the weeks following the election, we conduct surveys focused on how Americans (whether they voted or not) view the post-election political environment, and how self-reported voters assess their voting experience. Over the longer term, we use our American Trends Panel to take a detailed look at the electorate – first by asking panelists whether (and how) they voted in the 2018 midterms, then by collecting the official voter files as they’re made available in the months following the election and matching them to our panelists to identify who actually did and did not vote.

This detailed matching process produces a set of “validated voters” about whom we know a lot. We did this kind of analysis of the 2016 electorate and plan to produce a similar report on 2018.
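
The logic of that validation step can be sketched in a few lines of code. The example below is purely illustrative: it assumes a toy match key of name plus ZIP code, whereas real-world matching to commercial voter files relies on many more fields and far more sophisticated, often probabilistic, techniques.

```python
# Hypothetical sketch of matching survey panelists to a voter file to flag
# "validated voters." This toy version joins on a normalized name + ZIP key.
def match_key(record):
    """Build a crude match key from name and ZIP code."""
    return (record["first"].strip().lower(),
            record["last"].strip().lower(),
            record["zip"])

def validate_voters(panelists, voter_file):
    """Annotate each panelist with whether the voter file shows a 2018 vote."""
    on_file = {match_key(v) for v in voter_file}
    voted = {match_key(v) for v in voter_file if v.get("voted_2018")}
    return [{**p,
             "matched": match_key(p) in on_file,
             "validated_voter": match_key(p) in voted}
            for p in panelists]

# Toy data: one panelist matches a voted record, the other isn't on file.
panel = [{"first": "Ana", "last": "Lopez", "zip": "15901"},
         {"first": "Sam", "last": "Reed", "zip": "60601"}]
voter_file = [{"first": "ana", "last": "lopez", "zip": "15901",
               "voted_2018": True}]
for row in validate_voters(panel, voter_file):
    print(row["first"], row["validated_voter"])  # Ana True, Sam False
```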

Until then, perhaps the best advice we can offer consumers of survey data about the election is to recognize that different tools may sometimes give different results, because they’re aiming at different goals. The NEP and VoteCast efforts have many experienced and expert minds behind them, but they also have to deliver data quickly on election night to serve the demands of the news cycle. That alone creates significant design and execution challenges. The Census Bureau’s CPS has high standards for survey sampling and design rigor, but its data aren’t available until several months after the election. The CPS also relies on self-reported turnout and doesn’t ask people how they voted, what their political affiliation is or what issues influenced their vote. Pew Research Center’s voter-matching study takes even longer to produce, but it has the advantage of validated voting information and can overlay a wealth of demographic and attitudinal data atop voting behavior.

Drew DeSilver is a senior writer at Pew Research Center.