The analysis presented here suggests that modeling the electorate is likely to continue to vex pollsters, especially if no official record of past voting is available as an input to the models. As if to affirm this somewhat pessimistic conclusion, polls have failed to accurately predict winning candidates in several recent elections, including the 2015 race for governor in Kentucky, several 2014 U.S. races for Senate and governor, the 2015 British general election, the 2014 Scottish referendum on independence and the 2015 referendum in Greece on acceptance of the European Union’s terms for a bailout. In the 2012 U.S. presidential election, many surveys at both the state and national levels underestimated the share of the vote that Barack Obama would receive. Errors in modeling the likely electorate are suspected of contributing to many of these polling failures.

So, can likely voter models be improved? For the 2014 elections, this analysis found that the Perry-Gallup method improved the U.S. House forecast relative to relying on the preferences of all registered voters, or even the subset who simply said they intended to vote in the election. But it did not perform as well as other approaches that incorporated more variables or more-complex models. A new modeling approach that uses decision trees and machine learning with the same set of questions improved on the estimates, but may be difficult for most polling organizations to implement and describe to their audiences. Moreover, it remains to be seen how well the regression methods evaluated here (including those using machine learning) will perform when they are applied to future elections.
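As a rough illustration of how a Perry-Gallup-style cutoff works, the sketch below scores respondents on a battery of engagement questions and keeps the top-scoring share of the sample. The question names, the seven-item scale and the 40% turnout target are illustrative assumptions, not the exact items or values used in this analysis.

```python
# Rough sketch of a Perry-Gallup-style likely voter cutoff.
# The questions, scoring and 40% turnout target are illustrative
# assumptions, not the items actually used in this report.

def perry_gallup_score(respondent):
    """Sum one point for each 'likely voter' answer (0-7 scale)."""
    score = 0
    score += respondent.get("intends_to_vote", False)
    score += respondent.get("voted_in_last_election", False)
    score += respondent.get("follows_campaign_news", False)
    score += respondent.get("knows_polling_place", False)
    score += respondent.get("always_votes", False)
    score += respondent.get("thought_about_election", False)
    score += respondent.get("voted_in_precinct_before", False)
    return score

def likely_electorate(respondents, expected_turnout=0.40):
    """Keep the highest-scoring share of the sample equal to expected turnout."""
    ranked = sorted(respondents, key=perry_gallup_score, reverse=True)
    cutoff = int(round(len(ranked) * expected_turnout))
    return ranked[:cutoff]
```

The cutoff step is what distinguishes this family of methods: rather than classifying each respondent individually, it forces the size of the modeled electorate to match an assumed overall turnout rate.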

Consistent with previously published research (e.g., Rogers and Aida 2014), adding voter file records of past voting produced the greatest improvement in the forecasts. But this information is often difficult to incorporate into random-digit-dial (RDD) phone surveys, since an accurate match with the voter file requires gathering respondent names and addresses, and many RDD survey respondents are unwilling to provide that information during a telephone interview.
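In principle, the matching step is a join between survey records and the voter file; in practice it is far messier. A deliberately naive sketch, assuming exact matches on normalized name and address (commercial vendors use probabilistic record linkage, not exact matching), might look like:

```python
# Naive sketch of matching survey respondents to a voter file by
# normalized name and address. The field names and the "voted_2012"
# flag are illustrative assumptions, not this study's actual schema.

def normalize(name, address):
    """Crude normalization; real linkage handles typos, nicknames, moves."""
    return (name.strip().lower(), address.strip().lower())

def match_to_voter_file(respondents, voter_file):
    """Attach a validated-turnout record where an exact match exists."""
    index = {normalize(v["name"], v["address"]): v for v in voter_file}
    matched = []
    for r in respondents:
        record = index.get(normalize(r.get("name", ""), r.get("address", "")))
        matched.append({**r,
                        "voted_2012": record["voted_2012"] if record else None})
    return matched
```

Even this toy version shows why refusals are costly: a respondent who withholds a name or address simply cannot be matched, and their turnout record stays unknown.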

One solution is to use voter files as a sampling frame. This is becoming more common as the quality of state voter files and the commercial databases built upon them has improved. These commercial files often include telephone numbers and additional political, demographic and lifestyle data about households. But they may have significant biases, with highly mobile and lower-income individuals underrepresented (Jackman and Spahn 2015a, 2015b).

The ultimate goal of likely voter methods is to create an accurate model of the electorate, not necessarily to identify whether each individual in a survey will or will not vote. All of the methods examined here yield a predicted electorate that closely matches the actual 2014 electorate with respect to gender, age and race – three demographic variables that are correlated with vote choice. Some pollsters adjust their models by making assumptions about the turnout of groups within the population or by attempting to match the characteristics of previous electorates. But if those assumptions are incorrect, serious forecasting errors can occur, as some GOP pollsters discovered when turnout among African Americans in 2012 exceeded their predictions.
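One way to check how closely a modeled electorate matches a benchmark on variables like gender, age and race is to compare category shares directly. The functions below are a generic sketch; any shares plugged into them would be placeholders, not figures from this study.

```python
# Compare the demographic makeup of a modeled likely electorate with a
# benchmark electorate (e.g., from a voter supplement or exit poll).
# All data passed in below would be illustrative placeholders, not
# numbers from the report.
from collections import Counter

def composition(respondents, key):
    """Share of the group falling into each category of `key`."""
    counts = Counter(r[key] for r in respondents)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def max_discrepancy(modeled, benchmark):
    """Largest absolute gap between modeled and benchmark category shares."""
    return max(abs(modeled.get(cat, 0.0) - share)
               for cat, share in benchmark.items())
```

A small maximum discrepancy across gender, age and race is a necessary but not sufficient sign of a good model: as the paragraph above notes, an electorate can match on demographics yet still be built on turnout assumptions that prove wrong.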

Elections remain unique among the subjects that polls address, in part because they provide a definitive outcome against which to judge the polls’ accuracy. Considering the precision required and the challenges inherent in election polling, it is perhaps notable that the craft has been as successful as it has. National polls in the past several U.S. presidential cycles have been generally accurate in forecasting the partisan division of the vote, and state-level polls in 2012 were accurate enough to allow polling aggregators to forecast the outcome of the vote in all 50 states. But off-year elections in the U.S., especially 2014, were less kind to pollsters, and more recent national and international elections have raised further questions about whether polling can still accurately identify the electorate and its intentions.

The models examined here will need to be tested in future elections. Having panel data with information to validate turnout provides a strong basis for inference, but this is, in effect, a case study: one analysis in one election. This research focused on a particular midterm election, one that happened to have an unusually low turnout. Applying these models to other elections will reveal how well they can be generalized.