5 key things to know about the margin of error in election polls
In presidential elections, even the smallest changes in horse-race poll results seem to become imbued with deep meaning. But they are often overstated. Pollsters disclose a margin of error so that consumers can understand how much precision they can reasonably expect. But coolheaded reporting on polls is harder than it looks, because some of the better-known statistical rules of thumb that a smart consumer might think apply are more nuanced than they seem. In other words, as is so often true in life, it’s complicated.
Here are some tips on how to think about a poll’s margin of error and what it means for the different kinds of things we often try to learn from survey data.
1. What is the margin of error anyway?
Because surveys only talk to a sample of the population, we know that the result probably won’t exactly match the “true” result that we would get if we interviewed everyone in the population. The margin of sampling error describes how close we can reasonably expect a survey result to fall relative to the true population value. A margin of error of plus or minus 3 percentage points at the 95% confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value 95 of those times.
The margin of error that pollsters customarily report describes the amount of variability we can expect around an individual candidate’s level of support. For example, in the accompanying graphic, a hypothetical Poll A shows the Republican candidate with 48% support. A plus or minus 3 percentage point margin of error means that the true level of Republican support in the full population could plausibly lie anywhere within 3 points of 48% in either direction – i.e., between 45% and 51%.
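The familiar formula behind that range can be sketched in a few lines of Python. This is a simplified illustration that assumes a simple random sample; the sample size of 1,067 is borrowed from the subgroup example later in the piece, and 1.96 is the standard z-score for 95% confidence:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a single proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~1,067 yields the familiar plus-or-minus 3 points at p = 0.5
moe = margin_of_error(0.5, 1067)
print(round(100 * moe, 1))  # → 3.0
```

Note that the margin is largest at p = 0.5, which is why pollsters conventionally report that most conservative figure.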
2. How do I know if a candidate’s lead is ‘outside the margin of error’?
News reports about polling will often say that a candidate’s lead is “outside the margin of error” to indicate that it is greater than what we would expect from sampling error, or that a race is “a statistical tie” if it’s too close to call. But it is not enough for one candidate to be ahead by more than the margin of error reported for individual candidates (i.e., ahead by more than 3 points, in our example). To determine whether the race is too close to call, we need to calculate a new margin of error for the difference between the two candidates’ levels of support. The size of this margin is generally about twice that of the margin for an individual candidate, because if the Republican share is too high by chance, it follows that the Democratic share is likely too low, and vice versa.
For Poll A, the 3-percentage-point margin of error for each candidate individually becomes approximately a 6-point margin of error for the difference between the two. This means that although we have observed a 5-point lead for the Republican, we could reasonably expect their true position relative to the Democrat to lie somewhere between –1 and +11 percentage points. The Republican would need to be ahead by 6 percentage points or more for us to be confident that the lead is not simply the result of sampling error.
In Poll B, which also has a 3-point margin of error for each individual candidate and a 6-point margin for the difference, the Republican lead of 8 percentage points is large enough that it is unlikely to be due to sampling error alone.
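A sketch of how the roughly doubled margin arises, using Poll A’s hypothetical figures (Republican 48%, Democrat 43%) and an assumed simple random sample of 1,067. The formula includes the negative covariance between the two candidates’ shares that the text describes:

```python
import math

def margin_of_error_lead(p1, p2, n, z=1.96):
    """95% margin of error for the difference p1 - p2 between two
    candidates in the same poll, including the multinomial covariance."""
    variance = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(variance)

# Poll A: Republican 48%, Democrat 43%, assumed n of ~1,067
moe_lead = margin_of_error_lead(0.48, 0.43, 1067)
print(round(100 * moe_lead, 1))  # → 5.7, roughly twice the 3-point individual margin
```

The exact factor depends on the two shares, but for a close two-candidate race it sits near 2, which is why "about twice" is a serviceable rule of thumb.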
3. How do I know if there has been a change in the race?
With new polling numbers coming out daily, it is common to see media reports that describe a candidate’s lead as growing or shrinking from poll to poll. But how can we distinguish real change from statistical noise? As with the difference between two candidates, the margin of error for the difference between two polls may be larger than you think.
In the example in our graphic, the Republican candidate moves from a lead of 5 percentage points in Poll A to a lead of 8 points in Poll B, for a net change of +3 percentage points. But taking into account sampling variability, the margin of error for that 3-point shift is plus or minus 8 percentage points. In other words, the shift that we have observed is statistically consistent with anything from a 5-point decline to an 11-point increase in the Republican’s position relative to the Democrat. This is not to say such large shifts are likely to have actually occurred (or that no change has occurred), but rather that we cannot reliably distinguish real change from noise based on just these two surveys. The level of observed change from one poll to the next would need to be quite large in order for us to say with confidence that a change in the horse-race margin is due to more than sampling variability.
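Under the simplifying assumption that the two polls are independent and each carries about a 6-point margin on the lead itself, the two margins combine in quadrature, which is where the roughly 8-point figure comes from:

```python
import math

def margin_of_error_shift(moe_lead_a, moe_lead_b):
    """95% margin of error for the change in a candidate's lead between
    two independent polls: the two lead margins add in quadrature."""
    return math.sqrt(moe_lead_a ** 2 + moe_lead_b ** 2)

# Each poll has roughly a 6-point margin on the lead itself
moe_shift = margin_of_error_shift(0.06, 0.06)
print(round(100 * moe_shift, 1))  # → 8.5, i.e. roughly the 8-point figure in the text
```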
Even when we do see large swings in support from one poll to the next, one should exercise caution in accepting them at face value. From Jan. 1, 2012, through the election in November, HuffPost Pollster listed 590 national polls on the presidential contest between Barack Obama and Mitt Romney. Using the traditional 95% threshold, we would expect 5% of those polls (about 30) to produce estimates that differ from the true population value by more than the margin of error. Some of these might be quite far from the truth.
Yet often these outlier polls end up receiving a great deal of attention because they imply a big change in the state of the race and tell a dramatic story. When confronted with a particularly surprising or dramatic result, it’s always best to be patient and see if it is replicated in subsequent surveys. A result that is inconsistent with other polling is not necessarily wrong, but real changes in the state of a campaign should show up in other surveys as well.
The amount of precision that can be expected for comparisons between two polls will depend on the details of the specific polls being compared. In practice, almost any two polls on their own will prove insufficient for reliably measuring a change in the horse race. But a series of polls showing a gradual increase in a candidate’s lead can often be taken as evidence for a real trend, even if the difference between individual surveys is within the margin of error. As a general rule, looking at trends and patterns that emerge from a number of different polls can provide more confidence than looking at only one or two.
4. How does the margin of error apply to subgroups?
Generally, the reported margin of error for a poll applies to estimates that use the whole sample (e.g., all adults, all registered voters or all likely voters who were surveyed). But polls often report on subgroups, such as young people, white men or Hispanics. Because survey estimates on subgroups of the population have fewer cases, their margins of error are larger – in some cases much larger.
A simple random sample of 1,067 cases has a margin of error of plus or minus 3 percentage points for estimates of overall support for individual candidates. For a subgroup such as Hispanics, who make up about 15% of the U.S. adult population, the sample size would be about 160 cases if represented proportionately. This would mean a margin of error of plus or minus 8 percentage points for individual candidates and a margin of error of plus or minus 16 percentage points for the difference between two candidates. In practice, some demographic subgroups such as minorities and young people are less likely to respond to surveys and need to be “weighted up,” meaning that estimates for these groups often rely on even smaller sample sizes. Some polling organizations, including Pew Research Center, report margins of error for subgroups or make them available upon request.
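The subgroup figures above can be reproduced with the same simple-random-sample formula. This is a simplification (a real weighted survey would have somewhat larger margins), but it shows how quickly the margin grows as the sample shrinks:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n_total = 1067
n_hispanic = round(0.15 * n_total)  # ~160 cases if represented proportionately
print(n_hispanic)                                 # → 160
print(round(100 * moe(0.5, n_hispanic), 1))       # → 7.7, i.e. roughly 8 points per candidate
print(round(100 * 2 * moe(0.5, n_hispanic), 1))   # → 15.5, i.e. roughly 16 points for the lead
```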
5. What determines the amount of error in survey estimates?
Many poll watchers know that the margin of error for a survey is driven primarily by the sample size. But other factors also affect the variability of estimates. For public opinion polls, a particularly important contributor is weighting. Without adjustment, polls tend to overrepresent people who are easier to reach and underrepresent those who are harder to interview. In order to make their results more representative, pollsters weight their data so that it matches the population – usually based on a number of demographic measures. Weighting is a crucial step for avoiding biased results, but it also has the effect of making the margin of error larger. Statisticians call this increase in variability the design effect.
It is important that pollsters take the design effect into account when they report the margin of error for a survey. If they do not, they are claiming more precision than their survey actually warrants. Members of the American Association for Public Opinion Research’s Transparency Initiative (including Pew Research Center) are required to disclose how their weighting was performed and whether or not the reported margin of error accounts for the design effect.
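One common way to estimate the design effect is Kish’s approximation: 1 plus the squared coefficient of variation of the weights. The sketch below uses purely hypothetical weights to show how weighting shrinks the effective sample size and inflates the margin of error:

```python
import math

def kish_design_effect(weights):
    """Kish's approximate design effect for a set of survey weights:
    n * sum(w^2) / (sum(w))^2, i.e. 1 + (CV of the weights)^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Hypothetical weights: half the sample weighted up, half weighted down
weights = [1.5] * 500 + [0.5] * 500
deff = kish_design_effect(weights)
n_eff = len(weights) / deff  # effective sample size after weighting
moe_weighted = 1.96 * math.sqrt(0.25 / len(weights)) * math.sqrt(deff)

print(round(deff, 2))             # → 1.25
print(round(n_eff))               # → 800
print(round(100 * moe_weighted, 1))  # → 3.5, versus ~3.1 unweighted
```

A design effect of 1.25 means a nominal sample of 1,000 carries only as much information as a simple random sample of 800, which is exactly the sort of precision loss an honest margin of error should reflect.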
It is also important to bear in mind that the sampling variability described by the margin of error is only one of many possible sources of error that can affect survey estimates. Different survey firms use different procedures or question wording that can affect the results. Certain kinds of respondents may be less likely to be sampled or respond to some surveys (for instance, people without internet access cannot take online surveys). Respondents might not be candid about controversial opinions when talking to an interviewer on the phone, or might answer in ways that present themselves in a favorable light (such as claiming to be registered to vote when they are not).
For election surveys in particular, estimates that look at “likely voters” rely on models and predictions about who will turn out to vote that may also introduce error. Unlike sampling error, which can be calculated, these other sorts of error are much more difficult to quantify and are rarely reported. But they are present nonetheless, and polling consumers should keep them in mind when interpreting survey results.
Category: 5 Facts
Topics: 2016 Election, Elections and Campaigns, Research Methods, Telephone Survey Methods, Web Survey Methods

Andrew Mercer is a senior research methodologist at Pew Research Center.
Anonymous • 3 weeks ago
The margin of error seems to apply only to sampling error. How do you calculate the error associated with nonresponse? One would think it would be substantially larger than the margin of sampling error, given that (a) response rates are in the single digits combined with (b) the theoretical possibility that nonresponse is systematic. Are you required by organizations such as AAPOR to report the nonresponse margin of error as well?
Anonymous • 3 weeks ago
Mr. Mercer, Thank you for your details on how the pollsters calculate their findings. In your opinion what as a reader/consumer of information should I believe is the validity of a poll that states no margin of error when announcing their results?
Andrew Mercer • 3 weeks ago
One should be cautious when no margin of error is reported for a poll. If the results are being reported by a third party (such as in an op-ed or on a blog), you may be able to find the margin of error by going to the website of the organization that originally conducted or commissioned the survey. You may also be able to find it listed on one of the websites that aggregate polls.
Charles Montgomery • 3 weeks ago
1). I’m confused by this part: “But taking into account sampling variability, the margin of error for that 3-point shift is plus or minus 8 percentage points.”
How did you calculate this 8 percentage point MOE? Could you give another example.
2). I also noticed an error on the axis labels for the chart on the left. The tick marks include 45 twice.
Bruce Drake • 3 weeks ago
Thanks for the heads-up. It’s being fixed.
Andrew Mercer • 3 weeks ago
The answer to your first question is a bit technical, but if two surveys have the same margin of error, the margin of error for the shift in one candidate’s lead between two polls will be slightly less than 3 times the margin of error for a single candidate that pollsters usually report. When the two surveys have different margins of error, the calculation is more complicated. More than a specific formula, the main thing to keep in mind is that changes in a candidate’s lead from one survey to the next have much more variability than many people realize.
Anonymous • 3 weeks ago
I find one thing troubling. Besides the sample size, the margin of error is influenced by the pq relationship. Is it 50-50 or something like 93-7 (or 7-93)? The reported margin of error should be called the “maximum margin of error.” The +/- 3 percentage points reported for a candidate at an estimate of 50% in a survey of 1,000 is not correct for the candidate at, say, 6%. In a review like this, I feel this is more important, and more accessible to the general reader, than a discussion of the effects of weighting. Survey statisticians and journalists omit discussion of the pq relationship AND the fact that the theoretical foundation of margin of error calculations relies on an assumption of 100% response rates (instead of the 10 or 15% typical today) – notwithstanding the argument by a well-known late practitioner that the response rate is immaterial if those responding are assumed to be a “random sample” of the actual full sample. James P. Murphy – Stuart, Fla.
Andrew Mercer • 3 weeks ago
It is true that percentages closer to 0 or 100% have smaller margins of error. Pollsters report the margin of error for an estimate of 50% because it is the most conservative, and for most elections featuring two candidates, the levels of support tend to be reasonably close to 50% so the standard MOE is usually a pretty good approximation. Given all of the other kinds of error besides sampling that can affect survey estimates, it doesn’t hurt to err on the side of assuming a larger interval. The reason it’s so important to account for the effects of weighting when calculating the margin of error is precisely so that we do not assume that respondents are a random sample of the actual full sample. Weighting adjusts for known differences between respondents and nonrespondents, but it can have substantial effects on precision.
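A quick illustration of that point, using the simple-random-sample formula with a hypothetical sample of 1,000, as in the comment above:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
print(round(100 * moe(0.50, n), 1))  # → 3.1, the conservative figure pollsters report
print(round(100 * moe(0.06, n), 1))  # → 1.5, a candidate at 6% has a much smaller margin
```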