Q. On “news” sites, one reads daily that “43% of those surveyed think this” or “72% of … [name the group] are doing that.” Yet when one performs due diligence, the statistic recedes into insignificance, because so many surveys involve, say, only 1,146 respondents. With a population approaching 300 million, how can any responsible news source report such insignificant data?

A lot of people share your skepticism about sampling. It is not intuitively easy to grasp how a very small sample of a very large population can be accurate. But pollsters have a stock (if smart-aleck) reply: if you don’t believe in random sampling, ask your doctor to take all of your blood the next time you need a blood test. Indeed, sampling is used in many fields — by accountants looking for fraud, by medical researchers, and by manufacturers doing quality-control checks on their products. The key for survey sampling is that every person in the population (in our case, adults living in the U.S.) has a chance of being included, and that pollsters have a way to calculate that chance. Our samples are constructed so that nearly every telephone in the U.S. — cell phones as well as landlines — has an equal chance of being included. This permits us to put a margin of likely error on our findings and to say how confident we are in the result. Perhaps counterintuitively, that margin depends almost entirely on the size of the sample, not the size of the population it is drawn from: a random sample of about 1,100 adults yields a margin of error of roughly plus or minus 3 percentage points whether the population numbers 3 million or 300 million.
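As a concrete illustration of that arithmetic, here is a minimal sketch of the standard simple-random-sample formula in Python (the function name and default parameters are my own choices for illustration, not Pew's internal code):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    p is the assumed proportion (0.5 is the conservative worst case).
    z is the critical value for the confidence level (1.96 for 95%).
    Note that the population size does not appear anywhere: once the
    population is much larger than the sample, it has virtually no
    effect on precision.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,146 respondents:
print(f"+/- {100 * margin_of_error(1146):.1f} points")  # +/- 2.9 points
```

For a finite population the formula picks up a correction factor of sqrt((N - n) / (N - 1)), but with N around 300 million and n = 1,146 that factor works out to about 0.999998, which is why the size of the country is, for practical purposes, irrelevant. (Published margins for real surveys are usually a bit larger than this textbook figure, because they also account for weighting and design effects.)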

But all of this great statistical theory would be for naught if we could not demonstrate that our polls are accurate. One of the toughest tests comes in elections: do the polls accurately predict the outcome? The answer is yes. In 2004, Pew Research’s final pre-election poll estimated that President Bush would get 51% of the vote and that John Kerry would get 48%, matching the final margin exactly. In 2008, our final estimate missed the actual result by only one percentage point. And while we are proud of our accuracy, many other national polls also did well in both years. Indeed, the average across all national polls came very close to the final margin in both years.
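The election test can also be rehearsed in miniature as a sanity check on the theory above. The simulation sketch below (all parameters are assumptions for illustration, borrowing the 51% figure from the 2004 example) repeatedly polls 1,146 simulated voters and counts how often the estimate lands within the roughly plus-or-minus 3 point margin; the answer comes out near the promised 95%, no matter how large the underlying electorate is:

```python
import random

def coverage(true_share=0.51, n=1146, moe=0.029, trials=5_000):
    """Fraction of simulated polls of size n whose estimate of
    true_share falls within +/- moe of the truth."""
    hits = 0
    for _ in range(trials):
        # Each respondent is an independent draw. With hundreds of
        # millions of adults, sampling with or without replacement
        # is indistinguishable for all practical purposes.
        est = sum(random.random() < true_share for _ in range(n)) / n
        hits += abs(est - true_share) <= moe
    return hits / trials

print(f"{coverage():.1%} of simulated polls land within the margin")  # ~95%
```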

Scott Keeter, Director of Survey Research, Pew Research Center