Why and how we’re weighting surveys for past presidential vote
This piece explains why, when and how we are weighting our surveys on Americans’ past vote.
By Arnold Lau, a research methodologist focusing on survey methodology
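The full post lays out the specific adjustment we use. As a rough illustration of the general idea, aligning a sample's past-vote distribution with a population benchmark, the Python sketch below post-stratifies synthetic data to hypothetical targets. The variable names, target shares and sample size are assumptions made for illustration, not the Center's actual benchmarks or production weighting code.

```python
# Illustrative sketch only: a minimal one-variable post-stratification
# adjustment on synthetic data. All names, shares and sizes are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic respondent file with an intentionally skewed past-vote variable.
sample = pd.DataFrame({
    "past_vote_2020": rng.choice(
        ["Biden", "Trump", "Other", "Did not vote"],
        size=5_000,
        p=[0.42, 0.28, 0.02, 0.28],  # over-represents one candidate's voters
    )
})

# Hypothetical population targets; real targets would come from benchmark
# sources such as certified election results and turnout estimates.
targets = {"Biden": 0.34, "Trump": 0.31, "Other": 0.02, "Did not vote": 0.33}

# Post-stratification: each respondent's weight is the ratio of the
# population share to the unweighted sample share for their category.
sample_shares = sample["past_vote_2020"].value_counts(normalize=True)
sample["weight"] = sample["past_vote_2020"].map(
    lambda cat: targets[cat] / sample_shares[cat]
)

# The weighted past-vote distribution now matches the targets.
weighted_shares = (
    sample.groupby("past_vote_2020")["weight"].sum() / sample["weight"].sum()
)
print(weighted_shares.round(3))
```

In a real weighting scheme, past vote would typically be one of several dimensions adjusted jointly, for example via raking, rather than the only variable used.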