Courtney Kennedy is director of survey research at Pew Research Center. In this role, she serves as the chief survey methodologist for the Center, providing guidance on all of its research and leading its methodology work. Prior to joining Pew Research Center, Kennedy served as vice president of the advanced methods group at Abt SRBI, where she was responsible for designing complex surveys, developing data collection methodologies and assessing data quality. Her work has been published in Public Opinion Quarterly, the Journal of Survey Statistics and Methodology and the Journal of Official Statistics. She has worked as a statistical consultant on the U.S. Census Bureau’s decennial census and on multiple reports appearing in Newsweek. Kennedy has a doctorate from the University of Michigan and a master’s degree from the University of Maryland, both in survey methodology. She received her bachelor’s degree from the University of Michigan. Kennedy has served as standards chair of the American Association for Public Opinion Research and regularly serves as an election night exit poll analyst for NBC News.
Phone vs. online surveys: Why do respondents’ answers sometimes differ by mode?
Pew Research Center conducts surveys over the phone and, increasingly, online. But these two formats don’t always produce identical results.
Comparing Survey Sampling Strategies: Random-Digit Dial vs. Voter Files
A new telephone survey experiment finds that an opinion poll drawn from a commercial voter file produces results similar to those from a sample based on random-digit dialing.
What are nonprobability surveys?
Many online surveys are conducted using “nonprobability” or “opt-in” samples, which are generally easier and cheaper to obtain. In our latest Methods 101 video, we explore some of the features of nonprobability surveys and how they differ from traditional probability-based polls.
Can we still trust polls?
Donald Trump’s victory in 2016 and the U.K. “Brexit” decision rattled public confidence in polls. Our new video explains why well-designed polls can be trusted.
How do you write survey questions that accurately measure public opinion?
In the second video from our Methods 101 series, we’re tackling why question wording is so important in public opinion surveys.
Personal finance questions elicit slightly different answers in phone surveys than online
People polled by telephone are slightly less likely than those interviewed online to say their personal finances are in “poor shape.”
How can a survey of 1,000 people tell you what the whole U.S. thinks?
The first video in our “Methods 101” series is about random sampling, a concept that undergirds all probability-based survey research. Here’s how it works.
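The idea behind random sampling can be illustrated with a small simulation. The sketch below (a hypothetical illustration, not Pew Research Center’s methodology; the 52% population share and the number of simulated polls are assumptions chosen for the example) repeatedly draws random samples of 1,000 from a large population and checks how often the sample estimate lands within 3 percentage points of the true value — the familiar margin of error for a poll of this size.

```python
import random

random.seed(42)

POPULATION_SHARE = 0.52  # assumed true share of the population holding an opinion
SAMPLE_SIZE = 1000       # respondents per simulated poll
NUM_POLLS = 2000         # number of simulated polls

estimates = []
for _ in range(NUM_POLLS):
    # Each respondent is an independent random draw from the population.
    hits = sum(random.random() < POPULATION_SHARE for _ in range(SAMPLE_SIZE))
    estimates.append(hits / SAMPLE_SIZE)

# For n = 1,000, the 95% margin of error is roughly
# 1.96 * sqrt(p * (1 - p) / n), or about +/- 3 percentage points.
within_3_points = sum(abs(e - POPULATION_SHARE) <= 0.03 for e in estimates)
print(f"Polls within 3 points of the truth: {within_3_points / NUM_POLLS:.0%}")
```

Running this shows that the vast majority of simulated polls land within a few points of the true population value — which is why a well-drawn random sample of 1,000 can describe a country of hundreds of millions.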
A basic question when reading a poll: Does it include or exclude nonvoters?
Opinion polls in the U.S. can address the same topic yet reach very different results. There are several reasons this can happen, but we tackle one of the most basic: Did the poll include or exclude the 45% who didn’t vote in November?
Flashpoints in Polling
Many people wonder: Can polls be trusted? The following essay offers a big-picture review of the state of polling, organized around several key areas of debate.
What we learned about online nonprobability polls
The advantages of these online surveys are obvious – they are fast and relatively inexpensive, and the technology for them is pervasive. But are they accurate?