---
title: "Q&A: Do AI and bogus respondents threaten polling’s future?"
description: "Courtney Kennedy, vice president of methods and innovation, answers some common questions about the current polling landscape in the U.S."
date: "2026-05-12"
authors:
  - name: "John Gramlich"
    job_title: "Associate Director, Short Reads"
    link: "https://www.pewresearch.org/staff/john-gramlich/"
url: "https://www.pewresearch.org/short-reads/2026/05/12/qa-do-ai-and-bogus-respondents-threaten-pollings-future/"
categories:
  - "Artificial Intelligence"
  - "Methodological Research"
  - "Online Surveys"
  - "Research Explainers"
---

# Q&A: Do AI and bogus respondents threaten polling’s future?

At first glance, recent news stories may appear to spell doom for the polling industry.

Instead of asking real people for their opinions, [some companies](https://futurism.com/artificial-intelligence/ai-polls-silicon-sampling) are [asking artificial intelligence](https://www.nytimes.com/2026/04/06/opinion/ai-polling.html) what [people *would* think](https://www.motherjones.com/politics/2026/03/polling-artificial-intelligence-democracy-market-research-ai-surveys/). In other cases, bad actors are using AI to [fake survey responses at scale](https://www.pnas.org/doi/10.1073/pnas.2537420123). Some polls also face [serious data quality issues](https://www.pewresearch.org/short-reads/2024/03/05/online-opt-in-polls-can-produce-misleading-results-especially-for-young-people-and-hispanic-adults/) because of [bogus respondents](https://www.pewresearch.org/methods/2020/02/18/assessing-the-risks-to-online-polls-from-bogus-respondents/) who don’t take surveys in good faith.

It can be difficult for people outside the polling industry to understand what’s going on and which polls to trust. In this Q&A, Courtney Kennedy, Pew Research Center’s vice president of methods and innovation, answers some common questions about the current polling landscape in the United States.

#### **Some companies are now asking AI what the public thinks instead of asking people themselves – a practice sometimes called “silicon sampling.” Does Pew Research Center do that?**

![](https://www.pewresearch.org/wp-content/uploads/sites/20/2022/02/Courtney_Kennedy-jpg.webp?w=640)
*Courtney Kennedy, Pew Research Center vice president of methods and innovation*

No. We only interview real people. We don’t use AI to tell us what the public thinks. There are ethical and scientific concerns with using AI to replace humans in public opinion surveys.

#### **What do you see as the main problems with silicon sampling?**

Polling is fundamentally about humans – what they’re thinking and experiencing. Polls give the public a voice in politics, business and other areas. In the political realm, they let leaders know what hardships people are experiencing and what they’d like the government to do differently. If we stop polling people and just assume AI knows the answer, we risk misunderstanding what’s actually happening in the public.

There are scientific concerns, too. Many studies have explored how AI performs as a replacement for interviewing humans. We’ve done some of this research ourselves – for experimental learning only, not for reporting.

These studies have found that AI estimates tend to stereotype groups of people, have a harder time representing Republican viewpoints than Democratic ones, and understate the level of disagreement in public opinion.

In time, AI may more accurately imitate answers real people would give. Even so, a core philosophical point of polling is that we ask real people about their views.

#### **In addition to silicon sampling, bad actors are using AI to fake survey responses at scale. Is that a problem for Pew Research Center?**

No. That threat applies to “opt-in” surveys. Those are polls that people can proactively sign up to take – for example, by responding to social media ads offering a reward for taking a survey. Since it’s easy to adopt a fake identity online, opt-in polling [opens the door to AI](https://pmc.ncbi.nlm.nih.gov/articles/PMC12933150/) and bad actors seeking to [commit fraud](https://www.justice.gov/usao-nh/pr/eight-defendants-indicted-international-conspiracy-bill-10-million-fraudulent-market).

We don’t use opt-in surveys at Pew Research Center. We use probability-based sampling instead. That means we select real people in real life, not online. We start with a giant list of all U.S. home addresses and randomly select some of them. We initially contact people by snail mail, and each year we invite only a carefully selected sample of the public to take our surveys. Any one person’s chances of being selected are tiny, and you can’t self-enroll or nominate yourself to take our surveys. That means bad actors don’t have the ability to self-select into our panel.
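The key property of this design can be sketched in a few lines of code. This is a hypothetical illustration, not Pew Research Center’s actual sampling procedure: the address list, sample size and seed are all made-up stand-ins. The point is that the researcher draws the sample; no one can add themselves to it.

```python
import random

# Stand-in for a national frame of home addresses (hypothetical data).
addresses = [f"address_{i}" for i in range(1_000_000)]

random.seed(0)  # fixed seed so this illustration is reproducible

# The researcher randomly selects who gets invited; respondents cannot opt in.
invited = random.sample(addresses, k=10_000)  # sampling without replacement

print(len(invited))       # 10000 invitations
print(len(set(invited)))  # 10000 distinct addresses -- no duplicates
```

Because `random.sample` draws without replacement from a list the researcher controls, any one address’s chance of selection is tiny (here, 1 percent) and a bad actor has no way to insert a fake identity into the pool.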

#### **Can’t the people who take Pew Research Center polls use AI to answer questions?**

They could, but what matters is scale. Since anyone can sign up for opt-in surveys, bad actors can create [multiple fake accounts](https://www.justice.gov/usao-nh/pr/eight-defendants-indicted-international-conspiracy-bill-10-million-fraudulent-market) and take dozens or even hundreds of surveys each day to maximize the financial rewards they get.

Let’s say someone creates five AI bot accounts online and uses each of them to take 200 surveys a day, with a reward of $1 per survey. That person could hypothetically haul in $30,000 a month by using AI to speedily complete lots of opt-in surveys.

With a probability panel [like the one we use at Pew Research Center](https://www.pewresearch.org/the-american-trends-panel/), that kind of large-scale fraud isn’t possible. You can’t create multiple accounts or take surveys all day. Our respondents each have a single account, take an average of fewer than two surveys per month and receive an average of $11 per survey. Someone using AI to answer our surveys could hypothetically earn $22 per month – not exactly a huge payday for cheating.

Would someone intent on defrauding pollsters want to make $30,000 per month, or $22?
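The incentive gap described above can be checked with back-of-the-envelope arithmetic, using the figures from this Q&A (five bot accounts, 200 surveys a day each at $1; versus one account, about two $11 surveys a month):

```python
# Opt-in scenario: 5 bot accounts x 200 surveys/day each x $1 x 30 days.
opt_in_monthly = 5 * 200 * 1.00 * 30

# Probability-panel scenario: one account, ~2 surveys/month x $11 per survey.
panel_monthly = 2 * 11.00

print(opt_in_monthly)  # 30000.0
print(panel_monthly)   # 22.0
```

The scale difference, not the honesty of any one respondent, is what makes large-scale AI fraud uneconomical against a probability panel.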

#### **What about bogus respondents? What are they, and what threat do they present to polling?**

Bogus respondents are survey-takers who make no effort to answer questions truthfully and instead try to complete surveys as quickly as possible to get monetary rewards. One hallmark of bogus respondents is that they tend to give positive answers, like “yes” or “approve.” This tendency of bogus respondents has led to [false conclusions](https://orthodoxstudies.substack.com/p/what-the-quiet-revival-collapse-can) and news organizations needing to [retract stories](https://www.economist.com/united-states/2023/12/07/one-in-five-young-americans-thinks-the-holocaust-is-a-myth) based on [opt-in polls](https://www.usnews.com/news/health-news/articles/2023-07-13/did-americans-actually-drink-bleach-during-the-covid-19-pandemic).

The fundamental reason bogus respondents exist is that opt-in surveys generally encourage anyone interested to [sign up](https://account.yougov.com/gb-en/join/new/intro). Rigorous surveys do the opposite – the researcher carefully recruits people. People can’t just sign up on their own.

#### **Are opt-in surveys always problematic?**

No. In some cases, opt-in polls can produce results similar to those produced by probability-based polls. If the poll is only being used to measure the opinion of all adults on a given topic – say, approval or disapproval of the president – opt-in polls and probability-based ones may give similar results.

However, opt-in polls have been shown to generate erroneous data for [young adults](https://www.pewresearch.org/short-reads/2024/03/05/online-opt-in-polls-can-produce-misleading-results-especially-for-young-people-and-hispanic-adults/sr_24-03-04_opt-in-polls_2-png/) and estimates of relatively [rare behaviors](https://www.usnews.com/news/health-news/articles/2023-07-13/did-americans-actually-drink-bleach-during-the-covid-19-pandemic). Those rare behaviors include things like belief in [conspiracy theories](https://www.economist.com/united-states/2023/12/07/one-in-five-young-americans-thinks-the-holocaust-is-a-myth), identifying with an [Orthodox Christian church](https://orthodoxstudies.substack.com/p/what-the-quiet-revival-collapse-can), [military service](https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/fraud-in-online-surveys-evidence-from-a-nonprobability-subpopulation-sample/52CCFB8B9FEFC4C11155BE256F6D9116) and support for [political violence](https://pmc.ncbi.nlm.nih.gov/articles/PMC8944847/).

#### **So are probability polls always trustworthy?**

Not necessarily. It’s important to assess how respondents were recruited, but that’s not the only thing that matters. For example, in recent elections, there have been examples of probability-based polls that were not weighted properly and, as a result, were way off.

To be trustworthy, a poll has to be well-designed from start to finish. Using a probability-based sample is the best possible start, but other practices are important, too.

#### **Probability polls like the ones done by Pew Research Center tend to be more expensive than opt-in polls. Why is that?**

Our surveys are more expensive because getting participation from a randomly selected sample of Americans is time- and labor-intensive. We recruit people offline, in real life, via letters mailed to home addresses. We use random sampling so that nearly all U.S. adults have a chance of being selected for our surveys. And we allow people to answer our questions by web or phone (because research shows that some groups of Americans are reluctant to take surveys online). All those efforts to be rigorous cost money.