For a recent study on automated accounts and Twitter, we had to answer a fundamental question: Which accounts are bots and which accounts aren’t? Read a Q&A with Stefan Wojcik, a computational social scientist at the Center and one of the report’s authors, on how he and his colleagues navigated this question.
U.S. adults are mostly against government action that could limit people’s ability to access and publish information online. There is more support for steps by technology companies.
Read key findings and watch a video about our new study on how bot accounts affect the mix of content on Twitter.
An estimated two-thirds of tweeted links to popular websites are posted by automated accounts – not human beings.
People in 38 countries were asked how often they use the internet – as well as social networking sites like Facebook and Twitter – to get news. Specifically, they were asked whether they did each activity several times a day, once a day, several times a week, once a […]
Predictions from experts about truth and misinformation online in 2027, from @pewresearch and @ImagineInternet.
Experts are split on whether the coming years will see less misinformation online. Those who foresee improvement hope for technological and societal solutions. Others say bad actors using technology can exploit human vulnerabilities.
People deal in varying ways with tensions about what information to trust and how much they want to learn. Some are interested and engaged with information; others are wary and stressed.
Many experts say a lack of trust won't hinder increased public reliance on the internet. Some expect trust to grow as technological and regulatory changes arise; others think it will worsen or perhaps change entirely.
Lee Rainie discussed the Center's latest findings about how people use social media, how they think about news in the Trump era, how they try to establish and act on trust, and where they turn for expertise in a period when so much information is contested.