How we did this
This report is the culmination of a yearlong study into Americans’ news consumption habits in an era of rapidly evolving media technology and a look at how researchers can measure those news habits. This report was made possible by The Pew Charitable Trusts, which received support from the John S. and James L. Knight Foundation. Findings from this study are drawn from four major data collection efforts. See full study methodology for details.
First, cognitive interviews were conducted through RTI International in late February and early March 2020. These interviews were designed to obtain qualitative feedback on proposed survey questions and to gain preliminary, in-depth knowledge about the public’s understanding of emerging concepts around news consumption in a digital age. Some lightly edited quotations are included in the report. Second, survey experiments, conducted through Ipsos’ KnowledgePanel “omnibus” surveys on April 17-19 and April 24-25, 2020, were used to test different approaches to measuring news consumption. These questions emerged from discussions with experts in the field as well as data from the cognitive interviews. Topline results from these survey experiments are available here.
Later, the digital activity of Ipsos KnowledgePanel members who gave their consent to participate was recorded from May 16 to June 15, 2020, via a tracker from RealityMine® that they installed on their mobile devices and/or personal computers (PCs). Researchers could then identify how frequently panelists engaged in various activities around media and news. Finally, survey data were used for comparison with the passive data and to analyze Americans’ awareness of different aspects of media and news. These data are drawn from a survey of U.S. adults conducted June 2-11 on Ipsos’ KnowledgePanel. In all, 3,715 panelists took this survey. Survey data for 1,694 of the panelists were used only for comparison with their online activity as recorded in the passive data, while the remaining 2,021 survey respondents did not have their activity tracked and make up the representative general population sample from which survey results are reported. Topline results for this survey are available here.
This study explored Americans’ news consumption across various types of sources. Overall, this report provides a sense of how Americans understand questions about news consumption in the digital age and different ways researchers can measure online news habits. Here are some definitions of key terms used throughout this report:
- Platform: The medium through which news is consumed. Specifically, this report looks at television, radio, print publications, digital devices (smartphones, computers or tablets) and various online sources (news websites or apps, social media, search engines, podcasts and email newsletters).
- Digital platform: News platforms that require someone to use digital technology to consume news content, such as a smartphone or news website.
- Analog platform: Platforms that are not digital, such as a television set, radio or print newspaper.
- Provider: The type of news organization producing news stories. Specifically, this report looks at daily newspapers; cable, local and network TV news; and public and talk radio. (Network TV news includes national news programs airing on the broadcast networks of ABC, CBS, NBC and PBS [e.g., World News Tonight].)
- New platforms: A newer type of digital platform that consumers can use to access news. Specifically, this report looks at smart speakers, streaming devices (such as a Roku or Fire Stick), smartwatches, push notifications or alerts and internet streaming services (such as Netflix or Hulu).
- News aggregator: A digital news platform that collects news content from existing news organizations and presents it in a single location online. Specifically, this report looks at Google News, Apple News, Flipboard and Pocket.
- Original news reporting: The process by which journalists directly consult primary sources to develop news content. This is distinct from aggregation of news from other sources.
- Passive data: Data on participants’ online activity, such as browsing history and links clicked, collected by tracking software that panelists downloaded to their digital devices and that automatically recorded their online behaviors.
- Match/mismatch: After having their digital activity tracked for a period of time, participants took a survey that asked about their online news consumption habits. Researchers then compared their responses to these survey questions with the participants’ online activity as captured in the passive data. A “match” occurred if a panelist said they got news a certain number of times in the survey and was also observed getting news that number of times in their passive data. A “mismatch” occurred if a panelist said they got news a certain number of times in the survey but was observed getting news less often in their passive data.
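The match/mismatch comparison described above can be sketched in a few lines of code. This is a hypothetical illustration of the classification rule, not the study’s actual analysis code; the function name, field names and the treatment of cases where observed use exceeds reported use are all assumptions made for the example.

```python
# Hypothetical sketch of the match/mismatch rule described above.
# Names and data are illustrative; they do not come from the study itself.

def classify(reported_days: int, observed_days: int) -> str:
    """Compare survey-reported news-use frequency with passively observed frequency."""
    if observed_days >= reported_days:
        # Observed at least as often as reported. (Treating "more often than
        # reported" as a match is an assumption for this sketch.)
        return "match"
    # Reported more often than observed in the passive data (overreporting).
    return "mismatch"

# Two invented panelists: one whose passive data matches their survey answer,
# one who reported more news consumption than was observed.
panelists = [
    {"id": "A", "reported": 5, "observed": 5},
    {"id": "B", "reported": 7, "observed": 2},
]
results = {p["id"]: classify(p["reported"], p["observed"]) for p in panelists}
print(results)  # {'A': 'match', 'B': 'mismatch'}
```

In practice the comparison would also need to align time windows and device coverage between the two data sources, which is where much of the measurement difficulty discussed later in the report arises.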
The news media’s transition to digital has brought major upheaval to the industry – including a multitude of new providers and new ways to get news. And just as American news organizations have had to drastically reevaluate their business models, researchers trying to measure the U.S. public’s news consumption also need to reexamine the traditional ways they have done so.
In the mid-20th century, when media research came into its own, this task was more straightforward. There were only a few different ways to get news, and all were clearly distinct – print publications, radio or television. But over the past few decades, in addition to a plethora of new forms of news (from 24-hour news channels to news websites), many news outlets no longer stay confined to producing content on only one platform. For instance, to reach a growing digital audience, newspapers like The New York Times also produce audio podcasts, which can be heard through a smart speaker, and video series, which can be watched through a streaming device (such as a Roku or Fire Stick). And cable news outlets and other news providers have an active presence on Facebook, YouTube and other social media sites, further blurring the line between platforms. Finally, there is an industry-wide concern that news consumption habits are overestimated in surveys where respondents self-report their behavior.
Given the increasing complexity and interconnectedness of this news landscape and concerns around overreporting of news consumption, Pew Research Center wanted to explore how best to measure news consumption: Where do currently used survey practices still work and where might changes be in order?
This report is the culmination of this effort and is organized into three sections: Chapter 1 looks at the U.S. public’s familiarity with newer concepts related to news; Chapter 2 examines possible ways to improve survey-based measures of news consumption; and Chapter 3 compares survey results to the use of passive data that comes straight from tracking software news consumers downloaded to their digital devices.
Americans are largely familiar with new technologies but often don’t think of them as news sources
In the survey of U.S. adults, there is mixed evidence about the public’s understanding of newer forms of media and news, which has an impact on the topics survey researchers can reasonably ask about. U.S. adults are broadly familiar with technologies like streaming devices or services, podcasts and news alerts. At the same time, though, many do not seem to use most of these for news consumption, and results from the cognitive interviews suggest that many do not even think of these new forms as ways to get news.
Additionally, as news consumers navigate an information environment that includes news aggregators and social media feeds, confusion abounds regarding the original source of reporting. Only 9% of U.S. adults are very confident that they can tell if a news organization does its own reporting, and, when asked to identify which of six sources do this (see Chapter 1), nearly a quarter (23%) could not identify any of them correctly.
Finally, in an era of rapidly changing business models for news organizations, this study finds a need for survey researchers to carefully specify what they mean by “paying for news.” When asked generally if they pay for news, many people do not seem to think of specific ways that they do pay for news – not to mention the large share of Americans who indirectly pay for news, such as through a cable TV subscription.
Possible ways to improve survey questions about news consumption
The findings reveal that, while there is no “silver bullet” for perfect survey measures of news consumption, a series of refinements could yield marginal improvements, particularly in reducing overreporting.
The study tested a number of concepts, including adding a reference period – e.g., “In the past week, how many days did you get news from …” – or examples – e.g., “Daily newspapers (such as The New York Times, Wall Street Journal, or your local daily paper)” – to core survey questions about news consumption. The study found that these two changes largely do not affect estimates of news consumption among the U.S. public overall, although they may make important differences for specific platforms. For instance, a specific reference period appears to yield more accurate measures of radio consumption, and examples may help respondents better understand what is meant by national network TV outlets, which were often confused with cable TV news in cognitive interviews.
Moreover, the study finds that, when asking about how often people consume news, showing the response options in low-to-high order (i.e., starting with “never” and working up to “often,” rather than the reverse) produces no significant differences on individual items but does show a pattern of generally lower estimates of news consumption. And while there is a close correspondence between respondents saying they get news “often” or “rarely” and saying they do so a specific number of days per week, a response of “sometimes” is used to indicate a wide range of news consumption habits. In other words, to one respondent, “sometimes” can mean once a week, and to another, it could mean three times a week or more.
An exploration of the potential to use passive data, gained from software people download to record their activities online, as a direct measurement of the public’s digital news habits – free of the concerns with self-reporting inherent to surveys – shows some promise. Yet there are still too many pitfalls to rely on it for a complete portrait of Americans’ digital news consumption. Estimates from passive data are systematically lower than those from survey questions, with inadequate coverage of devices being one apparent culprit: Most of the respondents who agreed to have their news consumption tracked said that they had additional devices that were not being tracked, and so some of their news consumption was likely not captured.
That is not the only possible issue with passive data, which generally cannot track in-app news consumption (e.g., when someone taps on a link to a news story within a social media app). And a similar measurement from a commercial metrics provider comes in even higher than the estimates from the survey data. This points to one strength of the survey approach: its sources of error are consistent, well-studied, and widely understood, while the sources of error in passive data are, at present, unclear, dependent on the specifics of data collection, and difficult to adjust for.
Survey-based measurement of news consumption is not without its own problems – perhaps foremost among them is people’s tendency to exaggerate their news consumption, consciously or not. The study finds strong evidence of this: Many Americans say that following the news is “very important” to being a good citizen, and those who say this are more likely than others to overestimate their news consumption when their survey responses are compared with passive data tracked on their devices. This suggests that following the news is seen as a “socially desirable” behavior by many people, which may lead them to think aspirationally about their news consumption – i.e., how often they ideally intend to consume the news rather than how often they actually do – when answering survey questions about it.
Still, overall, this yearlong research effort reveals the continued value of survey research – both in and of itself and compared with other options – and indicates ways to further improve data quality. The strength of survey research stands out in particular for the purpose of providing comprehensive and comparable tracking of the public’s news consumption habits over time and capturing a representative slice of the full U.S. adult population as well as demographic subgroups. Further, surveys allow the measurement of multiple different forms of news consumption (not just digital) in the same way, at the same time – and across time. Passive data has useful applications in the consumer world and can be a tool for publishers and others who want a fine-grained picture of user behavior. But the data does not, at present, seem well suited for high-level estimates of news consumption.
It is worth noting that Pew Research Center’s own organizational expertise in survey work may incline its researchers toward a more enthusiastic endorsement of that methodology. But the Center has also long explored and produced news consumption research using other types of data collection, such as tracking the social media habits of a representative sample of U.S. adults, tracking activity in public social media spaces around certain topics, studying aggregated search behavior around news events and making use of commercial metrics. The Center, particularly in the area of news research, looks forward to continuing to explore new data opportunities and further developments of those that have already become available. As we put it on our website: “We continue to search for ways to expand and strengthen the traditional methodologies that underlie survey research and to explore the potential of alternate methods of conducting surveys and measuring public opinion.”
Data sources and methods
This study took a multimodal approach to investigating these questions, drawing on cognitive interviews, split-form survey experiments, comparisons between passive data and self-reported survey data, and a full, nationally representative survey. The details of each are provided briefly below.
After an initial round of brainstorming and testing, the formal process began with cognitive interviews conducted among 21 respondents through RTI International. The aim was to get qualitative feedback on the proposed survey questions and to gain some preliminary knowledge on the public’s understanding of emerging concepts around news consumption in a digital age. After RTI staff conducted an expert review of the questionnaire, respondents took a draft version of the full news consumption questionnaire and were probed to talk through their responses, along with some specific probes asking about their understanding of key concepts. These results are included throughout the report for additional context for some findings.
Survey experiments were then conducted on two separate Ipsos KnowledgePanel surveys in April 2020, with roughly 1,000 respondents per survey split randomly across two different forms. The aim was to test different approaches to measuring news consumption (e.g., half were asked how often they get news on television, and half were asked how often in a typical week they get news on television) and to determine which version of certain questions would best reduce the overall incidence of reported news consumption, in light of research that has identified potential overreporting of news consumption in surveys.1 These results can primarily be found in Chapter 2.
Finally, 3,715 members of Ipsos’ KnowledgePanel responded to a custom national survey fielded June 2-11, 2020. Approximately half (N=1,694) had previously consented to have their digital activity tracked on one or more devices. This passive data was compared with their self-reported data from the survey. For instance, they were asked if they used the website or app of The New York Times in the past week, and this was compared with the records of their digital activity. In addition, these passively monitored panelists were compared with the general population sample to help understand the potential for using passive data to measure news consumption. These results can be found in Chapter 3.
The remaining 2,021 respondents were a nationally representative general population sample of U.S. adults, whose data are used mainly for general point estimates and over-time trend comparisons. Their results can be found in Chapter 1. All respondents took the survey online; home internet access was provided during panel recruitment to adults who did not previously have it.
For more details, see the methodology.