Q&A: How Pew Research Center studied press coverage of the Biden administration’s early days

Ever since Franklin D. Roosevelt’s historic “100 days” that started the United States’ long recovery from the Great Depression, the press has used a new administration’s first 100 days in power as a marker to assess its direction and effectiveness. And for nearly three decades, Pew Research Center has assessed the assessors, analyzing media coverage of each administration’s early days (starting in 1993). Today, the Center releases its report on coverage of the Biden administration.

This is the latest report in Pew Research Center’s ongoing investigation of the state of news, information and journalism in the digital age, a research program funded by The Pew Charitable Trusts, with generous support from the John S. and James L. Knight Foundation.

We spoke with Amy Mitchell, Pew Research Center’s director of journalism research, and Katerina Eva Matsa, associate director of journalism research, about what they learned and how this year’s report differs from previous ones. Their responses have been edited for clarity and concision.

Amy, you’ve done versions of this kind of assessment for the past four administrations, going back to Bill Clinton in 1993. Why, this time, did you decide to include a public opinion survey along with your content analysis?

Mitchell: Well, the report hasn’t been static all that time – we’ve added new elements here and there throughout the years. This year, we thought it would be valuable to combine our study of news coverage itself with data on people’s views about, and exposure to, that coverage. And those don’t always paint the same picture. For example, while the coverage we studied was slightly more negative than positive, the public was most likely to say the news they were seeing about the Biden administration was mostly positive.

What made this dual analysis even more valuable is that we were also able to place the public into “media diet” groups, similar to the way we group news outlets by audience type. This allowed us to compare the coverage produced by outlets whose audiences lean to the right or left politically with the survey responses of U.S. adults who said they turned only to those outlets for political news.

Why did you decide to focus your analysis on coverage of the first 60 days of Joe Biden’s presidency, rather than 100 days as you did with the Trump administration?

Mitchell: There were a number of factors in that decision. First, the full 100-day study of the Trump administration was actually the exception. Our first three studies – the early days of news coverage of the Clinton, Bush and Obama presidencies – all studied the first 60 days to give us time to complete and release the report on or near the 100th day. In 2017, we added more news outlets and decided to study the full 100 days. However, that meant we couldn’t actually publish the report until several months after the 100-day mark. With that in mind, this year we decided to retain a larger mix of outlets (so as to best represent the increasingly varied news landscape) but go back to a 60-day study, so we’d be able to release the report on the 100th day.

Even so, we wanted to make sure the Biden results could be compared with the Trump results despite the difference in timespans. So, our researchers assessed how similar or different the 2017 results would have been if the Trump analysis had been limited to the first 60 days. They found some minor differences, but nothing that would have changed the overall conclusions.   

How did you select these particular 25 news outlets for analysis? Was it strictly by audience size or circulation?

Mitchell: The selection process was more complex than that. We sought to include a range of outlets across the various platform types: network TV, cable TV, radio, digital news sites, and the digital presence of broadcast outlets and print newspapers. Within each of these platform groups, outlets or programs were selected based on the level of political news programming and audience size. That resulted in 25 distinct outlets.

About half of the news outlets you studied are described in the report as having “left-leaning audiences.” Is that just the way things worked out after you applied your selection criteria, or was that intentional?

Mitchell: The ideological orientation of our outlets’ audiences wasn’t an initial factor in the outlet selection process – in fact, we only measured audiences’ orientations after the outlets were selected (much as we did in 2017). But to widen the mix of outlets with right-leaning audiences, we did add two that weren’t in the initial group, Breitbart and the Washington Examiner. We chose them because, of all the possible additions, they ranked highly in metrics such as social media presence and online engagement. In the end, we had 13 outlets whose audiences are left-leaning, six outlets whose audiences are right-leaning and six outlets with more mixed audiences.

One reason for the heavier presence of outlets with left-leaning audiences is that, as we noted in our 2020 Media Polarization report, Republicans have a more compact media ecosystem. They rely to a large degree on a small number of outlets and view many established brands as untrustworthy. Democrats, on the other hand, rely on a wider range of outlets.

Katerina, in this report, news outlets were categorized based on the ideological leanings of their audiences, which you gleaned from the survey component. How did you categorize them in prior years?

Matsa: In much the same way. In 2017, for instance, we categorized 24 news outlets into the same three groups – those whose audience leans to the left politically, those whose audience leans to the right and those appealing to a more mixed audience. We used audience data from two earlier Pew Research Center surveys in which U.S. adults were asked if they regularly got news about the election or politics from each outlet.

In both the 2017 report and the new one, an outlet was classified as left-leaning if its audience included at least two-thirds more liberal Democrats than conservative Republicans; it was classified as right-leaning if the audience had at least two-thirds more conservative Republicans than liberal Democrats. If neither test was met, the outlet was included in the more mixed audience group.
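
To make that threshold concrete, here is a minimal sketch in Python of how such a rule could be applied. The function name, inputs and example figures are illustrative assumptions, not the Center’s actual code or data.

```python
# Minimal sketch of the audience-lean rule described above. The function
# name and inputs are hypothetical; this is not Pew Research Center's code.

def classify_audience(pct_lib_dem: float, pct_cons_rep: float) -> str:
    """Classify an outlet by the partisan makeup of its audience.

    "Left-leaning" means the audience has at least two-thirds more
    liberal Democrats than conservative Republicans (lib_dem >= 5/3 *
    cons_rep); "right-leaning" is the mirror-image test; anything else
    falls into the "mixed" group.
    """
    THRESHOLD = 5 / 3  # "at least two-thirds more"
    if pct_lib_dem >= THRESHOLD * pct_cons_rep and pct_lib_dem > pct_cons_rep:
        return "left-leaning audience"
    if pct_cons_rep >= THRESHOLD * pct_lib_dem and pct_cons_rep > pct_lib_dem:
        return "right-leaning audience"
    return "mixed audience"

# Hypothetical example: 30% liberal Democrats vs. 15% conservative
# Republicans clears the two-thirds-more bar (30 >= 25).
print(classify_audience(30.0, 15.0))  # left-leaning audience
```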

How did you train people to do this sort of content analysis – especially assessing whether a particular story is “positive” or “negative”?

Matsa: There was lots of training! We employed a team of nine coders who were trained specifically for this project. Over a period of nearly three months, we gave all of the coders multiple sets of news stories spanning platform types – digital, TV, radio – to practice coding all of the variables. The training period began before Biden’s inauguration and lasted until about a month after. A senior researcher evaluated all of the training materials, identified places where coders disagreed with each other and developed rules for the coders to follow, so that everyone was coding exactly the same way. Only after we reached agreement internally on how to code the variables did coding of content for the actual study begin. In addition, the team leader checked coders’ accuracy throughout the process.
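
The report doesn’t name the agreement metric the team used, but simple pairwise percent agreement, sketched below in Python with made-up labels, is one common way to surface the kinds of coder disagreements described here.

```python
# Hypothetical illustration only: the study does not specify its
# reliability metric. This computes simple percent agreement between
# two coders' labels for the same set of items.

def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Share of items that two coders labeled identically."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must label the same set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

a = ["positive", "neutral", "negative", "positive"]
b = ["positive", "negative", "negative", "positive"]
print(percent_agreement(a, b))  # 0.75; the second item would be flagged for review
```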

The “positive” or “negative” assessment of a story refers to its overall tone toward the president and the administration’s actions or words. Our coders reviewed every statement in a story, whether it was made by a source or the reporter him- or herself, and determined if it carried a positive or negative assessment of the president and his administration. There needed to be at least twice as many positive as negative statements in a story for it to be considered positive, and vice versa to be considered negative. Otherwise stories were coded as neither positive nor negative.
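
As an illustration of that 2-to-1 rule, here is a small hypothetical sketch in Python; the per-statement labels stand in for the human coders’ judgments and are not part of the study’s actual tooling.

```python
# Hypothetical sketch of the 2-to-1 story-tone rule described above;
# per-statement labels stand in for the human coders' judgments.

def story_tone(statement_labels: list[str]) -> str:
    """Roll per-statement assessments up into an overall story tone."""
    positive = statement_labels.count("positive")
    negative = statement_labels.count("negative")
    if positive > 0 and positive >= 2 * negative:
        return "positive"
    if negative > 0 and negative >= 2 * positive:
        return "negative"
    return "neither positive nor negative"

# Two positive statements against one negative meet the 2-to-1 bar.
print(story_tone(["positive", "positive", "negative"]))  # positive
```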

The content analysis only looked at coverage on weekdays. What was the reason for that?

Matsa: One of the main purposes of the study was to analyze the news stories that Americans are most likely to consume. We know from various audience metrics (online traffic, print circulation, listenership) that weekdays are when most people turn to news coverage. Also, news programming broadcast during the week often differs from weekend programming.

In addition, we wanted to make sure the number of programs and stories was manageable for our project and coding team, and adding weekend content would increase the coding workload. Weekday selection is also consistent with past 100-day studies.

Amy, in the opinion survey, you asked people to assess whether coverage was fair or unfair, accurate or inaccurate, and positive or negative. These are all distinct but not entirely dissimilar concepts. How did you find they were interrelated if, in fact, they were?

Mitchell: You are exactly right – the three concepts overlap but are also distinct from each other. The question of positive, negative or mixed coverage is perhaps the most distinct from the other two. Indeed, there was less consensus than for the others, with 46% saying coverage of the Biden administration was mostly positive, 14% mostly negative and 39% an even mix. On the other hand, large majorities – including close to half or more of each media diet group – said that the coverage has been mostly fair (76%) and accurate (69%).

Interestingly, large majorities of all three groups – those who say the coverage has been mostly positive, mostly negative or an even mix – say the coverage has been fair. The main divergence is on accuracy: only those who say they saw mostly negative coverage are roughly evenly split between calling it accurate and calling it inaccurate; the other two groups have majorities who say it has been accurate.

Which of the report’s findings surprised or interested you the most?  

Mitchell: Gosh, there are so many, it’s hard to pick! I guess one set of findings that’s special to the early Biden coverage is the pervasive presence of the coronavirus pandemic. About seven-in-ten stories contained some reference to COVID-19. This impacted coverage in several ways. For one thing, the coverage was more focused on domestic issues than in the past, with the economy and health care the most commonly covered specific topics. COVID-19 was also the only administration priority, among five we asked about, that a majority of U.S. adults (58%) said had received the right amount of coverage.

It’s also important to recognize the differences both in the coverage across outlet groups, which are defined by audience makeup, and in the news evaluations of people who turn to each type of outlet. For example, even as most Americans overall said the coronavirus has been getting the right amount of coverage, a majority of those who turn only to outlets whose audiences lean to the right politically feel it’s been getting too much coverage. And the priority that most in that group feel has not gotten enough news attention is immigration – a topic that news outlets with right-leaning audiences covered at higher rates than the other news outlet groups.

What did you learn about how early news coverage of presidents has changed over the years, now that you have five presidencies’ worth of data?

Mitchell: While the media landscape has changed dramatically since Bill Clinton’s first inauguration in 1993, the Center has been able to conduct a long-term comparison for each of the recent administrations across a smaller subset of outlets and variables for the first 60 days in office. But interestingly, there hasn’t been a clear pattern of change.

Instead, what we see is that coverage varies for each administration, with some elements standing out more one year and others standing apart the next. For instance, working with that smaller sample, we found that coverage of the early days of the Biden administration focused more on policy and agenda and less on leadership skills than any of the other administrations, with the exception of George W. Bush in 2001.

We also found that coverage of the Biden administration was more likely to carry neither a positive nor a negative assessment, compared with the first 60 days of the Trump, Obama, Bush and Clinton administrations. The level of positive coverage for Biden (27% in this smaller sample) was roughly on par with that of Bush and Clinton, but lower than for Barack Obama – with all four higher than for Donald Trump.

Do this report’s findings tell us anything about media bias?

Mitchell: The report doesn’t study media bias. It does study, within each story, the overall tone of the assessment of the Biden administration – positive or negative – by looking at specific statements about the administration. This is different from bias: it doesn’t indicate that a journalist or news organization is being one-sided, or suggest that one assessment is correct and another incorrect. It simply means that the statements in the piece evaluated the event as a good or bad thing for the administration.