
The Final Days of the Media Campaign 2012


About This Study

A number of people at the Pew Research Center’s Project for Excellence in Journalism worked on PEJ’s “The Final Days of the Media Campaign 2012.” Director Tom Rosenstiel, Deputy Director Amy Mitchell, Associate Director Mark Jurkowitz and senior researcher Paul Hitlin wrote the report. Paul Hitlin supervised the content analysis components. Additional coding and data analysis were done by Researchers Steve Adams, Monica Anderson, Heather Brown, Laura Santhanam and Sovini Tan. Nancy Vogt worked on the computer coding. Katerina Matsa created the charts. Jesse Holcomb copy edited the report. Dana Page handles communications for the project.

Methodology

This special report by the Pew Research Center’s Project for Excellence in Journalism on media coverage of the 2012 presidential campaign uses data derived from two different methodologies. Data regarding the tone of coverage in the mainstream press were derived from the Project for Excellence in Journalism’s in-house coding operation. (Click here for details on how that project, also known as PEJ’s News Coverage Index, is conducted.)

Data regarding the tone of conversation on social media (Twitter, Facebook and blogs) and how the platforms were used on Election Day were derived from a combination of PEJ’s traditional media research methods, based on long-standing rules regarding content analysis, along with computer coding software developed by Crimson Hexagon. That software is able to analyze the textual content from millions of posts on social media platforms. Crimson Hexagon (CH) classifies online content by identifying statistical patterns in words.

This study is a follow-up report to one released on November 2, which was based on data collected from August 27 through October 21, 2012. This report adds new data from the final 15 days of the campaign, October 22 through November 5, 2012, along with a special look at social media on Election Day itself, November 6.

Human Coding of Mainstream Media

Sample Design

The mainstream media content was based on coverage originally captured as part of PEJ’s weekly News Coverage Index (NCI). 

Each week, the NCI examines coverage from 52 outlets in five media sectors: newspapers, online news, network TV, cable TV and radio. Following a system of rotation, between 25 and 28 outlets are studied each weekday, along with three newspapers each Sunday.

For this particular study of campaign coverage, three commercial talk radio programs were not included, and broadcast stories of 30 seconds or less were excluded.

In total, the 49 media outlets examined for this campaign study were as follows:

Newspapers (Eleven in all)

Coded two out of these four every weekday; one on Sunday
The New York Times
Los Angeles Times
USA Today
The Wall Street Journal

Coded two out of these four every weekday; one on Sunday
The Washington Post
The Denver Post
Houston Chronicle
Orlando Sentinel

Coded one out of these three every weekday and Sunday
Traverse City Record-Eagle (MI)
The Daily Herald (WA)
The Eagle-Tribune (MA)

Web sites (Coded six of twelve each weekday)

Yahoo News
MSNBC.com
CNN.com
NYTimes.com
Google News
FoxNews.com

ABCNews.com
USAToday.com
WashingtonPost.com
LATimes.com
HuffingtonPost.com
Wall Street Journal Online

Network TV (Seven in all, Monday-Friday)

Morning shows – coded one or two every weekday
ABC – Good Morning America
CBS – Early Show
NBC – Today

Evening news – coded two of three every weekday
ABC – World News Tonight
CBS – CBS Evening News
NBC – NBC Nightly News

Coded two consecutive days, then skip one
PBS – NewsHour

Cable TV (Fifteen in all, Monday-Friday)

Daytime (2:00 to 2:30 pm) coded two out of three every weekday
CNN
Fox News
MSNBC

Nighttime CNN – coded one or two out of the four every day

Situation Room (5 pm)
Situation Room (6 pm)
Erin Burnett OutFront
Anderson Cooper 360

Nighttime Fox News – coded two out of the four every day
Special Report w/ Bret Baier
Fox Report w/ Shepard Smith
O’Reilly Factor
Hannity

Nighttime MSNBC – coded one or two out of the four every day
PoliticsNation
Hardball (7 pm)
The Rachel Maddow Show
The Ed Show

Radio (Four in all, Monday-Friday)

NPR – coded one of the two every weekday
Morning Edition
All Things Considered

Radio News
ABC Headlines
CBS Headlines

From that sample, the study included all relevant stories:

  • On the front page of newspapers
  • In the entirety of commercial network evening newscasts and radio headline segments
  • In the first 30 minutes of network morning news and all cable programs
  • In a 30-minute segment of NPR’s broadcasts or PBS’ NewsHour (rotated between the first and second half of the programs)
  • Among the top five stories on each website at the time of capture

Click here for the full methodology regarding the News Coverage Index and the justification for the choices of outlets studied.

Sample Selection

To arrive at the sample for this particular study of campaign coverage, we gathered all relevant stories from August 27-November 5, 2012, that were either coded as campaign stories, meaning that 50% or more of the story was devoted to discussion of the ongoing presidential campaign, or included President Obama, Governor Romney, Vice President Biden or Congressman Paul Ryan in at least 25% of the story.

This process resulted in a sample of 3,117 stories (660 of those stories came from the final 15 days of the campaign). Of those, 2,823 stories focused on the presidential election while 294 focused on another topic, such as the events in Libya or Hurricane Sandy, but included one of the figures as a significant presence.

Note: The sample of 3,117 stories was used for all data regarding the tone of coverage for each candidate. For one section where the overall framing of campaign coverage is discussed in terms of newshole, the sample was made up of 3,689 stories and included talk radio stories and those 30 seconds or less.
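
To illustrate the inclusion rule described above, the sketch below applies the two thresholds (at least 50% of the story devoted to the campaign, or a candidate present in at least 25% of it) to hypothetical per-story measurements. The function and field names are illustrative only; the actual determinations were made by PEJ’s human coders, not by software.

```python
# Hypothetical per-story measurements; the real judgments were made by human coders.
CANDIDATES = {"Obama", "Romney", "Biden", "Ryan"}

def include_story(campaign_share, candidate_shares):
    """campaign_share: fraction of the story devoted to the presidential campaign.
    candidate_shares: dict mapping a candidate's name to the fraction of the story
    in which he appears."""
    if campaign_share >= 0.5:          # coded as a campaign story
        return True
    # otherwise, include only if a candidate is a significant presence
    return any(candidate_shares.get(c, 0) >= 0.25 for c in CANDIDATES)

print(include_story(0.60, {}))                   # True: a campaign story
print(include_story(0.10, {"Obama": 0.30}))      # True: Obama is a significant presence
print(include_story(0.10, {"Romney": 0.10}))     # False: excluded from the sample
```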

Coding of Mainstream Press Campaign Stories for Tone

The data in this study derived from PEJ’s regular Index coding were produced by a team of seven experienced coders. All of the variables derived from the regular weekly Index coding have been tested and reached a level of agreement of 80% or higher. For specific information about those tests, see the methodology section for the NCI.

The method of measuring tone was the same that had been used in previous PEJ studies, including the 2008 studies, in order to provide accurate longitudinal comparisons.

Unit of Analysis

The unit of analysis for this study was the story. Each story was coded for tone for each of the four candidates. If a candidate did not appear in at least 25% of the story, he was not considered a significant figure in the story and was therefore coded as “n/a” for not having a significant presence.

Tone Variable

The tone variable measures whether a story’s tone, constructed through the use of quotes, assertions or innuendo, results in positive, neutral or negative coverage for the primary figure as it relates to the topic of the story. While reading or listening to a story, coders tallied all the assertions that gave the reporting either a positive or negative tone. Direct and indirect quotes were counted, along with assertions made by the journalists themselves.

In order for a story to be coded as either “positive” or “negative,” it must contain at least 1.5 times as many positive comments as negative comments, or at least 1.5 times as many negative comments as positive comments. If the headline or lead had a positive or negative tone, it was counted twice toward the total. Also counted twice for tone were the first three paragraphs or first four sentences, whichever came first.

Any story in which the ratio of positive to negative comments was less than 1.5 to 1 was considered a “neutral” or “mixed” story.

In some previous studies, PEJ used a ratio of 2 to 1 instead of 1.5 to 1 in determining the overall tone of news stories.

The 2:1 ratio sets the bar even higher for a story to be coded as either positive or negative overall. Prior to the 2008 election campaign, PEJ reviewed and retested both the 2:1 and the 1.5-to-1 ratios, and consulted several leading scholars in content analysis methods. We found only minor shifts in the overall outcome of stories: in past content studies coded using both ratios, the overall relationship of positive to negative stories changed very little. The bigger difference was an increase in mixed or neutral stories. In our pre-tests in 2007, the Project concluded that the 1.5-to-1 ratio more precisely represented the overall tone of the stories, and the academics consulted concurred.
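
As a rough illustration of how the 1.5-to-1 rule works, the sketch below classifies a story from tallied assertion counts. The function and inputs are illustrative, not PEJ’s coding software, and the double-counting of the headline, lead and opening assertions is assumed to have been applied before the totals are passed in.

```python
# A minimal sketch of the 1.5-to-1 tone rule, assuming positive and negative
# assertions have already been tallied (with headline/lead and opening
# assertions counted twice) by a human coder.

def story_tone(positive, negative, ratio=1.5):
    """Return 'positive', 'negative' or 'neutral/mixed' from assertion counts."""
    if negative == 0 and positive > 0:
        return "positive"
    if positive == 0 and negative > 0:
        return "negative"
    if negative and positive / negative >= ratio:
        return "positive"
    if positive and negative / positive >= ratio:
        return "negative"
    return "neutral/mixed"

print(story_tone(positive=6, negative=3))  # 6/3 = 2.0 >= 1.5  -> 'positive'
print(story_tone(positive=4, negative=3))  # 4/3 ~ 1.33 < 1.5  -> 'neutral/mixed'
```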

Coding Process

Testing of all variables used to determine campaign stories has shown levels of agreement of 80% or higher. For specific information about those tests, see the methodology on intercoder testing.

During coder training for this particular study, intercoder reliability tests were conducted for all the campaign-specific variables. There were two different intercoder tests conducted to assure reliability.

For this study, each of the seven coders was trained on the tone coding methodology and then given the same set of 30 stories to code for tone for each of the four candidates. The rate of intercoder agreement was 82%.
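
The sketch below shows one common way such an agreement figure can be computed: pairwise percent agreement across coders who coded the same items. It is a generic illustration with invented data, not PEJ’s actual testing procedure.

```python
from itertools import combinations

def percent_agreement(codings):
    """Pairwise percent agreement across coders.

    codings: dict mapping coder name -> list of codes, one per item,
    with every list in the same item order."""
    coders = list(codings)
    n_items = len(next(iter(codings.values())))
    agree = total = 0
    for a, b in combinations(coders, 2):
        for i in range(n_items):
            total += 1
            agree += codings[a][i] == codings[b][i]
    return agree / total

# Invented example: three coders rating the tone of the same four stories.
codings = {
    "coder1": ["positive", "neutral", "negative", "neutral"],
    "coder2": ["positive", "neutral", "negative", "positive"],
    "coder3": ["positive", "negative", "negative", "neutral"],
}
print(f"Agreement: {percent_agreement(codings):.0%}")  # Agreement: 67%
```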

Coding of the Tone on Social Media Using a Computer Algorithm

The sections of this report that dealt with the social media reaction to the campaign employed media research methods that combine PEJ’s content analysis rules developed over more than a decade with computer coding software developed by Crimson Hexagon. The portions of this report that focused on the final 15 days were based on separate examinations of more than 10 million tweets, 130,000 blog posts and 210,000 Facebook posts.

Crimson Hexagon is a software platform that identifies statistical patterns in words used in online texts. Researchers enter key terms using Boolean search logic so the software can identify relevant material to analyze. PEJ draws its analysis samples from several million blogs, all public Twitter posts and a random sample of publicly available Facebook posts. Then a researcher trains the software to classify documents using examples from those collected posts. Finally, the software classifies the rest of the online content according to the patterns derived during the training.  

According to Crimson Hexagon: “Our technology analyzes the entire social internet (blog posts, forum messages, Tweets, etc.) by identifying statistical patterns in the words used to express opinions on different topics.”  Information on the tool itself can be found at http://www.crimsonhexagon.com/ and the in-depth methodologies can be found here http://www.crimsonhexagon.com/products/whitepapers/.

Crimson Hexagon measures text in the aggregate and the unit of measure is the ‘statement’ or assertion, not the post or Tweet. One post or Tweet can contain more than one statement if multiple ideas are expressed. The results are determined as a percentage of the overall conversation.

Monitor Creation and Training

Each individual study or query related to a set of variables is referred to as a “monitor.”

The process of creating a new monitor consists of four steps. There were six monitors created for this study – three for Obama (Twitter, blogs and Facebook) and three for Romney (Twitter, blogs and Facebook).

First, PEJ researchers decide what timeframe and universe of content to examine. The timeframe for this study was August 27-November 5, 2012. PEJ only includes English-language content.

Second, the researchers enter key terms using Boolean search logic so the software can identify the universe of posts to analyze. For each of these monitors, the Boolean search terms simply consisted of the candidate’s last name (“Obama” or “Romney”).

Next, researchers define categories appropriate to the parameters of the study. For tone monitors, there are four categories: positive, neutral, negative, and irrelevant for posts that are off-topic.

Fourth, researchers “train” the CH platform to analyze content according to specific parameters they want to study. The PEJ researchers in this role have gone through in-depth training at two different levels. They are professional content analysts fully versed in PEJ’s existing content analysis operation and methodology. They then undergo specific training on the CH platform including multiple rounds of reliability testing.

The monitor training itself is done with a random selection of posts collected by the technology. One at a time, the software displays posts and a human coder determines which category each example best fits into. In categorizing the content, PEJ staff follows coding rules created over the many years that PEJ has been content analyzing the news media. If an example does not fit easily into a category, that specific post is skipped. The goal of this training is to feed the software with clear examples for every category.

For each new monitor, human coders categorize at least 250 distinct posts. Typically, each individual category includes 20 or more posts before the training is complete. To validate the training, PEJ has conducted numerous intercoder reliability tests (see below) and the training of every monitor is examined by a second coder in order to discover errors.

The training process consists of researchers showing the algorithm entire stories that are unambiguous in tone. Once the training is complete, the algorithm analyzes content at the assertion level to ensure that the meaning is similarly unambiguous. This makes it possible to analyze and apportion content that contains assertions of differing tone. This classification is done by applying statistical word patterns derived from posts categorized by human coders during the training process.

The monitors are then reviewed by a second coder to ensure there is agreement. Any questionable posts are removed from the sample.

Ongoing Monitors

In the analysis of campaign coverage, PEJ uses CH to study a given period of time, and then expands the monitor for additional time going forward. In order to accomplish this, researchers first create a monitor for the original timeframe according to the method described above.

Because the tenor and content of online conversation can change over time, additional training is necessary when the timeframe gets extended. Since the specific conversation about candidates evolves all the time, the CH monitor must be trained to understand how newer posts fit into the larger categories.

Each week, researchers remove any documents which are more than three weeks old. For example, for the monitor the week of October 22-28, 2012, there will be no documents from before October 8. This ensures that older storylines no longer playing in the news cycle will be removed and the algorithm will be working with only the newest material.

Second, each week trainers add more stories to the training sample to ensure that the changes in the storyline are accurately reflected in the algorithm. PEJ researchers add, at a minimum, 10 new training documents to each category. This results in many categories receiving much more than the 10 new documents. On average, researchers will add roughly 60 new training documents each week.
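
The sketch below captures those two weekly maintenance rules: a rolling three-week window on training documents and a minimum of 10 new training documents per category. The data structures, function names and choice of reference date are hypothetical illustrations; they are not part of the Crimson Hexagon platform.

```python
from datetime import datetime, timedelta

MAX_AGE_DAYS = 21           # training documents more than three weeks old are dropped
MIN_NEW_PER_CATEGORY = 10   # at least 10 new training documents per category each week
CATEGORIES = {"positive", "neutral", "negative", "irrelevant"}

def refresh_training_set(training_docs, new_docs, reference_date):
    """training_docs / new_docs: lists of dicts with 'date' (datetime) and 'category'.
    reference_date: the date the three-week window is measured from (an assumption)."""
    cutoff = reference_date - timedelta(days=MAX_AGE_DAYS)
    kept = [d for d in training_docs if d["date"] >= cutoff]    # rolling window
    kept.extend(new_docs)
    for cat in CATEGORIES:                                      # check weekly minimums
        added = sum(d["category"] == cat for d in new_docs)
        if added < MIN_NEW_PER_CATEGORY:
            print(f"Need {MIN_NEW_PER_CATEGORY - added} more '{cat}' examples this week.")
    return kept

# Consistent with the example above: measuring from Oct. 29, 2012, the cutoff
# falls on Oct. 8, so no documents from before Oct. 8 remain in the training set.
print(datetime(2012, 10, 29) - timedelta(days=MAX_AGE_DAYS))   # 2012-10-08 00:00:00
```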

How the Algorithm Works

To understand how the software recognizes and uses patterns of words to interpret texts, consider a simplified example regarding an examination of the tone of coverage regarding Mitt Romney. As a result of the example stories categorized by a human coder during the training, the CH monitor might recognize that portions of a story with the words “Romney,” “poll” and “increase” near each other are likely positive for Romney. However, a section that includes the words “Romney,” “losing” and “women” is likely to be negative for Romney.

Unlike most human coding, CH monitors do not measure each story as a unit, but examine the entire discussion in the aggregate. To do that, the algorithm breaks up all relevant texts into subsections. Rather than dividing each story, paragraph, sentence or word, CH treats the “assertion” as the unit of measurement. Thus, posts are divided up by the computer algorithm. If 40% of a post fits into one category, and 60% fits into another, the software will divide the text accordingly. Consequently, the results are not expressed in percent of newshole or percent of posts. Instead, the results are the percent of assertions out of the entire body of stories identified by the original Boolean search terms. We refer to the entire collection of assertions as the “conversation.”
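
As a toy illustration of the general approach, the sketch below scores text fragments (sentences standing in for assertions) against small word patterns and reports each category’s share of the overall conversation. Crimson Hexagon’s actual algorithm is proprietary and far more sophisticated; the word lists and posts here are invented for the example.

```python
from collections import Counter

# Invented word patterns of the kind training might surface for a Romney monitor.
PATTERNS = {
    "positive": {"poll", "increase", "lead", "momentum"},
    "negative": {"losing", "women", "gaffe", "criticism"},
}

def classify_assertion(text):
    """Assign one assertion to the best-matching category, or 'neutral' if none match."""
    words = set(text.lower().split())
    scores = {cat: len(words & vocab) for cat, vocab in PATTERNS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def share_of_conversation(posts):
    """Split posts into assertions (sentences, for simplicity) and report each
    category's share of all assertions."""
    assertions = [a.strip() for p in posts for a in p.split(".") if a.strip()]
    counts = Counter(classify_assertion(a) for a in assertions)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

posts = [
    "Romney sees an increase in the latest poll. But he is losing among women.",
    "Another poll shows Romney with momentum.",
]
print(share_of_conversation(posts))  # positive ~0.67, negative ~0.33
```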

Testing and Validity

Extensive testing by Crimson Hexagon has demonstrated that the tool is 97% reliable, that is, in 97% of cases analyzed, the technology’s coding has been shown to match human coding. PEJ spent more than 12 months testing CH, and our own tests comparing coding by humans and the software came up with similar results.

In addition to validity tests of the platform itself, PEJ conducted separate examinations of human intercoder reliability to show that the training process for complex concepts is replicable. The first test had five researchers each code the same 30 stories which resulted in an agreement of 85%.

A second test had each of the five researchers build their own separate monitors to see how the results compared. This test involved not only testing coder agreement, but also how the algorithm handles various examinations of the same content when different human trainers are working on the same subject. The five separate monitors came up with results that were within 85% of each other.

Unlike polling data, the results from the CH tool do not have a sampling margin of error since there is no sampling involved. For the algorithmic tool, reliability tested at 97% meets the highest standards of academic rigor.

Coding of Social Media Usage on Election Day Using Computer Algorithms

For the section on how social media were used on Election Day, three separate Crimson Hexagon monitors were created (one each for Twitter, blogs and Facebook). The results were based on separate examinations of more than 32 million tweets, 27,000 blog posts and 210,000 Facebook posts.

The time frame for the analysis was 6:00 a.m. EST on November 6, 2012, through 6:00 a.m. EST on November 7, 2012.

PEJ used Boolean searches to narrow the universe to relevant posts. Common terminology posted by users varies for each platform. Therefore, PEJ used slightly different search filters for each.

Since much of the Election Day social media conversation did not include contextual words that gave an indication that the post was about the election, PEJ came up with an extensive list of keywords to use on Twitter and Facebook to collect as many relevant posts as possible.

For blogs, PEJ used the following search filter:

(Barack OR Obama OR Mitt OR Romney)

For Twitter and Facebook, the more extensive search filter was:

(4more OR ABC OR America OR Anderson OR Cooper OR Sullivan OR Baier OR ballet OR Barack OR battleground OR bailout OR Beck OR Biden OR BigBird OR Binder OR black OR Blitzer OR bloc OR blue OR Scheiffer OR Breitbart OR Williams OR Hume OR CA OR cable OR call OR campaign OR candidate OR Crowley OR Carville OR CBS OR Hayes OR Matthews OR Todd OR close OR CNN OR Colbert OR CO OR Colorado OR concede OR congress OR conservative OR constituency OR Coulter OR country OR coverage OR Milbank OR Brooks OR Gregory OR Dem OR demo OR Brazile OR Drudge OR economy OR Henry OR elect OR election OR electoral OR Burnett OR Klein OR female OR FL OR Florida OR Fox OR Gingrich OR Borger OR GOP OR GOTV OR Ifill OR Hannity OR Perry OR Hispanic OR Kurtz OR Ingraham OR Iowa OR Tapper OR Yellin OR Stein OR King OR Stewart OR Williams OR Woodruff OR Karl OR Krauthammer OR Krugman OR Latino OR liberal OR Libya OR Limbaugh OR line OR lose OR lost OR Maddow OR Malkin OR map OR Raddatz OR Matalin OR Press OR Kelly OR Allen OR Mitt OR Mormon OR Morris OR MSNBC OR Muslin OR NBC OR Network OR NPR OR Obama OR Osama OR Ohio OR O’Reilly OR PA OR Begala OR PBS OR Penn OR Pennsylvania OR Morgan OR poll OR POTUS OR president OR pres OR prez OR Pundit OR Red OR Rep OR Republican OR Wolffe OR Martin OR Romney OR Rove OR Ryan OR Sawyer OR Scarborough OR Schultz OR Pelley OR Sharpton OR Silver OR speech OR state OR Stelter OR swing OR tcot OR term OR Brokaw OR Trump OR USA OR victory OR VA OR Virginia OR vote OR white OR win OR Wisconsin OR women OR woman OR won OR Zakaria)
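
As an illustration of how such a filter narrows a collection of posts, the sketch below applies a keyword match built from a handful of the terms above. The actual collection and filtering were done within the Crimson Hexagon platform, not with code like this.

```python
import re

# A small subset of the search terms listed above, for brevity.
KEYWORDS = ["Barack", "Obama", "Mitt", "Romney", "election", "electoral",
            "battleground", "swing", "vote", "POTUS", "tcot"]
pattern = re.compile(r"\b(" + "|".join(map(re.escape, KEYWORDS)) + r")\b", re.IGNORECASE)

posts = [
    "Long lines at my polling place but I got to vote!",
    "What a great night for pizza.",
    "Obama and Romney both watching the electoral map tonight.",
]
relevant = [p for p in posts if pattern.search(p)]
print(relevant)  # keeps the first and third posts
```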
