Q: Each year, Pew Research Center conducts polls in dozens of countries. How does the Center conduct surveys in so many different places?

To conduct its international polls, Pew Research Center works closely with experienced social and political research organizations to identify and manage local polling firms in each country included in our surveys. Together with these research partners, we carefully evaluate local vendors and select only those that meet our requirements for rigorous fieldwork procedures and thorough quality control.

Our top commitment is to data quality, a multifaceted concept that involves careful attention at every phase of the survey process, from drawing the sample to conducting the interviews to processing a dataset and beyond. This also means constantly staying abreast of emerging technologies and evolving best practices.

Q: Once you have selected a local polling partner, what role do you play in conducting the survey?

As a practical matter, Pew Research Center has to rely on research partners to organize and manage field operations. But the Center is involved in every phase of the research. We develop the survey questionnaire, actively collaborate on the sampling plan and carefully evaluate the quality of the data produced. We have found that the protocols governing how interviewers conduct a survey are especially important for ensuring that it accurately represents the attitudes of the public and key subgroups, so we pay a good deal of attention to designing those protocols rigorously.

Q: What kind of information do you need to gather to monitor data quality from afar?

We remain in regular contact with our research partners — the professional research firms we hire to help identify, coordinate and oversee the extensive network of local vendors — as well as with the local vendors themselves. We also have a comprehensive process for confirming that a local firm understands and can execute the required sampling plan and fieldwork procedures.

The information we collect to do this includes:

  • a detailed account of the method for choosing respondents (the sample design) and the challenges faced during fieldwork;
  • extensive documentation of the population statistics used to evaluate sample performance, or, in less formal terms, the population targets that the fieldwork team is trying to hit (e.g., the share of men and women) and how successful they were in that effort. In the U.S., for comparison, these population statistics would be based on the work of the U.S. Census Bureau;
  • records, for each interview, of the interviewer who conducted it, the supervisor overseeing that interviewer and, if applicable, the person who entered the resulting data, permitting a retrospective investigation of unusual patterns in the data;
  • extensive logistical details for each interview, including time of day, location, duration and presence of others; and
  • data on all contacts made to complete the survey, including households that did not respond to the survey invitation and people who refused to participate.
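
Taken together, these requirements describe a per-interview record. A minimal sketch of what such a record might look like, with illustrative field names that are not the Center's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class InterviewRecord:
    """Illustrative per-interview record combining interviewer IDs,
    logistical paradata and the contact outcome."""
    interview_id: str
    interviewer_id: str            # who conducted the interview
    supervisor_id: str             # who oversaw that interviewer
    data_entry_id: Optional[str]   # who keyed the data, if different from the interviewer
    start_time: datetime           # time of day the interview began
    duration_minutes: float        # interview length
    location: str                  # where the interview took place
    others_present: bool           # was anyone besides the respondent present?
    disposition: str               # outcome code, e.g., "complete", "refusal", "noncontact"
```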

Q: How involved is Pew Research Center in training the interviewers conducting the surveys?

Over time, we have become more involved in this phase of a project. Mainly, we want to be certain that interviewers are familiar and comfortable with the questionnaire and fieldwork protocols, and that they understand why they must follow project guidelines. It is not always logistically possible for us to directly observe interviewer training in multiple countries, but we have developed practical alternatives. For example, we regularly observe training sessions via video conferencing platforms. For projects where travel is possible, we try to arrange a centralized gathering of field supervisors from different countries so that our staff can observe the training in person.

Q: How does Pew Research Center oversee a survey project once it is in the field?

During fieldwork, we require local vendors to track multiple measures of interview and interviewer quality. First, a set percentage of the interviews must be actively supervised. Second, in face-to-face surveys, a random subset of each interviewer’s interviews may be “back-checked”: a colleague at the survey firm contacts the respondent, in person or by phone, verifies that the interview took place and re-asks several survey questions to confirm the answers. Third, depending on data privacy laws and consent, interviews in some countries are recorded as a quality-control tool. Finally, we monitor the percentage of interviews conducted by any given interviewer, which helps minimize the impact any single interviewer can have on the quality of the final dataset.
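
As a rough illustration of two of these checks, the sketch below computes each interviewer's share of completed interviews and draws a random back-check sample; the 15% back-check rate and 40% workload cap are invented thresholds, not the Center's actual figures:

```python
import random
from collections import defaultdict

# Toy fieldwork log: one dict per completed interview
interviews = [
    {"interview_id": f"IV{i:03d}", "interviewer_id": iid}
    for i, iid in enumerate(["A"] * 6 + ["B"] * 3 + ["C"] * 1)
]

def workload_shares(records):
    """Each interviewer's share of all completed interviews."""
    counts = defaultdict(int)
    for rec in records:
        counts[rec["interviewer_id"]] += 1
    return {iid: n / len(records) for iid, n in counts.items()}

def backcheck_sample(records, rate=0.15, seed=1):
    """Randomly pick a fraction of each interviewer's interviews to re-contact."""
    by_interviewer = defaultdict(list)
    for rec in records:
        by_interviewer[rec["interviewer_id"]].append(rec)
    rng = random.Random(seed)
    picks = []
    for recs in by_interviewer.values():
        k = max(1, round(rate * len(recs)))   # at least one back-check per interviewer
        picks.extend(rng.sample(recs, k))
    return picks

# Flag interviewers carrying more than the (illustrative) 40% cap
flagged = [iid for iid, s in workload_shares(interviews).items() if s > 0.40]
print(flagged)                             # ['A'] -- interviewer A did 6 of 10 interviews
print(len(backcheck_sample(interviews)))   # 3 -- one per interviewer in this toy log
```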

We also ask local polling firms to report, at predetermined times during the field period, on the total number of interviews completed, the regional distribution of interviews, the mix of respondents by key demographic characteristics and a detailed breakdown of disposition codes from the American Association for Public Opinion Research (AAPOR). When possible, we ask for this information to be broken down by interviewer.
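
Those disposition codes feed standard outcome-rate calculations. For example, AAPOR's published Response Rate 1 (RR1), the most conservative response rate, treats every case of unknown eligibility as an eligible nonrespondent:

```python
def aapor_rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: complete interviews (I) divided by completes,
    partials (P), refusals (R), noncontacts (NC), other eligible nonresponse (O)
    and cases of unknown eligibility (UH, UO)."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

# e.g., 800 completes among 1,300 total cases -> RR1 of about 0.62
print(round(aapor_rr1(I=800, P=50, R=200, NC=150, O=25, UH=60, UO=15), 2))
```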

In addition, we examine interim datasets at predetermined intervals. These checks show how well interviewers are following the survey plan and the instructions for randomly selecting respondents.
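
A simplified sketch of one such interim check, comparing the demographic mix of completed interviews against census-based population targets; the groups, targets and 5-point tolerance are invented for illustration:

```python
def check_targets(interim_shares, population_targets, tolerance=0.05):
    """Flag groups whose interim share drifts more than `tolerance`
    from the census-based population target."""
    flagged = {}
    for group, target in population_targets.items():
        observed = interim_shares.get(group, 0.0)
        if abs(observed - target) > tolerance:
            flagged[group] = (observed, target)
    return flagged

# Illustrative population targets vs. the interim fieldwork mix
targets = {"men": 0.49, "women": 0.51, "urban": 0.55, "rural": 0.45}
interim = {"men": 0.42, "women": 0.58, "urban": 0.57, "rural": 0.43}
print(check_targets(interim, targets))
# {'men': (0.42, 0.49), 'women': (0.58, 0.51)} -- men are underrepresented so far
```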

Q: Have you ever encountered instances where interviewers did not follow the plan? If so, what did you do?

There have been cases where fieldwork updates suggested that one or more interviewers were not following proper protocols, though this is certainly not the norm. In such cases, we pause that person’s fieldwork and request additional information about the interviewer’s performance, such as the length of each interview and the time of, and between, interviews. We try to recreate the interviewer’s daily work log and determine whether the interviews could plausibly have been conducted as reported, given the survey’s length and (for face-to-face surveys) the country’s geography.
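
A minimal sketch of that reconstruction, assuming timestamped paradata for a single interviewer's day: it orders the interviews by start time and flags any consecutive pair that overlaps or leaves an implausibly small gap (the 10-minute floor is an invented threshold):

```python
from datetime import datetime, timedelta

def flag_tight_gaps(day_log, min_gap=timedelta(minutes=10)):
    """Order one interviewer's interviews by start time and flag consecutive
    pairs that overlap or leave less than `min_gap` between them."""
    ordered = sorted(day_log, key=lambda iv: iv["start"])
    flags = []
    for prev, curr in zip(ordered, ordered[1:]):
        gap = curr["start"] - (prev["start"] + prev["duration"])
        if gap < min_gap:
            flags.append((prev["interview_id"], curr["interview_id"], gap))
    return flags

day = [
    {"interview_id": "A1", "start": datetime(2024, 5, 2, 9, 0),  "duration": timedelta(minutes=35)},
    {"interview_id": "A2", "start": datetime(2024, 5, 2, 9, 30), "duration": timedelta(minutes=30)},
    {"interview_id": "A3", "start": datetime(2024, 5, 2, 11, 0), "duration": timedelta(minutes=32)},
]
print(flag_tight_gaps(day))  # flags the A1 -> A2 pair: A2 starts before A1 ends
```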

In a few instances, the evidence may suggest that an interviewer fabricated responses or did not randomly select respondents as required. We require the local firm to closely investigate the work of any interviewer suspected of misconduct. If the data collected are compromised and re-training is not an option, that interviewer is dismissed from the project and the data they collected are discarded; whenever possible, those deleted cases are re-fielded by new staff. In most instances, however, our investigations reveal well-intentioned interviewers who encountered challenges in the field, and we correct the problem by re-emphasizing the importance of our fieldwork and survey procedures.

Q: After fieldwork is completed, what steps does Pew Research Center take to ensure the quality of survey data?

The dataset is first reviewed by our research partners, firms hired specifically for their considerable experience polling in the relevant countries and for their established relationships with local firms. They are our first line of review: they alert us if they suspect, or if their checks suggest, that fieldwork procedures were not followed, and in that case we work with the local firm to address the issue.

Once we receive a dataset, we implement a range of quality-control procedures. We pay special attention to the paradata (data about the interview process itself) for all completed interviews, looking specifically for suspicious patterns in interview length, location and time, as well as per-interviewer workload and success rate. If we find anomalies that cannot be logically explained, we take a closer look at the responses recorded by the interviewer in question, searching for inconsistent responses, extreme values and duplicate records. This review process is iterative: even if a first analysis of the paradata raises no concerns, we go back and delve more deeply into the paradata if we subsequently find curious patterns in the responses to survey questions.
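
As one illustration of this kind of screening, the sketch below flags interviews whose length is an extreme outlier under a modified z-score built from the median and the median absolute deviation; the 3.5 cutoff is a common rule of thumb, not a Pew-specific standard:

```python
import statistics

def flag_duration_outliers(durations, cutoff=3.5):
    """Flag interview durations with an extreme modified z-score,
    computed from the median and the median absolute deviation (MAD)."""
    med = statistics.median(durations)
    mad = statistics.median(abs(d - med) for d in durations)
    if mad == 0:
        return []  # no spread to measure against
    return [
        (i, d) for i, d in enumerate(durations)
        if abs(0.6745 * (d - med) / mad) > cutoff  # 0.6745: standard scaling constant
    ]

durations = [31, 28, 35, 30, 33, 29, 8, 32, 30, 90]  # minutes per interview
print(flag_duration_outliers(durations))  # the 8- and 90-minute interviews stand out
```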

Q: Do you look for duplicate records, and what does their presence indicate?

One of many ways survey data quality can be compromised is through the inclusion of duplicate records, where one set of responses is included more than once. This could be unintentional, an accident of programming or data entry. Alternatively, it could be intentional fraud: a case of an employee, whether interviewer, supervisor or company executive, deliberately cutting corners to meet sample requirements, avoid hassles or lower costs.

We currently check for interviews in which there is a 100% match across all variables (including the demographic questions) and cases in which there is a 100% match across only the substantive variables. We also examine the data at a lower match threshold, taking precautions to ensure that interviews are in fact unique and meet our quality standards. We do detect duplicate records in some cases, though they normally account for 1% or less of a sample. Our standard assumption is that these are an indication of an error or low-quality interviewing, and we tend to discard them.
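
A sketch of those exact-match checks using pandas, with invented column names (`q1` and `q2` stand in for the substantive survey questions):

```python
import pandas as pd

df = pd.DataFrame({
    "age": [34, 52, 34, 34],
    "sex": ["F", "M", "F", "M"],
    "q1":  [3, 1, 3, 3],   # substantive survey questions
    "q2":  [2, 4, 2, 2],
})
substantive_cols = ["q1", "q2"]

# 100% match across all variables, demographics included (rows 0 and 2)
full_dupes = df[df.duplicated(keep=False)]

# 100% match across only the substantive variables (rows 0, 2 and 3)
substantive_dupes = df[df.duplicated(subset=substantive_cols, keep=False)]

# Lower-threshold check: share of answers two records have in common
def match_share(row_a, row_b, cols):
    return sum(row_a[c] == row_b[c] for c in cols) / len(cols)

print(match_share(df.loc[0], df.loc[3], df.columns))  # 0.75: a near-duplicate pair
```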

It is difficult to determine whether a duplicate record is due to intentional fraud, even after extensive investigation. The biggest problem with identifying falsified data of any kind is that there is usually no smoking gun in the dataset. Falsified data can take any form, not just a straight (or near) duplication of answers: interviewers looking to cut corners could invent respondents altogether, or interview friends rather than the selected respondents. Because of this, when we suspect a problem with a dataset, we analyze all available data – including paradata, substantive data, data on interviewers and contact data – to evaluate what might have gone wrong. We then work with our research partners to closely investigate what happened and how best to address the issue so the project can be completed successfully.

When it comes to all elements of data quality in our international and cross-national survey projects, our preference is to learn as much as we can from each study, so that future projects are better positioned to build collaborative relationships with local partners and produce high-quality comparative data.