The Pew Research Center often receives questions from visitors to our site and users of our studies about our findings and how the research behind them is carried out. In this feature, senior research staff answer questions relating to the areas covered by our seven projects, ranging from polling techniques and findings to trends in media, technology, religion, demographics and global attitudes. We can’t promise to respond to all the questions that we receive from you, our readers, but we will try to provide answers to the most frequently received inquiries as well as to those that raise issues of particular interest.
If you have a question related to our work, please send it to firstname.lastname@example.org.
- How do those who don’t search for health information online differ from those who do?
- Isn’t it difficult to poll in some countries? And how confident are you in the poll findings?
- See other recent Ask the Expert questions
Q: You’ve reported on how many Americans have turned to the Internet for health care information. What about those who have not taken part in this trend? How do they differ from those who go online for this information?
A: Education significantly affects someone’s likelihood of having internet access, which in turn influences a person’s likelihood of searching for health information online. For example, among adults who have less than a high school education, just 42% go online and, of those, 62% say they gather health information online. That means about three-quarters of U.S. adults who have less than a high school education say they do not get health information online. By comparison, college graduates are nearly all online (94%) and 89% of that group gathers health information online. Therefore, only about one-in-six U.S. adults with a college degree say they do not get health information online.
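Those two bottom-line figures combine the two rates quoted above. A minimal sketch of the arithmetic in Python (the function name is ours, for illustration):

```python
# Combine the two rates quoted above: the share of a group that is online,
# and the share of those online users who gather health information online.
def share_not_getting_health_info(share_online, share_health_among_online):
    """Fraction of the whole group that does NOT get health info online."""
    return 1 - share_online * share_health_among_online

# Less than high school education: 42% online, 62% of those gather health info
print(share_not_getting_health_info(0.42, 0.62))  # ~0.74, about three-quarters

# College graduates: 94% online, 89% of those gather health info
print(share_not_getting_health_info(0.94, 0.89))  # ~0.16, about one-in-six
```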
In 2002, we asked a basic screening question to see if respondents ever use the internet to look for health information or medical advice. Of those who answered in the negative, nearly half (47%) of internet users said the major reason they did not search for health information online was that there were no health or medical issues of immediate concern to them. Almost the same number (46%) said they were satisfied with the health and medical information they got elsewhere. A smaller number (12%) said that much of the information on the internet could not be trusted, and 9% said they would not know where to start looking for health information online.
Susannah Fox, associate director, digital strategy, Pew Internet & American Life Project
Q. I’ve never been called for a poll. What are the odds I’d be selected for a survey? And, if my home is called, how do you determine who you talk to?
A. You have roughly the same chance of being called as anyone else living in the United States who has a telephone. This chance, however, is only about 1 in 154,000 for a typical survey by the Pew Research Center for the People & the Press. To obtain that rough estimate, we divide the current adult population of the U.S. (about 235 million) by the typical sample size of our polls (usually around 1,500 people). Telephone numbers for Pew Research Center polls are generated through a process that attempts to give every household in the population a known chance of being included. In practice, that’s difficult to achieve. For one thing, we avoid calling large blocks of telephone numbers that have not been assigned to any households. Because the process of identifying such blocks of numbers is imprecise, some households may get excluded from the sample. Similarly, you will have a greater chance of being called if you have both a cell phone and a landline phone (since you have two different ways of landing in our sample). And if you don’t have a telephone at all (about 2% of households), then of course you have no chance of being included in our telephone surveys.
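That back-of-the-envelope calculation can be reproduced directly. A minimal sketch using the round numbers cited above (the exact odds shift with the population estimate used):

```python
adult_population = 235_000_000  # approximate U.S. adult population, as cited above
sample_size = 1_500             # typical sample size of a Pew Research Center poll

# Chance that any given adult ends up in a single survey's sample
p_selected = sample_size / adult_population
print(f"about 1 in {round(1 / p_selected):,}")  # about 1 in 156,667 with these round inputs
```

With these round inputs the odds come out near 1 in 157,000 — the same order of magnitude as the “about 1 in 154,000” figure above; the small difference reflects rounding in the population estimate.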
Once numbers are selected through random digit dialing, the process of selecting respondents within a household is different for landline and cell phone numbers. When interviewers reach someone on a landline phone, they randomly ask half the sample if they can speak with “the youngest male, 18 years of age or older, who is now at home” and ask the other half of the sample to speak with “the youngest female, 18 years of age or older, who is now at home.” If there is no eligible person of the requested gender at home, interviewers ask to speak with the youngest adult of the opposite gender who is now at home. This method of selecting respondents within each household helps us obtain more interviews with young people, who are often more difficult to reach than older people because of their lifestyles. One implication of selecting a respondent within a household is that the probability of being interviewed for the survey depends on how many adults live in the household and are at home at the time of the call.
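The within-household selection rule described above can be sketched as follows; the data format and function name are hypothetical, for illustration only:

```python
import random

def landline_selection(adults_at_home):
    """Pick the respondent for a landline call, per the rule described above.

    adults_at_home: list of (gender, age) tuples for adults now at home.
    Half of the sample is asked for the youngest male, the other half for
    the youngest female; if no one of the requested gender is home, the
    youngest adult of the opposite gender is requested instead.
    """
    requested = random.choice(["male", "female"])  # splits the sample in half
    candidates = [a for a in adults_at_home if a[0] == requested]
    if not candidates:  # fall back to the opposite gender
        candidates = [a for a in adults_at_home if a[0] != requested]
    return min(candidates, key=lambda a: a[1]) if candidates else None
```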
Unlike a landline phone, a cell phone is assumed to be a personal device. As a result, for cell phone calls we do not attempt to select a respondent from all of the adults in the household. Instead, we ask if the person who answers the cell phone is 18 years of age or older. If they are 18 or older, we attempt to interview them.
Once we’ve completed a survey, we adjust the data to correct for the fact that some individuals (e.g., those with both a cell phone and a landline) have a greater chance of being included than others. For more on how that’s done, see the discussion of weighting in our detailed methodology statement.
Scott Keeter, Director of Survey Research, Pew Research Center
President, American Association for Public Opinion Research, 2011-2012
Q. Each year, the Pew Research Center’s Global Attitudes Project conducts public opinion polls around the world. Isn’t it difficult to poll in some countries? And how confident are you in the poll findings?
A. There is no question that the process of fielding surveys in foreign countries can be challenging, especially as the majority of our polls are conducted through face-to-face interviews. That said, the rigorous quality standards we apply to our Pew Global Attitudes survey work abroad are the same as those we apply here in the U.S. Namely, we strive to ensure that our surveys accurately represent the populations of individual countries, and that we ask questions that deliver valid information about how people view important events and issues of the day.
The task of ensuring our surveys represent the adult population of a foreign country requires employing rigorous sampling methods. The first step is to work closely with our principal research partners to identify local polling firms with knowledgeable staff and proven track records in designing large survey samples. The good news is that political and economic changes over the past two decades have generally increased the demand for social and market research and have made it easier for us to find local, capable research firms.
The not-so-good news is that, compared with the U.S., the penetration of landline phones in many countries has not gotten to a point at which we have felt comfortable fielding national surveys by phone. And while mobile phones are quickly becoming an integral part of life for many people around the globe, reliable information about mobile users and how to integrate them into national samples remains limited. Thus, with the exception of a few countries such as Britain, France, Germany and Japan, we have erred on the side of reliability and have administered our surveys in person, rather than by phone. This means that unlike national surveys in the U.S., which can be completed in just days, most Global Attitudes surveys take two or more weeks to complete.
Due to their use of proven sampling techniques, the local vendors we work with can achieve nationally representative surveys by conducting face-to-face surveys with about 1,000 respondents. The key is ensuring that these respondents are selected in a random, unbiased fashion – and that all adult members of a country’s population are eligible for inclusion in a survey. More often than not, we meet these requirements by using multi-stage, cluster samples.
What this means is that rather than randomly selecting individuals directly (by phone, for example), we first randomly select clusters of individuals – beginning with relatively large territorial units, akin to counties in the U.S. Once these primary clusters are selected, we randomly select smaller territorial units, until we work our way down to city blocks or villages. At this stage, interviewers either visit addresses selected randomly from a list, or they follow a so-called “random walk” in which they visit every third or fourth residence along a set route. At each residence, interviewers randomly select a respondent by using a Kish grid (a detailed list of all household members) or by selecting the adult who has had the most recent birthday.
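The “most recent birthday” rule at that final stage can be sketched as below; the data format and function names are hypothetical, for illustration:

```python
from datetime import date

def days_since_birthday(birth_month, birth_day, today):
    """Days elapsed since this person's most recent birthday."""
    last = date(today.year, birth_month, birth_day)
    if last > today:  # birthday hasn't occurred yet this year
        last = date(today.year - 1, birth_month, birth_day)
    return (today - last).days

def last_birthday_selection(adults, today):
    """adults: list of (name, birth_month, birth_day) for household adults.
    Returns the name of the adult whose birthday was most recent."""
    return min(adults, key=lambda a: days_since_birthday(a[1], a[2], today))[0]

household = [("Ana", 1, 15), ("Boris", 5, 20), ("Chen", 9, 1)]
print(last_birthday_selection(household, date(2011, 6, 1)))  # Boris: May 20 is most recent
```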
As mentioned above, in most countries surveyed by Pew Global Attitudes, multi-stage, cluster samples are used to field nationally representative surveys. However, in a few instances we are unable to conduct full, national surveys face-to-face. Sometimes, the limiting factors are cost and time. In China, for example, it would take many weeks to collect face-to-face interviews from across the country, and it would be prohibitively expensive to transport trained interviewers long distances. Therefore, for now our surveys represent only 57% of the Chinese population (mostly those in urban areas). We hope to expand our coverage in the future.
In other instances, concern for the safety of interviewers keeps us from fielding truly national surveys. In Pakistan, for example, our surveys exclude 15% of the population due to concerns about sending interviewers into frequently violent border regions. As in China, this means our respondent base is predominantly urban. But regardless of the scope, our surveys in China and Pakistan are held to the same methodological standards as our surveys conducted elsewhere.
One of the quality checks we perform for all Pew Global Attitudes surveys is to compare the demographic characteristics of the people included in our surveys with census or other official data that describe the gender, age or educational makeup of the population. In cases where our data departs more than a few percentage points from official statistics, we may decide to adjust our data through the mathematical procedure of weighting. Weighting is a common practice in survey research. Properly applied, it can improve our ability to accurately report how prevalent or variable attitudes are in a given society.
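In its simplest form, that adjustment assigns each demographic cell a weight equal to its population share divided by its sample share. A minimal one-dimensional sketch with made-up numbers (real weighting typically balances several dimensions at once, e.g. by raking):

```python
def poststratification_weights(sample_shares, census_shares):
    """Weight for each demographic cell = population share / sample share."""
    return {cell: census_shares[cell] / sample_shares[cell]
            for cell in sample_shares}

# Hypothetical example: the sample over-represents women relative to the census
weights = poststratification_weights({"men": 0.40, "women": 0.60},
                                     {"men": 0.48, "women": 0.52})
# Men are weighted up (1.2) and women down (~0.87), so that the weighted
# sample matches the census gender split.
```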
For all our surveys, of course, we calculate country-specific margins of sampling error, based on the number of people surveyed and whether the survey was based on random-digit dialing by phone or a multi-stage, cluster sample. These margins of error are integral to our ability to identify attitudinal shifts or statistically significant differences. Over the years, our key trends have proven highly reliable; they have moved in directions that track well with political and economic developments or have remained relatively stable in the absence of major events or changes at the local, regional or global level. This is another reason we stand behind the accuracy of our data.
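As a sketch of that calculation: for a proportion near 50%, the 95% margin of sampling error under simple random sampling is about 1.96 x sqrt(p(1-p)/n), and cluster samples carry a design effect that widens it. The design-effect value below is illustrative, not a figure from our surveys:

```python
import math

def margin_of_error(n, p=0.5, design_effect=1.0, z=1.96):
    """95% margin of error for a proportion. Multi-stage cluster samples
    typically have a design effect > 1, which widens the interval relative
    to simple random sampling."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))                     # 3.1 points (simple random sample)
print(round(100 * margin_of_error(1000, design_effect=1.5), 1))  # 3.8 points (illustrative cluster design)
```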
Beyond the technical aspects of sample design, our confidence in Pew Global Attitudes’ findings stems from the time and effort we put into the design and translation of our questionnaires. In a given year, our survey questions may be translated into twenty or more languages. Coordinating so many translations can be daunting. Fortunately, since the Pew Global Attitudes Project began in 2002, we have assiduously archived translated versions of the annual questionnaire. As we develop each new round of the survey, we are able to draw on these tried-and-tested translations when we repeat specific items or trends.
But past translations are of little help when it comes to new questions or new countries. In such instances, we rely on local polling firms to first translate our questions into the appropriate local language or languages. As a standard operating procedure, we then have the translation re-translated into English by a bilingual person who has not seen the original English-language questionnaire. This is called back-translation, and it is highly useful for identifying questions that have been poorly or incorrectly translated. For complex questions or especially challenging languages, we often take the additional step of consulting with linguistic experts to perfect translation of new items.
Regardless of language, the goal of the translation process is always the same: to ensure that we ask questions that reflect our intended meaning, with results that can be compared cross-nationally.
A final reason for confidence in our findings is the careful training of interviewers and close supervision of fieldwork. In each country, prior to fieldwork, local research firms train their interviewers to properly administer the questionnaire. This includes briefing interviewers on the overall purpose of the survey, the intent of specific questions and how to manage both asking questions and recording answers. In the case of both phone and face-to-face surveys, interviewers participate in mock interviews in order to gain familiarity with the questionnaire. These training sessions can highlight ways to improve the administration of the survey so that questions are clearly communicated and answers correctly recorded.
Once fieldwork begins, interviews are regularly monitored by supervisors. For phone surveys, this typically involves a supervisor listening in to a live interview or calling back a respondent to verify that an interview was completed with an eligible individual. Similarly, with respect to face-to-face surveys, supervisors will travel with interview teams to urban neighborhoods or rural villages to make certain that interviewers visit residences randomly drawn from a list or selected randomly from a pre-determined route. Supervisors will also later visit a certain percentage of residences to confirm that eligible individuals have been interviewed.
Quality checks during the survey administration process help ensure that Pew Global Attitudes surveys reach their target populations and ask the intended questions. By using careful field supervision, appropriate sample design and thorough translations, we can be sure that our surveys accurately represent public opinion around the world.
Jim Bell is Director of International Survey Research for the Pew Research Center.
You can find a list of the project’s surveys here.
For more “Ask the Expert” questions, click here.