In key ways, writing surveys to assess foreign public opinion parallels how Pew Research Center approaches questionnaire design for U.S. projects. In both cases, center staff carefully consider question wording, when to ask open-ended versus closed-ended questions, question order and how to measure change over time, all of which can be read about here.
That said, designing questions for domestic and cross-national studies does differ in important ways. Cross-national questionnaires have to be developed with an eye toward comparability across dozens of languages and cultures. For example, the 2014 Global Attitudes survey instrument was translated into more than 70 languages, while the 2011-12 survey of the world’s Muslims was fielded in more than 80 languages.
Translation is a multi-step process. For questions asked on earlier surveys, the center relies on translations used in previous questionnaires in order to maintain comparability of survey data over time. For new questions, Pew Research staff begin by submitting the questions to professional linguists. The linguists evaluate each question for ease of translation and make recommendations to guide proper translation. New questions, along with the linguists’ recommendations, are then submitted to local research organizations, which translate the items into the appropriate language(s). Once translations are complete, they are again reviewed by professional linguists, who provide feedback to the translators. Pew Research Center staff are consulted regarding any serious debates about translation, and the center issues final approval of the translated survey instrument prior to fieldwork.
Throughout the translation process, Pew Research Center strives for questions that are comparable at the level of meaning, not simply literal translations of the original English versions. An example is the construction frequently used in U.S. surveys: “Which of these two statements is closer to your view, even if neither is exactly right.” While easily comprehended in American English, this question actually includes two idiomatic expressions – “closer” and “right” – and presents challenges for translators trying to replicate the meaning in another language. A better guide for translators, arrived at with help from professional linguists, is to rephrase the question in English to read as, “Which of these two statements is most similar to your point of view, even if it does not precisely match your opinion.”
In addition to being shaped by the translation process, the final cross-national survey instrument used in the field is influenced by cultural and political sensitivities. These are more than a matter of politeness. Especially in countries where surveys are administered by interviewers going door-to-door, asking about taboo subjects can expose interviewers and entire research firms to legal or even physical harm. A case in point is Afghanistan, where pretests for the world’s Muslims study revealed that questions about Christians or Christianity were perceived as a form of proselytizing. Pew Research Center removed the questions in Afghanistan after it became clear that interviewers felt their safety could be at risk from respondents or local authorities alarmed by such questions. The center has also omitted questions due to political sensitivities. In Vietnam, for instance, it has followed the advice of local research organizations not to ask directly about the national government or leadership, as these subjects are seen as inviting undue scrutiny from authorities who could detain or arrest interviewers.
In some countries, the degree of sensitivity associated with a given subject makes it impractical even to field a survey. Saudi Arabia, India and China, for example, were each omitted from the world’s Muslims study due to the severe constraints and potential risks associated with fielding a survey about Muslim identity, beliefs and practices. In each case, it was decided that the quality of survey data would have been undermined if respondents felt uncomfortable or not free to express their opinions, whether because of pressure from the authorities or for other reasons. Based on a careful assessment of the conditions under which face-to-face surveys could be conducted, and in consultation with country experts and local polling organizations, we reluctantly decided that in these countries we could neither assure the safety of interviewers nor meet our standards for data quality.