October 9, 2013

Study: Polls may underestimate anti-gay sentiment and size of gay, lesbian population

Do conventional public opinion surveys under-report the proportion of gays and lesbians in the population? And do they underestimate the share of Americans who hold anti-gay views?

A team of researchers from Ohio State and Boston Universities say the answer to both questions is yes.

“We find substantial under-reporting of LGBT [lesbian, gay, bisexual and transgender] identity and behaviors as well as underreporting of anti-gay sentiment … even under anonymous and very private conditions,” the researchers wrote in a working paper just published by the National Bureau of Economic Research.

The study was conducted by economists Katherine B. Coffman and Lucas C. Coffman of Ohio State University and Keith M. Marzilli Ericson of Boston University.

They used a novel research method that goes beyond the privacy and anonymity afforded by best-practice survey techniques, making it virtually impossible to connect individual respondents with their answers to sensitive questions. They call this technique the “Veiled Report” method.

They then compared the results obtained with the “Veiled Report” method with responses from a control group that answered questions posed in a more conventional way. Their goal was to see how social desirability bias—the tendency of people not to reveal behaviors or attitudes that they fear may be viewed as outside the mainstream—may affect reporting on these sensitive topics.

In the results using the experimental technique, self-reports of non-heterosexual identity amounted to 19% of those surveyed with the Veiled Report method – 65% higher than the 11% in the control group. The share reporting same-sex sexual experiences also grew from 17% in the control group to 27% in the Veiled Report group, they reported. (Because their experiment did not use a random sample of the adult population, the researchers do not attempt to estimate the actual size of the country’s gay and lesbian population.)

The experimental method also increased reported rates of anti-gay sentiment. For example, the share who disapproved of having an openly gay manager at work increased from 16% in the control group to 27% in the Veiled Report group. The proportion who thought it should be legal to discriminate in hiring on the basis of sexual orientation also rose from 14% to 25%.

However, those in the Veiled Report treatment were less likely than those in the Direct Report treatment to say that a homosexual person “can change their sexual orientation if they choose to do so” (15% in the Veiled Report group vs. 22% in the Direct Report group). As the authors suggest, “This indicates that participants saw it more socially desirable to report that sexual orientation is changeable, which goes in the opposite direction of a general ‘pro-LGBT’ norm.”

Here’s how the experiment worked. Researchers went online to recruit more than 2,500 study participants. These recruits were randomly divided into two groups. Both groups took an online survey on their personal computers and never disclosed their names or other information that could identify them.

Members of the two groups were asked eight questions about sexuality that people might be reluctant to answer truthfully, if at all. “Three questions deal with participants’ sexuality: whether they consider themselves heterosexual, whether they are sexually attracted to members of the same sex, and whether they have had a sexual experience with someone of the same sex,” according to the researchers. “The remaining five questions examine attitudes and opinions related to sexuality—participants are asked about public policy issues, such as legal recognition of same-sex marriage, as well as personal beliefs and feelings, such as being comfortable with LGBT individuals in the workplace.”

Depending on the group, the eight survey questions were asked in slightly different ways. The first group—the “Direct Report” or control group—started with a question that looked very different from those found in a standard public opinion poll.

What they saw on their computer screen was a list of four statements and an instruction about how to answer. Here’s an example of one of the list questions used in the study, followed by the first of the eight sensitive questions:

[Image: sample Direct Report question]

Respondents never revealed their answers to any of the four individual questions on the list. Instead, they mentally added up their “yes” responses and entered their total.

The remaining questions included the three that measured sexual orientation and the five that measured attitudes toward gays and lesbians. All eight appeared in a standard “yes” or “no” format as shown in the example, and respondents in the control group were instructed to record their answer directly below each question.

The second group—the “Veiled Report” group—got only list questions. Altogether there were eight separate list questions, each combining four innocuous statements with one of the eight sensitive queries.

They were again instructed to add up their “yes” responses to each list and record only the total. Unlike the control group, those in the Veiled Report group were never asked to record their answer to any of the individual items.

Here’s the list question asked of those in the Veiled Report group that is parallel to the one that the control group saw:

[Image: sample Veiled Report question]

The other seven sensitive questions were framed the same way: four benign statements plus one sensitive item. Each respondent in the test group thus saw five questions per list—the four innocuous ones and one of the sensitive items—and was asked to total up and enter their “yes” responses.
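
To make the design concrete, here is a minimal sketch of how such a battery could be assembled, assuming nothing beyond the structure described above; every item wording below is an invented placeholder, not one of the study’s actual questions:

```python
# Hypothetical sketch of the Veiled Report battery design: each list pairs
# four innocuous yes/no statements with one sensitive item, and respondents
# report only their TOTAL number of "yes" answers per list. None of these
# wordings are taken from the actual study.

innocuous_pool = [
    "I have flown on an airplane in the past year.",
    "I drink coffee most mornings.",
    "I have lived in a city of more than 1 million people.",
    "I own a bicycle.",
    "I watched a movie last weekend.",
    "I have a driver's license.",
    "I ate breakfast this morning.",
    "I have visited another country.",
]

sensitive_items = [
    "I do not consider myself heterosexual.",
    "I have had a sexual experience with someone of the same sex.",
    # ...the actual study used eight sensitive items in all
]

def build_veiled_lists(innocuous, sensitive):
    """Give each sensitive item its own block of four innocuous statements."""
    return [innocuous[4 * i : 4 * i + 4] + [item]
            for i, item in enumerate(sensitive)]

for i, veiled_list in enumerate(build_veiled_lists(innocuous_pool, sensitive_items), 1):
    print(f"List {i}: {veiled_list}")
```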

By comparing the average number of “yes” answers in the two groups, researchers could estimate the proportion of respondents in the Veiled Report group who had said yes to each sensitive question.
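
The arithmetic behind that comparison is a simple difference in group means. Here is a minimal sketch of the estimator; the function name and the per-respondent totals are invented toy values, not the study’s data:

```python
# List-experiment ("Veiled Report") estimator: the sensitive-item prevalence
# is the difference between the veiled group's mean total on the five-item
# list and the control group's mean total on the same four innocuous items.
# The response totals below are invented toy data, not the study's results.

def veiled_report_estimate(veiled_totals, control_totals):
    """Estimate the share answering 'yes' to the sensitive item."""
    mean_veiled = sum(veiled_totals) / len(veiled_totals)     # 4 innocuous + 1 sensitive
    mean_control = sum(control_totals) / len(control_totals)  # 4 innocuous only
    return mean_veiled - mean_control

veiled = [2, 3, 2, 2, 2, 3, 2, 2, 2, 2]    # toy per-respondent totals (mean 2.2)
control = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]   # toy per-respondent totals (mean 2.0)
print(f"Estimated prevalence: {veiled_report_estimate(veiled, control):.0%}")  # 20%
```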

Did the experiment work? You be the judge:

“In the Direct Report treatment, 11% of the population reports that they do not consider themselves heterosexual (8% for men, 16% for women). In the Veiled Report treatment, this increases to 19% (15% for men, 22% for women). At the same time, the share of participants reporting having had a sexual experience with someone of the same sex increases from 17% in the Direct Report treatment to 27% in the Veiled Report treatment, a 59% increase,” they wrote.

No significant difference was detected between the test groups on the question asking whether the respondent was attracted to members of the same sex.

A cautionary note: There are many challenges in estimating the size or the composition of the LGBT population, starting with the question of whether to use a definition based solely on self-identification or whether to also include measures of sexual attraction and sexual behavior. For a detailed look at the demographic characteristics of the LGBT population, see A Survey of LGBT Americans published earlier this year by the Pew Research Center.

The study by the Ohio State and Boston University researchers, while raising questions about traditional public opinion polls, does not attempt to draw its own conclusions about the size of the LGBT population or public attitudes about it, since the participants were not a random or representative sample of all adults 18 and older. (The researchers used Amazon’s Mechanical Turk website to recruit participants.)

In fact, they said their study group was younger, more educated, more politically liberal and less likely to be Republican or to describe themselves as at least “moderately religious” than the country as a whole. They noted that some of the groups under-represented in their study are probably more likely to hold anti-gay views or less willing to say that they are not heterosexual.

Category: Social Studies

Topics: Gay Marriage and Homosexuality, Polling

Rich Morin is Senior Editor at the Pew Research Center’s Social & Demographic Trends Project.


15 Comments

  1. Jimmy Boy, 1 week ago

    I can confirm 100% anti-gay sentiment is VERY rampant in this country! I’m straight, fully desire to have sex with females, but since I don’t follow the unwritten “code” by wearing long/super baggy, droopy pants in summer, instead continuing to wear the same shorts that were common in the ’80s that end well above the knee, and wear Speedos to the beach, I’m considered “gay”, so any anti-gay sentiment is deemed to be thrown my way. I haven’t changed in 40 years, but since I don’t follow the majority anymore (which I think the majority is stupid, I don’t have shorts with my butt hanging out, and I’m not on the other end either with pants drooped around my knees), so somehow my orientation flipped. Ask the females I’ve had sex with if I’m gay, and they’ll give you a good laugh, as I’m as manly – probably more so – in the bedroom than any “He-Man” they’ve been with who spews those anti-gay remarks!

    Reply
  2. jorge, 9 months ago

    How is this supposedly new research methodology different from the list experiments that have been developed by public opinion scholars? (look up Sniderman, Kuklinski, many others). Calling it something else (veiled response) doesn’t make it anything new.

    Reply
    1. jorge, 9 months ago

      i find it hard to believe that they came to this method independently without having known of these scholars and the work that they pioneered in public opinion research prior to putting this current paper out there.

      Reply
  3. Greg Edmond, 10 months ago

    If you circle 0, then you are essentially giving away all your answers. Not very subtle.

    Reply
  4. Derrick, 11 months ago

    “By comparing the average number of “yes” answers in the two groups, researchers could estimate the proportion of respondents in the Veiled Report group who had said yes to each sensitive question”

    That is a horrible HORRIBLE way of statistical analysis. I’m a scientist, I do data analysis all the time but my data are hard numbers and I use equations and tables that give hard numbers. And when I estimate, I have to take into consideration my error, compounded by more analysis. They are estimating their data for the veiled group. Estimating. And with two groups of different people, that estimation has enormous error. Those percentages might as well be the same number.

    Reply
    1. Geoffrey, 10 months ago

      Really? I’m a professional statistician and it looks reasonable to me. I would like to see error bars on their estimates, and the info provided here isn’t enough to calculate exactly what those error bars would be. But the binary nature of the questions limits the possibilities.

      Taking the “am not heterosexual” question as an example: an almost* worst-case approach is to assume each of the four “innocuous” questions for each respondent is an independent Bernoulli variable with expectation 50%, and to treat the “am not heterosexual” question (whether asked separately or as part of a total) as an independent Bernoulli variable with expectation ~ 20%.

      Under those assumptions, with a sample size of 2500 split evenly between the two methods, the standard error for estimating difference in the “am not heterosexual” question between those two methods works out at about 4%. The difference reported (7%) would then be borderline-significant at p=0.05 with a z-score of slightly less than 2. In practice, the expectations for the “innocuous” questions are unlikely to be exactly 50%; either higher or lower reduces the variance & hence the error in estimation.
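
      One way to reproduce that roughly-4% figure, as a sketch under the same assumptions stated above (not numbers taken from the paper):

```python
import math

# Worst-case check of the standard error, under the assumptions above:
# four independent Bernoulli(0.5) innocuous items, one Bernoulli(0.2)
# sensitive item, and 2,500 respondents split evenly between methods.
n = 1250                        # respondents per treatment group
var_innocuous = 4 * 0.5 * 0.5   # variance of the four-item innocuous total
var_sensitive = 0.2 * 0.8       # variance of the sensitive Bernoulli item

# Veiled estimate = mean(5-item totals) - mean(control 4-item totals),
# then compared against the control group's directly reported answer.
var_veiled = (var_innocuous + var_sensitive) / n + var_innocuous / n
var_direct = var_sensitive / n
se = math.sqrt(var_veiled + var_direct)
print(f"standard error ~ {se:.3f}")  # ~0.043, i.e. roughly 4 points
```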

      It may seem like an odd technique, but modal bias is a big problem, and sometimes creativity is needed to get around it. For a similar technique, look up “randomised response” methods.

      *I say “almost” because I’ve assumed independence; you can make it worse if you suppose a positive correlation between answers to those questions. But that seems like an unlikely supposition.

      Reply
      1. Gabe, 10 months ago

        Cool that someone who specifically studied the field answered the “scientist” in an objective, more informed way; this usually doesn’t happen.

        Reply
  5. MikeKSF, 11 months ago

    Makes me wonder how many times I’ve been harmed for being out with my company. Those times the demotion or lack of promotion didn’t make sense, being the highest-rated manager. I can only hope the way this group was chosen inflated the numbers.

    Reply
  6. Concerned former journalist, 11 months ago

    To whom it may concern:

    As a former journalist, I am appalled at the reporting of this info. First off, Pew’s sample is so small (2500 people) (We have around 300 million in America) and 2, they are self selected. This study tells us nothing. It only gives us a picture of 2500 people who are willing to offer their opinion. Many who offer their opinion offer it to slant the perspective people take on an issue they want to see change.

    The writing of this article points out one good thing though. It does point out that people are unwilling to say their real opinion unless they can do it privately. That may say a lot of things. Perhaps it is shame over their behavior/beliefs or fear over what others may think of them or fear others will harm them.

    Either way, I encourage the paper and writer to wait for bigger samplings and more unbiased populations to write an article such as this.

    We don’t need any more yellow journalism catering to our financial supporters. We need to tell people what is really going on and not what we, or they want to hear.

    A concerned former journalist

    Reply
    1. Rich Morin, 11 months ago

      Thank you for your comment. But a slight correction is in order. This is not a Pew Research Center study so it is not “Pew’s sample.” The study was done by a team of academics at Ohio State University and Boston University. Pew Research did not fund, sponsor or otherwise support this work or those who did it. We at Pew Research first learned about this research at the same time the rest of the world did: from an email blast sent on Monday by the National Bureau of Economic Research, the organization that published the study.

      Reply
    2. Grant Haertter, 11 months ago

      First of all, the number of people in the sample is not unusual, if you look at many of the political polls done during races. But, I do believe that the population is not really representative of the general population and must be done again with a different group. This method of data gathering is fascinating, and I believe it is a much better way of gathering this data considering the reluctance of many to admit to their feelings. As this method of data gathering becomes better known, however, it may eventually be less effective, in that folks would be aware that they are indeed giving themselves away.

      Reply
    3. Justin, 11 months ago

      A few things. First off, a sample size of 2500 is more than sufficient. If you study any Intro to Statistics textbook it will explain how you can use the data from a small, REPRESENTATIVE sample and get a LIKELY accurate idea about a much much larger population. There is some chance the data will be skewed but the normal rigours of academic research insist that such a chance be under 5% in order to be considered valid, and findings generally have to be reproducible to be accepted. Secondly, they admit their population is NOT representative; that’s why they don’t claim to estimate the percentage of the population that holds these views or is GLBT. BUT of this self-selecting group, individuals were RANDOMLY put into two groups, meaning the differences between the two groups in aggregate should cancel out to negligible. Their findings show that for a population, and therefore perhaps other populations, people will answer these kinds of questions differently under different question structuring, and that these differences may more accurately depict the population’s actual feelings.

      Reply
    4. Jermey Light, 11 months ago

      If the statistics say there’s a less than 5% chance a result at least as extreme is due to chance (as is the case with difference in answering the “is not heterosexual” question–and it is less than 1% chance for the “discriminate against your homo. manager” question), then you can only dispute the method. The bias in the sample population should only be considered suspect for this kind of study if you think the kind of people visiting Mechanical Turk are less likely than the median individual to answer a question honestly if its result can be directly interpreted. That’s difficult to say and I don’t personally know of sociological evidence that would suggest that. So, publicizing the results of this study might actually help a larger-sample replication of this study get organized and done. It may even lead to a change in standards for assessing controversial individual dispositions like these. And as the article’s discussion points out, this may not even be the best way to assess homosexuality–but it does at least suggest something about people’s character and their natural suspicion about research.

      Reply
    5. Ben Hyle, 11 months ago

      A sample size of 2500 is actually a rather large sample size (much larger than samples used for political polling, and those are generally quite accurate), and if it is a truly random sample, it is in fact quite accurate. The margin of error on a sample size that large, using the whole US as the population sample, is 2%. This means that, with a single exception, all of those results are statistically significant.
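
      That 2% figure can be checked against the standard margin-of-error formula, assuming a 95% confidence level and a worst-case 50/50 split on a yes/no question:

```python
import math

# 95% margin of error for a simple random sample of n = 2,500,
# assuming a worst-case 50/50 split on a yes/no question.
n = 2500
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"margin of error ~ {moe:.1%}")  # ~2.0%
```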

      Beyond that, slanting opinions are considered in these surveys as well. After all, the magnitude of this effect has surely been tested via many, many surveys over time and meta-analyses.

      Please learn statistics before commenting about statistics.

      Reply
    6. stephan urkell, 11 months ago

      if you had attempted to thoroughly read the article, you would have noticed that the report was compiled by ohio state university. is that why you’re a “former journalist”? couldn’t be bothered to pay attention, just profess mock-indignation at other people’s reports?

      Reply