Attention conservation notice: 1400 words about a topic which, if you were really seriously interested in it, you would already have read the research I’m summarizing.
Nearly since the modern conception of sexual orientation was invented, there have been serious questions about how the various categories should be defined (if they even can be), and how many people could be so described. The definitional question has become particularly important thanks to our society’s fundamental essentialism: people ought to be free to express their sexuality in various ways, say the law and the media, because sexual orientations are innate and unchanging. Queer advocacy groups have pitched their message to play to this essentialism, and to give them credit, this strategy has won some major victories — but at what price? I have long worried about winning the fight for equal rights in a way that entrenches the wrong principles, and in particular, reinforces the social erasure of bisexuality.
To understand how much of a problem this is, it is necessary to get a handle on the size of the population, and non-majority orientations have historically been extremely difficult to get believable numbers on through traditional survey research methods. For many years, people in the gay community believed that they were about ten percent of the population, and this is still a commonly-held belief. (Bisexuality, of course, was relegated to a small fraction of that — perhaps one percent of the overall population — when it wasn’t ignored entirely.) This view is hard to square with modern scholarship about historical sexual behavior, both in our own Western societies and in tribal cultures from Polynesia to the Americas, particularly if one is committed to the essentialist view. One can of course argue that, sexual orientation being a modern concept, it is anachronistic to categorize historical behavior according to our current conception; but at the same time, one must assume that the ancient Greeks and Romans were subject to pretty much the same innate desires and drives as we moderns are, even if they conceptualized the resulting behavior under a different standard.
There have been a few recent attempts to better characterize the prevalence of non-heterosexual orientation, and non-heterosexual behavior more generally. A 2011 report from the UCLA Law School’s Williams Institute summarizes recent research findings using traditional survey methods and instruments, and concludes that only 3.5% of adults in the United States are “LGB” (and 0.3% have some measure of trans identity), with slightly more than half of the “LGB” population (and significantly more than half of the female “LGB” population) identifying as bisexual. However, another 4.7% of the population did not report an “LGBT” identity but did report same-sex sexual behavior, and a further 2.8% report same-sex attraction without reporting such behavior. (I am playing a bit fast and loose with the statistics here, but this is a close enough approximation to the report’s findings.)
These numbers are a summary of multiple surveys taken by different investigators with different survey protocols, and it’s hard to know what to make of them. My intuition is that they seem quite low, but they do fairly represent a consensus of the surveys the Williams Institute report used as sources. But one of the major critiques of the essentialist view is that it’s impossible to identify any sort of “innate sexual orientation” so long as non-majority preferences are still stigmatized; could it be that traditional survey methodologies, even with privacy-enhancing protocols (such as administration by computer in an otherwise private room), are insufficiently private for respondents to report truthfully? It turns out that there are survey protocols for measuring this — originally developed for investigating the prevalence of domestic violence, among other social ills — and a working paper from the National Bureau of Economic Research released late in 2013 demonstrates that the answer is “yes”.
The privacy-preserving survey protocol is called the “Item Count Technique” or ICT, and it works like this. First, a sample is drawn and randomly divided in half, into “control” and “treatment” subsamples. Each half of the sample is given a list of yes-or-no questions, and participants are asked to answer each question privately, reporting only the total number of “yes” answers. Unbeknownst to the participants, those in the “treatment” subsample get one additional question — which is the question the researchers are actually interested in; it’s known as the “veiled” question, because the researchers get aggregate information about its answer without learning the specific answer of any individual participant. The other questions included in the item count are carefully chosen so that some of them are negatively correlated; this ensures that the extreme totals are unlikely: if the treatment list contains N + 1 questions, a reported total of 0 or of N + 1 would positively identify how the respondent answered the veiled question. By comparing the tallies over the two subsamples — the difference in average totals estimates the fraction answering “yes” to the veiled question — the researchers can determine its prevalence in the population. (This means that cross-tabs are still possible where the veiled question is the dependent variable, but not the independent variable.)
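The estimator at the heart of the protocol can be sketched with a small simulation. Everything here is illustrative — the number of baseline items, the true prevalence, and the sample sizes are made up for the sketch, not taken from the NBER study:

```python
import random

random.seed(0)

TRUE_PREVALENCE = 0.19  # hypothetical "yes" rate on the veiled question
N_BASELINE = 4          # innocuous yes/no items shown to both subsamples
N_PER_ARM = 50_000      # simulated respondents in each subsample

def baseline_count():
    # Total "yes" answers to the innocuous items. In a real survey these
    # items are chosen (some negatively correlated) so that the extreme
    # totals are rare and no individual answer is revealed.
    return sum(random.random() < 0.5 for _ in range(N_BASELINE))

# Control respondents report only the baseline total.
control = [baseline_count() for _ in range(N_PER_ARM)]

# Treatment respondents fold the veiled question into their total.
treatment = [baseline_count() + (random.random() < TRUE_PREVALENCE)
             for _ in range(N_PER_ARM)]

# The ICT prevalence estimate is the difference in mean totals.
estimate = sum(treatment) / N_PER_ARM - sum(control) / N_PER_ARM
print(f"estimated prevalence: {estimate:.3f}")
```

Note that no single respondent’s answer to the veiled question is ever recorded; only the difference between the two subsamples carries the signal.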
The NBER researchers went a step further, and asked the “control” participants to answer the study question point blank, after submitting demographic data and their item count for the other questions. This allowed them to directly compare the response rates for the traditional and veiled protocols, and they did in fact find that significantly more survey participants answered “yes” using the veiled protocol. This research was done using a (non-probability) Internet sample, so it is difficult to draw conclusions about the population as a whole, but because the participants were assigned randomly between the two survey protocols, it is possible to draw some conclusions about the willingness of the people in their sample to truthfully answer a question about non-majority sexuality.
The differences between the NBER results and the Williams Institute findings are pretty striking. It is important to recognize that the population the NBER researchers surveyed, recruited through Mechanical Turk, is not representative of the population as a whole, and this research would need to be repeated with a true probability sample in order to make such a generalization. (The authors note “Our population has broad coverage of demographic characteristics, but is not representative of the U.S. as a whole (e.g., 18-30 year-old liberals are overrepresented in our sample).”) The NBER survey actually asked eight different “sensitive” questions, some of which were about sexuality, and some of which were about anti-gay prejudice, and the order of those questions was randomized to reduce priming effects. They found that there was some evidence that question ordering influenced the answers. But with those caveats in mind, let’s take a look at the actual findings.
The first question, and the one most directly relevant to my topic in this post, asked the participants whether they considered themselves heterosexual. As with all the other questions, this was given as a yes/no question, and no description was offered of what the “no” alternative might entail. For the control group, who were asked the question directly, 11% answered in the negative (“I am not heterosexual”), 8% of the men and 16% of the women. For the ICT group, however, that increased to 19% (15% men, 22% women). Another question asked respondents whether they had ever had a same-sex sexual experience, and here too, the veiled technique significantly increased response rates, from 17% (12% men, 24% women) all the way to 27% (17% men, 43% women). Both of these increases in reporting rate are significant at p < 0.05; for the first question, the increase is also significant at p < 0.01. (For all questions, n = 2516 overall, with 1270 in the “direct report” group.)
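As a rough sense-check on the scale of these differences, one can run a naive two-proportion z-test on the reported rates, taking the veiled group as the remaining 1246 respondents. A caveat: this treats the veiled estimate as an ordinary sample proportion, but the ICT estimate is a difference of means over item counts and has larger sampling variance, so this sketch will overstate significance relative to the paper’s own standard errors — it is a back-of-envelope check, not a reproduction of the authors’ analysis:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# "Not heterosexual": 11% direct report (n = 1270) vs. 19% veiled (n = 1246)
z_orientation = two_prop_z(0.11, 1270, 0.19, 1246)

# Same-sex experience: 17% direct report vs. 27% veiled
z_experience = two_prop_z(0.17, 1270, 0.27, 1246)

print(f"z (orientation): {z_orientation:.1f}")  # well past 2.58, i.e. p < 0.01
print(f"z (experience):  {z_experience:.1f}")   # naive test; true variance is larger
```

The naive test comes out more significant than the paper reports for the second question, which is exactly what the caveat about the ICT estimator’s variance predicts.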
On a third question (actually numbered 2 in the paper), the NBER researchers found no statistically significant difference in reporting same-sex attraction, as opposed to experience or orientation. Other questions included in the experiment looked at sentiment: support for marriage equality, equal adoption rights, anti-discrimination laws, attitude towards a hypothetical LGB supervisor, and whether the respondent thought orientation was fixed or changeable. In general, the ICT increased reporting of what the researchers characterized as stigmatized anti-LGBT attitudes.
The report goes into more detail for the statistics geeks than is practical to repeat here (if you’re one of those people, see the link below). One of the appendices gives the demographic breakdown of the respondents, which makes it clear just how young Turkers are (the median age was 26, versus 37 for the US population as a whole), and how important it is that a similar study be performed on a more representative population.
References
- Gary J. Gates, “How many people are lesbian, gay, bisexual, and transgender?” (Williams Institute, April 2011)
- Katherine B. Coffman, Lucas C. Coffman, and Keith M. Marzilli Ericson, “The size of the LGBT population and the magnitude of anti-gay sentiment are substantially underestimated”, NBER Working Paper 19508 (National Bureau of Economic Research, October 2013)