My take?
The overall position is conservative in the sense that it avoids any definitive statement of efficacy or harm, admitting the paucity of evidence. What disturbs me is the pervasive bias in the way the data are discussed, regardless of the final, almost reluctant, conclusions. This bias bleeds through most often as imprecision and equivocation, but occasionally as blatant inconsistency in the standards against which the evidence is measured (or in whether it is examined at all).
I don't have the time to trot out a lot of examples (or even read every word of the document), but here are a few passages with comments.
We see this multiculturally competent and affirmative approach as grounded in an acceptance of the following scientific facts:
- Same-sex sexual attractions, behavior, and orientations per se are normal and positive variants of human sexuality—in other words, they do not indicate either mental or developmental disorders.
What the task force here calls "scientific fact" is actually consensus opinion, and there's a big difference. Many studies have undermined the long-prevailing belief that homosexuality is, or is associated with, mental illness. That much I can swallow (but only on a provisional basis). But, as I've mentioned on this blog before, it's odd to me that a discussion of "human sexuality" can so thoroughly and emphatically ignore reproduction as a significant part of the equation. If one assumes, as the task force apparently does, that ejaculating and having viable sperm are all that is necessary to give the thumbs up on normal reproductive capability, then perhaps their consensus statement (which is not a fact) is defensible. However, I beg to differ.
Gay men, lesbians, and bisexual individuals form stable, committed relationships and families that are equivalent to heterosexual relationships and families in essential respects.
This is presented as another "scientific fact". What I think they meant to say is that these folks form relationships that have not been shown to differ, in statistically significant ways, from heterosexual relationships and families in the essential respects that have actually been examined. That is not even close to the same thing. Maybe the studies satisfy non-inferiority criteria (with margins that are subjectively chosen). Maybe there is statistical significance for the subjective answers to survey questions, but many "essential respects" are not so easily measured, and failing to show a difference is not the same as showing equivalence. They don't bother footnoting this statement, so presumably there's some great-quality data behind it... but moving from great-quality data to proclamations of unequivocal "fact" is a leap I highly doubt I would support after reviewing the relevant literature.
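To make that distinction concrete, here's a minimal sketch (in Python, with entirely made-up numbers that have nothing to do with the studies the task force reviewed) of the gap between "we failed to detect a difference" and "we demonstrated equivalence," using two one-sided tests (TOST). The scale, margin, and sample sizes are all invented for illustration.

```python
# Toy contrast between "no significant difference" and "demonstrated
# equivalence" (TOST). All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two small, noisy samples on a hypothetical "relationship quality" scale.
group_a = rng.normal(loc=70.0, scale=15.0, size=25)
group_b = rng.normal(loc=65.0, scale=15.0, size=25)

# Standard two-sided t-test: a large p-value only means we failed to detect
# a difference; it does not show the groups are the same.
t_stat, p_diff = stats.ttest_ind(group_a, group_b)
print(f"difference test:         p = {p_diff:.3f}")

# Equivalence via TOST: equivalence is claimed only if the difference is
# shown to lie inside a pre-specified margin (here +/- 5 points, arbitrary).
margin = 5.0
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) +
             group_b.var(ddof=1) / len(group_b))
df = len(group_a) + len(group_b) - 2
p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
p_tost = max(p_lower, p_upper)
print(f"equivalence test (TOST): p = {p_tost:.3f}")
```

With samples this small and noisy, it's routine to get a non-significant difference test while the equivalence test also fails; "we didn't find a difference" and "they are equivalent" are different claims that require different evidence.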
...few studies on SOCE produced over the past 50 years of research rise to current scientific standards for demonstrating the efficacy of psychological interventions...
Few studies of anything produced more than a few years ago rise to current scientific standards. They can still inform, even if they can't prove. Because these studies weren't conducted as randomized controlled clinical trials, they can't show us what we'd like to know, but I don't think the task force is right when it says: "there is little in the way of credible evidence that could clarify whether SOCE does or does not work in changing same-sex sexual attractions." The evidence that is presented is what it is. Just because it's not the kind of rigorous science that would demonstrate causality doesn't mean it's not "credible"! If the researchers were found to have manipulated data, there would be a credibility problem; but as the data stand, they simply give us very limited, albeit legitimate, evidence.
Interestingly, a footnote briefly mentions a Nicolosi study that was not included in the task force's consideration because it was published after the review period and "appeared" to be a reworking of an earlier study. I haven't read Nicolosi's 2008 study, but if it provided any new information that met "current standards" in a way that nothing else does, perhaps they could have extended the review period, since the scarcity of data is the whole point. And if it was a reworking of an earlier study, that's all the more reason to suspect it was reworked specifically to address criticisms of its methodology or presentation. In other words, the task force laments having no "credible" data but can't be bothered to look at the most recent data, even though it was published a year before this report.
White men continue to dominate recent study samples. Thus, the research findings from early and recent studies may have limited applicability to non-Whites, youth, or women.
This is certainly true. So is this: old people continue to dominate the cancer literature, so research findings may have limited applicability to young people. The trick is that most people with cancer are old, just as most people who seek out SOCE are white males. So it's okay to accept that there's value in the data even if it's not completely generalizable. The population that has been studied happens to be the vast majority of those for whom this research will make any difference.
In general, the results from studies indicate that while some people who undergo SOCE do engage in other-sex sexual behavior afterward, the balance of the evidence suggests that SOCE is unlikely to increase other-sex sexual behavior.
Again, this is true. So is this: chemotherapy for breast cancer will give no benefit to the majority of patients but will give toxicity to all of them. The trick is, I don't care what happens to the "majority"; I want to know quantitatively whether the rate of other-sex sexual behavior differed with the therapy (if not demonstrated causally, at least temporally). And it sounds like there was a quantitative difference, even a significant one. But I wouldn't know from this report, because they stick with vague dismissals like the quote above.
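For what it's worth, here is the kind of quantitative summary I'd rather see, sketched in Python. The counts are invented purely for illustration; I have no idea what the real numbers in the reviewed studies were.

```python
# Made-up example of quantitatively comparing rates of other-sex sexual
# behavior before and after treatment (two-proportion z-test). The counts
# are invented; they are not from any actual SOCE study.
import numpy as np
from scipy import stats

before_events, before_n = 12, 100   # hypothetical pre-treatment: 12/100
after_events, after_n = 25, 100     # hypothetical post-treatment: 25/100

p1 = before_events / before_n
p2 = after_events / after_n
pooled = (before_events + after_events) / (before_n + after_n)
se = np.sqrt(pooled * (1 - pooled) * (1 / before_n + 1 / after_n))
z = (p2 - p1) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"rate before: {p1:.0%}, rate after: {p2:.0%}")
print(f"difference: {p2 - p1:+.0%}, z = {z:.2f}, p = {p_value:.3f}")
```

A summary like that (actual rates, an effect size, and a p-value or confidence interval) would tell a reader far more than "unlikely to increase other-sex sexual behavior," even if the design still couldn't establish causality.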
Two participants reported experiencing severe depression, and 4 others experienced milder depression during treatment. No other experimental studies reported on iatrogenic effects.
Whoa. Suddenly, whatever participants experience in association with treatment can be automatically causally linked to it. Well, hey, we moved on to harms, so the rules of scientific rigor have all changed. These cases of depression are "iatrogenic". Umm... how do you know? Although the task force does go on to admit that there is no causal attribution for harms or benefits, they refer to "some evidence" of harm repeatedly throughout the report while adamantly holding that there is "no credible evidence" of benefit.
We recommend that APA take a leadership role in opposing the distortion and selective use of scientific data about homosexuality by individuals and organizations and in supporting the dissemination of accurate scientific and professional information about sexual orientation in order to counteract bias.
Ah. Here's something I totally agree with. I just wish they'd followed their own advice. I couldn't find it just now while rescanning, but there's a great gem in there where the task force refers to itself as an example of an authoritative and reliable source of scientific information. Ha. Or... maybe individuals can actually go ahead and critically examine things for themselves, since science isn't a religion and appeals to authority are both unnecessary and fallacious. A scientist ought to know that.