There are many ways of lying under the cover of statistics. One that I’ve not previously emphasized is to imply a correlation where none exists; for example, “the declining incidence in the control group in Rakai — which, although not statistically significant, reduces the difference between the groups” [emphasis added; Gray et al., “Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial”, Lancet, 369 (2007) 657-66].
The whole point of this type of statistical analysis is to determine whether or not an association plausibly exists. If there is no statistically significant association, then no association has been found.
The proper statement would be significantly different:
“The declining incidence apparently had nothing to do with the difference between groups”.
Here’s another example: “The odds of being HIV-positive were nonsignificantly lower among MSM who were circumcised than uncircumcised (odds ratio, 0.86; 95% confidence interval, 0.65-1.13; number of independent effect sizes [k]=15)” (emphasis added; Millett et al., “Circumcision status and risk of HIV and sexually transmitted infections among men who have sex with men”, JAMA, 300 [2008] 1674-84).
The enumeration of odds ratio, confidence interval, and number of effect sizes conveys a sense of technical correctness which, whether intended or not, lends rhetorical weight to the assertion of “lower” when, in actual technical fact, no significant difference has been established at the 95% confidence level.
It is unwarranted, irresponsible, and pseudo-scientific to say “nonsignificantly lower”, because that phrase suggests the odds really are lower, merely not demonstrably so for purely technical statistical reasons.
Again: if the statistical analysis delivers a verdict of “not significant”, then nothing has been established: not lower and not higher. Once more the proper statement would be significantly different:
“No association was found between circumcision and ‘HIV’ status”.
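The point can be made concrete with a little arithmetic on the Millett figures quoted above. The sketch below assumes only the standard convention that a 95% confidence interval for an odds ratio is symmetric on the log-odds scale; every number in it comes from the quotation.

```python
import math

# Figures quoted from Millett et al.: OR = 0.86, 95% CI 0.65-1.13.
lower, estimate, upper = 0.65, 0.86, 1.13

# A 95% CI is conventionally symmetric on the log-odds scale,
# so the standard error can be recovered from the interval's width.
se = (math.log(upper) - math.log(lower)) / (2 * 1.96)

# z-score and two-sided p-value for the null hypothesis OR = 1.
z = math.log(estimate) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p:.2f}")          # p is far above 0.05
print("CI includes 1.0:", lower < 1.0 < upper)
```

Since the interval straddles 1.0 and the implied p-value is roughly 0.28, far above the 0.05 threshold, the analysis licenses only “no association found”, not “lower”.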
The silver lining in these instances, such as it is, is that I have stimulated many belly laughs — though also some very puzzled expressions — by inviting statistically literate friends to explain to me what “nonsignificantly lower” means.
The dark clouds, however, are that these people — who work at the Centers for Disease Control and Prevention, no less — are capable of writing such a phrase. They are either statistically illiterate or seeking deliberately to deceive. I don’t know which of those two would be the more depressing.
It is also worth noting and regretting that these statistical illiteracies passed the editorial- and peer-review processes of the Lancet and the Journal of the American Medical Association. “Peer review” is no better than the reviewers and the editors make it.
*********************
Oxymoronic jargon like “nonsignificantly lower” surely comes about because of an unshakeable belief that there is — must be — a lowering, in the face of data that do not support the belief. There exists a persistent unwillingness among HIV/AIDS mainstreamers to accept facts that contradict their belief — they suffer cognitive dissonance, as I’ve had occasion to remark all too often [Cognitive dissonance: a human condition, 26 December 2008; The debilitating distraction of “HIV”, 21 December 2008; State of HIV/AIDS denial: carcinogenic HAART, 21 November 2008; True Believers of HIV/AIDS: Why do they believe despite the evidence?, 30 October 2008; “SMART” Study begets more cognitive dissonance, 11 June 2008; Death, antiretroviral drugs, and cognitive dissonance, 9 May 2008; HIV/AIDS illustrates cognitive dissonance, 29 April 2008].
Of course, one might try to argue that “95%” is just an arbitrary criterion: one could choose 85%, or 70%, or any other value; or one might say that “lower” is simply expressing the raw numbers in words without attempting statistical analysis to attach a particular probability. But that would mean jettisoning any pretence of being scientific by using statistics to guide judgment as to whether an effect is plausibly real or not. If one offers statistical details then one should also abide by what the statistical analysis concludes and not try to fudge it.
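That fallback argument can be tested on the Millett numbers themselves. A short sketch (again assuming only log-scale symmetry of the quoted interval, with the critical z-value found by bisection on the normal CDF) shows that even relaxing the criterion to 85% or 75% would not turn those figures into a significant “lowering”:

```python
import math

lower, estimate, upper = 0.65, 0.86, 1.13  # Millett et al.
se = (math.log(upper) - math.log(lower)) / (2 * 1.96)

def z_crit(conf):
    """Two-sided critical z for a given confidence level
    (inverse normal CDF by bisection)."""
    target = 0.5 * (1 + conf)   # e.g. 0.975 for a 95% interval
    lo, hi = 0.0, 10.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < target:
            lo = mid
        else:
            hi = mid
    return lo

for conf in (0.95, 0.85, 0.75):
    zc = z_crit(conf)
    lo_ci = math.exp(math.log(estimate) - zc * se)
    hi_ci = math.exp(math.log(estimate) + zc * se)
    print(f"{conf:.0%}: CI {lo_ci:.2f}-{hi_ci:.2f}, excludes 1.0: {hi_ci < 1.0}")
```

Even at the lax 75% level the interval still includes 1.0; no reasonable choice of threshold rescues the claim of “lower”.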
*******************
Another abuse of statistical analysis, one that may not be obvious until made explicit:
Upon finding no correlation, divide the data into sub-groups in the hope that one or another might show an apparently significant effect. This is statistically improper, a prelude to lying with statistics, because if you look at enough sub-groups the probability becomes appreciable that one or a few will appear to have a statistically significant association. Recall that if one uses a criterion as weak as “95% probability”, an apparently but not actually significant association will show up on average at least once in every twenty tests — more often if the looked-for association is inherently unlikely [R. A. J. Matthews, “Significance levels for the assessment of anomalous phenomena”, Journal of Scientific Exploration 13 (1999) 1-7].
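The arithmetic behind that warning is elementary. If no real association exists anywhere, each independent sub-group test at the 5% level still has a 5% chance of a false alarm, so the chance of at least one spurious “finding” grows rapidly with the number of sub-groups examined (the sketch assumes independent tests):

```python
# Probability of at least one spurious "significant" result among m
# independent sub-group tests when no real association exists anywhere.
def familywise_false_positive(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

for m in (1, 5, 10, 15):
    print(f"{m:2d} sub-groups: {familywise_false_positive(m):.0%}")
# With 15 sub-groups the chance of a spurious "finding" exceeds one in two.
```

So a sub-group “hit” dredged from a study that found no overall association is exactly what chance alone predicts.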
In the present instance, there was no association in the sub-group practicing insertive anal sex, nor between circumcision and sexually transmitted infections, two sub-groups where an association would not be implausible. On the other hand, highly implausible apparent associations were noted in studies conducted before the introduction of HAART, and between “HIV”-preventive circumcision and study quality. It is not easy to conceive why an association between circumcision and “HIV” acquisition would have anything at all to do with what treatment is provided to people who have AIDS, long after acquiring “HIV”; and “study quality” is a highly subjective variable.
No. The Millett article leads to only one legitimate conclusion: No association found between circumcision and “HIV” status among MSM.
*******************
The problem for HIV/AIDS dogmatists is that they have failed to find any way of preventing people from becoming “HIV-positive”. The mistaken view that it has to do with infection and with sex keeps them searching for data to support that view, rather as rats or guinea pigs are doomed to try eternally to scale the turning wheels in their cages. Study after study gives the same result, no association. At the 4th International AIDS Society Conference, Sydney 2007:
Guanira et al., “How willing are gay men to ‘cut off’ the epidemic? Circumcision among MSM in the Andean region”
— “No association between circumcision and HIV infection when all the sample is included. A trend to a significant protective effect is seen when only ‘insertive’ are analyzed.”
Note again the unwarranted, illegitimate attempt to assert something despite the lack of evidence: a “trend” toward a significant effect, when the statistical analysis simply says “nothing”, no correlation.
Then there was Templeton et al., “Circumcision status and risk of HIV seroconversion in the HIM cohort of homosexual men in Sydney”
— “Circumcision status was not associated with HIV seroconversion . . . . However, further research in populations where there is more separation into exclusively receptive or insertive sexual roles by homosexually active men is warranted” [emphasis added].
More research is always warranted, of course, that’s what pays the researchers’ bills [Inventing more epidemics; the Research Trough; and “peer review”, 2 August 2009; The Research Trough — where lack of progress brings more grants, 10 September 2008].