The arbitrary nature of “HIV” tests was admitted already in the patent filed by Gallo, which stated that “absorbance readings greater than three times the average of four normal negative control readings were taken as positive”. No reason was offered for choosing these particular numbers, for what might constitute “normal negative control” readings, or for why control readings should have any absorbance in the first place. It was reported, however, that under these criteria 43 of 49 AIDS patients tested positive, as did 11 of 14 pre-AIDS patients, 3 of 5 drug users, 6 of 17 gay men, and 4 of 15 “others”; only 1 of 186 “normal controls” and 1 of 164 “normal subjects” tested positive. Any critically minded person might take these numbers as grounds to question whether the patented test can be relied on to identify AIDS-infected people with any certainty, since the sensitivity is apparently less than 90% and there are also at least some false positives.
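For anyone who wants to check the arithmetic, the patent’s cut-off rule and the reported group counts can be sketched in a few lines of Python (the function and variable names here are mine, not the patent’s):

```python
def is_positive(absorbance, negative_controls):
    """Gallo patent rule: a reading is 'positive' if it exceeds three
    times the average of the four 'normal negative control' readings."""
    cutoff = 3 * sum(negative_controls) / len(negative_controls)
    return absorbance > cutoff

# Reported outcomes under that rule, as (positives, number tested):
reported = {
    "AIDS patients":   (43, 49),
    "pre-AIDS":        (11, 14),
    "drug users":      (3, 5),
    "gay men":         (6, 17),
    "others":          (4, 15),
    "normal controls": (1, 186),
    "normal subjects": (1, 164),
}

for group, (pos, n) in reported.items():
    print(f"{group}: {pos}/{n} = {100 * pos / n:.1f}% positive")
```

The first line of that output is the point at issue: 43/49 comes to about 87.8% positive among AIDS patients, i.e. a sensitivity below 90% even in the group the test was designed around.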
The review chapter cited in an earlier post (Weiss & Cowan, “Laboratory detection of human retroviral infection”) also explains why the sensitivity and specificity of any given test are more guesswork than anything else: “In the absence of gold standards, the true sensitivity and specificity for the detection of HIV antibodies remain somewhat imprecise” (p. 150). A non-partisan observer might well translate the impertinent euphemism “somewhat imprecise” into “unknown”.
It seems that proper “HIV” testing would employ “variations in . . . approaches as they pertain to testing patients, persons believed at risk, and screening of blood donors”. Not only that: any given test might not be the most appropriate “in light of the many differences in the serological, immunological, clinical and therapeutic correlates of different HIV strains . . . biologicals derived from specific isolates should specify the strain of origin (e.g. HTLV-IIIB), and tests based upon HIV reagents should reference the precise sources” (p. 147); “certain strains of HIV [are] not well detected by some standard assays (e.g. HIV Group O)” (p. 150). “Each individual assay has its own associated special characteristics and is not interchangeable with other assays, even within a given class of test” (p. 148). Thus the FDA list indicates that approvals of the various test kits are specific for different purposes, for example, “screening and supplemental tests . . . for use with whole blood, serum, plasma, dried blood spots, urine, and/or oral fluid” (p. 150). For each type of test a separate “cut-off” value has to be chosen: the degree of opacity above which a result is classed as “positive” (p. 151).
There is a troubling Alice-in-Wonderland aspect to these tests. For blood screening one wants high sensitivity, which decreases the specificity, whereas for diagnosing individuals one wants high specificity, which means lower sensitivity. Now, because there is no gold standard for these tests, the specificities and sensitivities have to be inferred indirectly. In practice they are stated by the manufacturers of the test kits, not by any independent research (p. 151), and the FDA then “assesses” commercial test kits by relying “upon current blood donors to determine how a given product performs . . . in both high and low risk populations and individuals known to be infected with HIV” (p. 150). The “operative assumption that ‘all blood donors are true negatives’ is false” because “some current donors are HIV-infected”. “This would lead to a paradoxical situation that perfect specificity (no false positives) would be attained only with a test that detected absolutely no [emphasis in original] positives among current blood donors. A test that was never positive would have perfect specificity (but zero sensitivity). This paradox might tempt manufacturers to adjust assays to take advantage of this specificity loophole, leading to undesirable results. Furthermore, the inclusion of true positives that get tabulated as false positives would wrongly underestimate the assay characteristics. Thus, all repeat reactives that come up in the low prevalence population (assumed zero prevalence) are tested further in current clinical trials, and, if shown to be infected by other methodologies, are permitted to be excluded from the specificity calculations. In essence, the control (very low prevalence) group is redefined post-facto to avert the preceding paradox.
If the reclassification as true positive were erroneous — as could occur if there was a condition leading to false reactivity on the screening assay which also led to falsely positive confirmatory assay(s) — there would be a serious problem and circularity in definition. For this reason, the reclassification needs to be done using methodologies as disparate as possible” (p. 161).
Let’s paraphrase this. The claimed sensitivity and specificity of “HIV” tests are “assessed” by how the tests perform on high-prevalence and low-prevalence populations — which have been “found” to be high or low by application of earlier (presumably less satisfactory) versions of some sort of “tests”. But since there is no population with verifiably zero prevalence, blood donors are used as a proxy. However, this proxy is invalid, so some other tests need to be carried out to determine the “true” prevalence in this proxy control group. Since there’s no gold standard, though, this procedure will yield invalid answers if there are conditions other than “HIV infection” that can produce a “positive” on any of the “HIV” tests. To avoid this “serious problem and circularity . . . definition”, “disparate” methodologies should be used.
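The “specificity loophole” and the post-facto reclassification are easy to illustrate with a toy calculation (the donor counts below are invented purely for illustration; the review supplies no such numbers):

```python
# Hypothetical screening round: 10,000 blood donors, of whom 20 are in
# fact infected; the assay flags all 20 plus 30 uninfected donors.
donors = 10_000
truly_infected = 20
false_reactives = 30
flagged = truly_infected + false_reactives  # 50 repeat reactives

# Naive calculation under the (false) assumption that every donor is a
# true negative: every reactive counts against the specificity.
naive_specificity = (donors - flagged) / donors

# Reclassified calculation: the 20 infected donors are "confirmed" by
# other methodologies and excluded from the negative pool, exactly as
# the review describes. If those other methodologies share the same
# false-reactivity conditions, this step is circular.
reclassified_specificity = (donors - truly_infected - false_reactives) / (donors - truly_infected)

print(f"naive:        {naive_specificity:.4f}")
print(f"reclassified: {reclassified_specificity:.4f}")
```

The reclassified figure is always the higher one, which is precisely why the validity of the “other methodologies” carries the whole weight of the exercise.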
In reality, though, all methodologies are suspect, since there’s no gold standard. They attempt to detect either antibodies presumed to be “HIV”-specific (despite actual evidence that they are not) or bits of nucleic acid, again only presumed to be “HIV”-specific, despite actual evidence that they are not: for example, they share some characteristics with certain human endogenous retroviruses (Yang et al., PNAS 96 [23] [1999] 13404-8).
Above all, though, the vitiating circumstance of conditions that can mimic a “true positive” “HIV” test is quite likely to be present in any proxy control group, given the great number of such known conditions (p. 152, Table 8.2):
“False Positive Results
. . . Recognized Problems include:
a. HLA antibodies (. . . poses a diagnostic question for multiparous women and others with repeated HLA exposures). [Here’s one reason why pregnant women, and women who have borne children, test “HIV-positive” at so high a rate — evidently many of them may simply be false positives; how often are the women made aware of that?]
b. Repetitive freeze/thaws (e.g. some stored specimen). [What does this admission mean for claims of having found “HIV” in decades-old samples and thereby tracing the origin of the “epidemic”?]
c. Other retroviruses. [Indeed, the last section of this review describes cross-reactions of “HIV” with HTLV-I and -II. It has also been reported that products of some pro-viral sequences of human endogenous retroviruses (HERVs) can cross-react with products of “HIV” (see Yang et al. cited above). Perhaps Duesberg’s suggestion that “HIV” is a passenger virus is compatible with the notion that “HIV” is an HERV.]
d. Heating of specimen.
e. Autoantibodies . . .
f. Hypergammaglobulinemia, “sticky sera” (e.g. specimens from Africa). [Is this why “HIV” is so endemic/epidemic in Africa?]
g. Cross-reactive proteins (e.g. 25-30 Kd) . . .
h. Non-specific IgM binding (e.g. after vaccination; possibly also related to acute or inflammatory phase responses . . . )”. [In other words, just about any inflammation or vaccination can result in a positive “HIV” test. Indeed, this has been separately and specifically reported for flu vaccinations and anti-tetanus shots.]
Recapitulating:
To validate “HIV” tests in acknowledged absence of a gold standard, procedures are followed that are invalid if those tests pick up conditions other than “HIV infection”. Many such conditions are known. It is therefore highly unlikely that “HIV” tests could be properly validated.
Nevertheless, these “tests” are used to label individuals “HIV infected” whenever the attending clinician has a strong suspicion that this may be the case, for the most important part of “testing” comes before the actual test: the “pre-test probability”. As previously noted, “HIV” tests are self-fulfilling prophecies.
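Why the pre-test probability dominates the meaning of a result is straightforward Bayes’ rule. A sketch, using invented sensitivity and specificity figures (the review, of course, vouches for no such figures, given the absence of a gold standard):

```python
def ppv(sensitivity, specificity, pretest_prob):
    """Positive predictive value via Bayes' rule: the probability that a
    positive result reflects true infection, given the pre-test
    probability the clinician brings to the encounter."""
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

# The same (hypothetical) assay, at three different pre-test probabilities:
for p in (0.50, 0.01, 0.001):
    print(f"pre-test {p:.3f} -> PPV {ppv(0.99, 0.995, p):.3f}")
```

At a pre-test probability of 0.1%, roughly five out of six “positives” from even this nominally excellent assay would be false; at 50%, almost none would be. The identical laboratory result thus means entirely different things for a patient the clinician already suspects and for a random blood donor, which is exactly the self-fulfilling character noted above.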
Clearly, submitting to an “HIV” test is akin to buying a ticket in a lottery whose “prize” is stress, ill health, and iatrogenic harm. Little wonder that Weiss & Cowan emphasize that, because of the uncertainties and implications of “HIV” tests, “a written informed consent procedure in advance of testing was initially recommended in 1985 . . . and is now used by many to document pre-test discussions” (p. 148). One wonders just how numerous those “many” really are; I’ve never seen the procedure mentioned by people who were misdiagnosed and then iatrogenically harmed by antiretroviral drugs, Audrey Serrano, say.
**********************
Weiss and Cowan are to be commended for their detailed, documented, evidently honest review of the state of the art in “HIV” testing. From the viewpoint of academic research, this is an exemplary review. But HIV/AIDS is not an academic matter and it’s not just a research enterprise. Millions of people have been pronounced “HIV-positive” and thereby put in fear of imminent death. Hundreds of thousands at least have been fed toxic drugs as purported treatment for the inferred “infection”. Large numbers of pregnant women, and their babies, have been fed drugs known to cause mitochondrial damage, saddling those babies with a lifelong burden of impaired physiology.
This scale of iatrogenic damage has been done, and continues to be done, by reliance on “tests” known to be invalid. It is a cause for wonder why academic researchers who can so honestly describe the flawed nature of these “tests” did not pen even a single sentence of warning about the consequences of accepting “test” results as valid.