HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


CDC’s “model” assumptions (Science Studies 103a)

Posted by Henry Bauer on 2008/09/06

An earlier post showed that the CDC’s model of HIV/AIDS is simply wrong; it yields estimates of HIV infections that are contrary to repeatedly published data for at least the first decade of the AIDS era. A wade through that article describing the CDC’s latest estimates (Hall et al., “Estimation of HIV incidence in the United States”, JAMA, 300 [2008] 520-529) is less than reassuring about what CDC does and how it does it, albeit enlightening in a head-spinning journey-through-wonderland sort of way.

To take a relevant non-technical matter first:
“the CDC determined in 2005 and again in 2007 that HIV incidence surveillance is not a research activity and therefore does not require review by an institutional review board” — at least in part, presumably, because this is “routine HIV and AIDS surveillance.”
That determination was based on specified rules and regulations and evidently satisfied bureaucratic requirements, but it’s nevertheless nonsense, in an important way. When something is described as “research”, most everyone understands that there’s a degree of uncertainty attached to the output. When, on the other hand, something is NOT research, just routine surveillance, then there’s a clear implication that the outputs are factual and trustworthy. Slogging through the details of how the calculations are made shows quite convincingly, however, that one would be foolish to place much reliance on any resulting claims — even leaving aside that, as shown earlier, those outputs are at variance with published data from official and peer-reviewed sources stretching over more than a decade.
Why not have an institutional review board look at this activity? Well, perhaps such a review would consider the associated ethical issues, since human subjects are involved. Have they given informed consent? What are the consequences for a person who is told that an HIV infection is not only present but happens to be recent? How would it affect that person’s intimate relations? And so on. A bag of worms, best left unopened. You never know, no matter how carefully you choose members for such review boards, a troublemaker might slip through the vetting process.


The article’s conclusions imply a degree of certainty that’s entirely unwarranted:
“This study provides the first direct estimates of HIV incidence in the United States using laboratory technologies previously implemented only in clinic-based settings.”
What is “direct” meant to convey, if not trustworthiness? Yet those estimates are anything but direct, given the avalanche of assumptions that goes into them.

The rationale for this research-that-isn’t-research is that “the incidence of HIV infection in the United States has never been directly measured”. True, because it couldn’t be, since acquiring “HIV infection” brings no symptoms with it. However, there have been multitudes of direct measurements of HIV prevalence; and that, together with deaths from AIDS whose reporting is legally mandated, permits calculation of incidence. As shown in the earlier post, those actual calculations demonstrate that these new “direct estimates of incidence” are dead wrong.

The crucial mistake in CDC’s models is, of course, the assumption that HIV causes AIDS. That leads to the further assumption that the incidence of HIV can be “back-calculated” from the incidence of AIDS diagnoses. Even were the first assumption correct, back-calculation would require everything to be known about the course of “HIV disease” following infection. Given that there must be individual differences, and that any one of some 30 diseases or conditions might be the manifestation of “HIV disease”, that’s impossible; therefore, another avalanche of interlocking assumptions blankets the model.

These considerations in themselves ought to be enough to vitiate the whole approach, but yet more assumptions are piled on. Possibly the most critical is the “new method” for determining whether infections are recent or not. The basic concept was described (for example) ten years ago in Janssen et al., “New testing strategy to detect early HIV-1 infection for use in incidence estimates and for clinical and prevention purposes”, JAMA 280 (1998) 42-8: it’s assumed that recent infections will be detectable by a sensitive antibody test and less recent ones will be detectable by a less sensitive antibody test. It’s long been accepted that it takes a matter of weeks or months after infection before tests can pick up HIV antibodies; so, the idea is, the levels of antibodies increase at not too rapid a rate, and using simultaneous sensitive and less sensitive assays can distinguish relatively new from relatively old infections. (Analogous earlier suggestions include Brookmeyer et al., American Journal of Epidemiology 141 [1995] 166-72 and Parekh et al., AIDS Research and Hum Retroviruses 18 [2002] 295-307.)
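The core of the dual-assay idea can be put as a one-line decision rule. Here is a minimal sketch, with a hypothetical function name and labels of my own (the papers cited describe the logic, not this code):

```python
def classify_by_dual_assay(sensitive_reactive: bool, less_sensitive_reactive: bool) -> str:
    """Sketch of the sensitive/less-sensitive testing logic: antibody levels
    are assumed to rise slowly enough that a recent infection reacts on the
    sensitive assay but not yet on the less sensitive one."""
    if not sensitive_reactive:
        return "no detectable infection"   # pre-seroconversion or uninfected
    if less_sensitive_reactive:
        return "long-standing infection"   # both assays reactive
    return "recent infection"              # sensitive assay only
```

Note that the rule’s validity rests entirely on the assumed ordering of seroconversion on the two assays — which is exactly one of the assumptions the Janssen et al. extract quoted below makes explicit.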

I invite — no, I urge interested parties to read the Janssen et al. paper. I can’t post the whole thing since it’s copyrighted by the American Medical Association, but here’s a “fair use” extract to give a taste of the proliferation of approximations and assumptions:

“We estimated distribution and mean time between seroconversion on the 3A11 assay and the 3A11-LS assay using a mathematical model . . . with a variety of cutoffs. To estimate time between seroconversion on the 2 assays, we assumed a progressive increase in antibody during early infection, producing for each subject a well-defined time on each assay before which results would be nonreactive and after which results would be reactive [but bear in mind that HIV antibody tests don’t give a definitive “yes/no” — a “well-defined time” was CHOSEN]; seroconversion time on the 3A11 assay was uniformly distributed [that is, the assumption of uniform distribution was made part of the model] between time of the last 3A11 nonreactive specimen and the time of the first 3A11 reactive specimen; 3A11-LS assay seroconversion occurred no earlier than 3A11 assay seroconversion [assumption: the less sensitive test could not be positive unless the more sensitive one was]; and time difference between seroconversion on the 3A11 and 3A11-LS assays was [assumed to be] independent of seroconversion time on the 3A11 assay. We modeled time between seroconversions using a discrete distribution that assigned a probability to each day from 0 to 3000 days, estimated by maximum likelihood based on observed data on times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays, using an EM algorithm approach.29 A smoothing step was added to the algorithm30 to speed convergence and produce smooth curves; a kernel smoother with a triangular kernel was used with bandwidth (h) of 20 days. Mean times between 3A11 and 3A11-LS seroconversion were largely invariant for the range of smoothing bandwidths we considered (0 ≤ h ≤ 100).
Confidence intervals (CIs) for mean time between seroconversions were obtained using the bootstrap percentile method.31 Day of 3A11 assay seroconversion was estimated from the model conditional on observed times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays and using estimated distribution of times between seroconversions. To assess ability of the testing strategy to accurately classify specimens obtained within 129 days of estimated day of 3A11 seroconversion and to correct for multiple specimens provided by subjects, we calculated the average proportion of each person’s specimens with 3A11 reactive/3A11-LS nonreactive results obtained in that period” [emphases added].
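A toy rendering of the interval-censoring step described in the extract, using invented specimen dates and function names of my own: the seroconversion day is drawn uniformly between the last nonreactive and first reactive specimen, and a bootstrap-percentile interval is put around the mean. Nothing here reproduces the actual EM-plus-smoothing machinery; it only illustrates the uniformity assumption at work.

```python
import random

random.seed(0)

# Invented (last nonreactive day, first reactive day) pairs for five subjects;
# the true seroconversion day is only known to lie inside each interval.
intervals = [(0, 40), (10, 90), (30, 60), (5, 120), (20, 70)]

def mean_seroconversion_day(pairs):
    # Model assumption from the quote: uniform distribution within each interval.
    draws = [random.uniform(lo, hi) for lo, hi in pairs]
    return sum(draws) / len(draws)

# Bootstrap percentile method: resample subjects, recompute the mean each time.
boot_means = sorted(
    mean_seroconversion_day([random.choice(intervals) for _ in intervals])
    for _ in range(2000)
)
ci_low = boot_means[int(0.025 * len(boot_means))]
ci_high = boot_means[int(0.975 * len(boot_means))]
```

Every number that comes out depends on the within-interval uniformity assumption; assume a different distribution inside each interval and both the mean and the confidence interval shift.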

Now, I’m not suggesting that there’s anything untoward about RESEARCH along these lines; quite the contrary, it’s commendable that researchers lay out all the assumptions they make so that other researchers can mull over them and decide which ones were not good and should be modified, as work continues in the attempt to develop an adequate model. What’s inappropriate is that the outputs of such highly tentative guesswork morph over time into accepted shibboleths. The CDC’s recent revision of estimates accepts this approach as valid even while admitting that it had been found to give obviously wrong results in Africa and Thailand, namely, “the misclassification of specimens as recent among persons with long-term HIV infection or AIDS, which overestimates the proportion of specimens classified as recent”. Outsiders might draw the conclusion that there’s something basically wrong and that the approach needs refining, certainly before it gets applied in ways that lead to public announcements that spur politicians into misguided action, say, that medical insurance be required to cover the costs of routine HIV tests. (Researchers, on the other hand, merely note such failures and press on with modifications that might decrease the likelihood of misleading results.)

So: Hall et al. begin with the assumption that HIV causes AIDS. They add the corollary that HIV incidence can be back-calculated from AIDS diagnoses, which requires additional assumptions about the time between HIV infection and AIDS — not just the average “latent period”, but how the latent period is distributed: is it a normal bell-curve distribution around a mean of 10 years? Or is it perhaps a Poisson distribution skewed toward longer times? Or something else again? The fact that the precise time of infection cannot be determined, only estimated on the basis of yet further assumptions, makes this part of the procedure inherently doubtful.
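How much the assumed latent-period distribution matters can be shown with a toy forward simulation, using invented numbers and a normal (bell-curve) latency. Back-calculation is the attempt to invert exactly this kind of mapping, so whatever spread is assumed drives the answer:

```python
import random

random.seed(1)

def simulate_diagnoses(infections_per_year, mean_latency, sd_latency, years=30):
    """Forward-simulate annual AIDS diagnoses from an assumed infection curve
    and an assumed normal latent-period distribution (truncated at zero)."""
    diagnoses = [0] * years
    for year, count in enumerate(infections_per_year):
        for _ in range(count):
            latency = max(0.0, random.gauss(mean_latency, sd_latency))
            diagnosis_year = year + int(latency)
            if diagnosis_year < years:
                diagnoses[diagnosis_year] += 1
    return diagnoses

infections = [100] * 15 + [0] * 15   # hypothetical flat infection curve
tight = simulate_diagnoses(infections, mean_latency=10, sd_latency=1)
wide = simulate_diagnoses(infections, mean_latency=10, sd_latency=5)
# Same infection history, different assumed spread, different diagnosis curves:
# inverting a diagnosis curve back to infections inherits exactly that ambiguity.
```

Two models that agree on the 10-year mean but differ on the spread produce visibly different diagnosis curves from the same infections, so the inverse problem cannot distinguish them from diagnosis data alone.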

Heaped on top of these basic uncertainties are more specific ones pertaining to the recently revised estimate of HIV infections for the whole United States. The data actually used came from only 22 States. Of an estimated 39,400 newly diagnosed HIV-positives in 2006, 6864 were tested with the assay that had proved unreliable in Africa and Thailand, and 2133 of these were classified as recent infections, which led by extrapolation to an estimated 56,300 new infections in 2006 in the United States as a whole.
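The raw proportion behind that extrapolation is easy to lay out. The step from this fraction to the published 56,300 involves assay adjustment factors and sampling weights not reproduced here, so this is only the arithmetic skeleton, with the figures taken from the text:

```python
newly_diagnosed = 39_400    # estimated newly diagnosed HIV-positives, 22 states, 2006
tested_with_assay = 6_864   # subset tested with the recency assay
classified_recent = 2_133   # of those, classified as recent infections

recent_fraction = classified_recent / tested_with_assay
print(f"fraction classified recent: {recent_fraction:.3f}")  # ≈ 0.311
```

Roughly 31% of the tested subset was classified as recent by an assay already known to misclassify long-term infections as recent, and everything downstream scales with that fraction.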

This 2008 publication asserts that the Janssen et al. approach “now makes it possible to directly measure HIV incidence”, citing articles published in 1995, 1998, and 2002. It refers to “new technology” and a “new system”, citing the 2002 article in conjunction with “R. H. Byers, PhD, unpublished data, July 2005”. A further assumption is the criterion that “a normalized optical density of less than 0.8 on the BED assay . . . [means that] the source patient is considered recently infected”. This hodge-podge is made to appear scientifically reliable by christening it “the serologic testing algorithm for recent HIV seroconversion (STARHS)”, citing Janssen et al. (published in 1998, remember).
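The BED criterion quoted above amounts to a single threshold test on normalized optical density. A sketch, with a function name of my own (the 0.8 cutoff is from the text):

```python
def bed_calls_recent(normalized_od: float, cutoff: float = 0.8) -> bool:
    """Per the quoted criterion: a normalized optical density below 0.8 on the
    BED assay leads to the source patient being considered recently infected."""
    return normalized_od < cutoff
```

The choice of 0.8 is itself one of the assumptions being catalogued here; moving the cutoff moves the count of “recent” infections directly.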

The public call-to-arms about 56,300 new infections was based on this STARHS approach, fortified by an “extended back-calculation” yielding 55,400 infections per year during 2003-6, the back-calculation being based on “1.230 million HIV/AIDS cases reported by the end of 2006”.

Once again: Researchers can be properly pleased when two approaches yield nearly the same result, 56,300 and 55,400. It means that what they’re doing is self-consistent.

But self-consistent doesn’t mean correct, true to reality. Outsiders might note, however, and policy makers badly need to note, that both approaches are based on the same basic assumptions, namely, that HIV entered the USA in the late 1970s and that HIV causes AIDS. But those assumptions are at glaring odds with a number of facts.

For one, the report that first led me to look at HIV-test data: that in the mid-1980s, teenaged females from all around the country were testing HIV-positive at the same rate as their male peers. In other words, a sexual infection that got its foothold around 1980 among gay men and shortly thereafter in injecting drug users had, within a few years, become distributed throughout the whole United States to the stage that teenagers planning to go into military service, and therefore rather unlikely to have been heavily into drug abuse or unsafe sex with gay men in large cities, would have already caught this lethal bug. Not only that: although this infectious disease-causing agent was already so pervasively distributed around the country, the disease itself was not.

That early publication (Burke et al., JAMA 263 [1990] 2074-7) also reported that the greatest prevalence of HIV-positive was NOT in the places where AIDS was most to be found; the male-to-female rates of HIV-positive were nothing like those for AIDS; and testing HIV-positive was more likely for black youngsters from regions with little AIDS than for white youngsters from regions with much AIDS.

No more should have been needed, one might well suggest, to scotch once and for all the mistaken connection between AIDS and HIV-positive. Instead, we are now inundated by houses of cards held together by a proliferation of assumptions modified ad hoc, all preventing research on the really pressing matters:

What does testing HIV-positive mean in the case of each individual? What should people do, who are told they are HIV-positive? What is the best treatment for people presenting with opportunistic infections?

Posted in experts, HIV absurdities, HIV and race, HIV does not cause AIDS, HIV risk groups, HIV skepticism, HIV tests, HIV transmission, HIV/AIDS numbers, M/F ratios, sexual transmission | 2 Comments »
