HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


CDC’s “model” assumptions (Science Studies 103a)

Posted by Henry Bauer on 2008/09/06

An earlier post showed that the CDC’s model of HIV/AIDS is simply wrong; it yields estimates of HIV infections that are contrary to repeatedly published data for at least the first decade of the AIDS era. A wade through that article describing the CDC’s latest estimates (Hall et al., “Estimation of HIV incidence in the United States”, JAMA, 300 [2008] 520-529) is less than reassuring about what CDC does and how it does it, albeit enlightening in a head-spinning journey-through-wonderland sort of way.

To take a relevant non-technical matter first:
“the CDC determined in 2005 and again in 2007 that HIV incidence surveillance is not a research activity and therefore does not require review by an institutional review board” — at least in part, presumably, because this is “routine HIV and AIDS surveillance.”
That determination was based on specified rules and regulations and evidently satisfied bureaucratic requirements, but it’s nevertheless nonsense, in an important way. When something is described as “research”, most everyone understands that there’s a degree of uncertainty attached to the output. When, on the other hand, something is NOT research, just routine surveillance, then there’s a clear implication that the outputs are factual and trustworthy. Slogging through the details of how the calculations are made shows quite convincingly, however, that one would be foolish to place much reliance on any resulting claims — even leaving aside that, as shown earlier, those outputs are at variance with published data from official and peer-reviewed sources stretching over more than a decade.
Why not have an institutional review board look at this activity? Well, perhaps such a review would consider the associated ethical issues, since human subjects are involved. Have they given informed consent? What are the consequences for a person who is told that an HIV infection is not only present but happens to be recent? How would it affect that person’s intimate relations? And so on. A bag of worms, best left unopened. You never know, no matter how carefully you choose members for such review boards, a troublemaker might slip through the vetting process.

——————-

The article’s conclusions imply a degree of certainty that’s entirely unwarranted:
“This study provides the first direct estimates of HIV incidence in the United States using laboratory technologies previously implemented only in clinic-based settings.”
What do “direct” estimates seek to convey, if not trustworthiness? Yet those estimates are anything but direct, given the avalanche of assumptions that goes into them.

The rationale for this research-that-isn’t-research is that “the incidence of HIV infection in the United States has never been directly measured”. True, because it couldn’t be, since acquiring “HIV infection” brings no symptoms with it. However, there have been multitudes of direct measurements of HIV prevalence; and that, together with deaths from AIDS whose reporting is legally mandated, permits calculation of incidence. As shown in the earlier post, those actual calculations demonstrate that these new “direct estimates of incidence” are dead wrong.
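The arithmetic behind that claim, that incidence can be recovered from prevalence surveys plus mandated death reports, can be sketched in a few lines. The figures below are hypothetical, chosen purely to show the bookkeeping; they are not the published numbers.

```python
# Sketch of inferring incidence from prevalence measurements plus
# reported deaths. All figures are hypothetical illustrations.

def implied_incidence(prev_start, prev_end, deaths):
    """New infections implied over a period: the change in prevalence
    plus the deaths that removed people from the prevalent pool
    (ignoring migration and other exits, for simplicity)."""
    return (prev_end - prev_start) + deaths

# Hypothetical: prevalence measured at 900,000 and later at 950,000,
# with 40,000 AIDS deaths reported in between.
print(implied_incidence(900_000, 950_000, 40_000))  # 90000
```

Even a flat prevalence curve implies ongoing new infections whenever deaths are occurring, which is why prevalence plus deaths suffices to pin down incidence.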

The crucial mistake in CDC’s models is, of course, the assumption that HIV causes AIDS. That leads to the further assumption that the incidence of HIV can be “back-calculated” from the incidence of AIDS diagnoses. Even were the first assumption correct, back-calculation would require everything to be known about the course of “HIV disease” following infection. Given that there must be individual differences, and that any one of some 30 diseases or conditions might be the manifestation of “HIV disease”, that’s impossible; therefore, another avalanche of interlocking assumptions blankets the model.

These considerations in themselves ought to be enough to vitiate the whole approach, but yet more assumptions are piled on. Possibly the most critical is the “new method” for determining whether infections are recent or not. The basic concept was described (for example) ten years ago in Janssen et al., “New testing strategy to detect early HIV-1 infection for use in incidence estimates and for clinical and prevention purposes”, JAMA 280 (1998) 42-8: it’s assumed that recent infections will be detectable by a sensitive antibody test and less recent ones will be detectable by a less sensitive antibody test. It’s long been accepted that it takes a matter of weeks or months after infection before tests can pick up HIV antibodies; so, the idea is, the levels of antibodies increase at not too rapid a rate, and using simultaneous sensitive and less sensitive assays can distinguish relatively new from relatively old infections. (Analogous earlier suggestions include Brookmeyer et al., American Journal of Epidemiology 141 [1995] 166-72 and Parekh et al., AIDS Research and Hum Retroviruses 18 [2002] 295-307.)
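For readers who want the dual-assay idea in a nutshell, here is a schematic of the decision logic only. Real assays return optical densities compared against chosen cutoffs, not clean yes/no answers, which is precisely where the assumptions enter.

```python
# Schematic of the sensitive / less-sensitive ("detuned") classification
# described above. Real assays yield optical densities, not booleans;
# this captures only the decision logic, not the measurement.

def classify(sensitive_reactive: bool, less_sensitive_reactive: bool) -> str:
    if not sensitive_reactive:
        return "uninfected (or pre-seroconversion)"
    if less_sensitive_reactive:
        return "long-standing infection"
    # Reactive on the sensitive assay but not yet on the less sensitive
    # one: antibody levels are assumed to still be rising, so the
    # infection is classified as recent.
    return "recent infection"

print(classify(True, False))  # recent infection
```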

I invite — no, I urge interested parties to read the Janssen et al. paper. I can’t post the whole thing since it’s copyrighted by the American Medical Association, but here’s a “fair use” extract to give a taste of the proliferation of approximations and assumptions:

“We estimated distribution and mean time between seroconversion on the 3A11 assay and the 3A11-LS assay using a mathematical model . . . with a variety of cutoffs. To estimate time between seroconversion on the 2 assays, we assumed a progressive increase in antibody during early infection, producing for each subject a well-defined time on each assay before which results would be nonreactive and after which results would be reactive [but bear in mind that HIV antibody tests don’t give a definitive “yes/no” — a “well-defined time” was CHOSEN]; seroconversion time on the 3A11 assay was uniformly distributed [that is, the assumption of uniform distribution was made part of the model] between time of the last 3A11 nonreactive specimen and the time of the first 3A11 reactive specimen; 3A11-LS assay seroconversion occurred no earlier than 3A11 assay seroconversion [assumption: the less sensitive test could not be positive unless the more sensitive one was]; and time difference between seroconversion on the 3A11 and 3A11-LS assays was [assumed to be] independent of seroconversion time on the 3A11 assay. We modeled time between seroconversions using a discrete distribution that assigned a probability to each day from 0 to 3000 days, estimated by maximum likelihood based on observed data on times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays, using an EM algorithm approach.29 A smoothing step was added to the algorithm30 to speed convergence and produce smooth curves; a kernel smoother with a triangular kernel was used with bandwidth (h) of 20 days. Mean times between 3A11 and 3A11-LS seroconversion were largely invariant for the range of days for smoothing bandwidths we considered (0 ≤ h ≤ 100).
Confidence intervals (CIs) for mean time between seroconversions were obtained using the bootstrap percentile method.31 Day of 3A11 assay seroconversion was estimated from the model conditional on observed times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays and using estimated distribution of times between seroconversions. To assess ability of the testing strategy to accurately classify specimens obtained within 129 days of estimated day of 3A11 seroconversion and to correct for multiple specimens provided by subjects, we calculated the average proportion of each person’s specimens with 3A11 reactive/3A11-LS nonreactive results obtained in that period” [emphases added].

Now, I’m not suggesting that there’s anything untoward about RESEARCH along these lines; quite the contrary, it’s commendable that researchers lay out all the assumptions they make so that other researchers can mull over them and decide which ones were not good and should be modified, as work continues in the attempt to develop an adequate model. What’s inappropriate is that the outputs of such highly tentative guesswork morph over time into accepted shibboleths. The CDC’s recent revision of estimates accepts as valid this approach even while admitting that it had been found to give obviously wrong results in Africa and Thailand, namely, “the misclassification of specimens as recent among persons with long-term HIV infection or AIDS, which overestimates the proportion of specimens classified as recent”. Outsiders might draw the conclusion that there’s something basically wrong and that the approach needs refining; certainly before it gets applied in ways that lead to public announcements that spur politicians into misguided action, say, that medical insurance be required to cover the costs of routine HIV tests.   (Researchers, on the other hand, merely note such failures and press on with modifications that might decrease the likelihood of misleading results.)

So: Hall et al. begin with the assumption that HIV causes AIDS. They add the corollary that HIV incidence can be back-calculated from AIDS diagnoses, which requires additional assumptions about the time between HIV infection and AIDS — not just the average “latent period”, but how the latent period is distributed: is it a normal bell-curve distribution around a mean of 10 years? Or is it perhaps a Poisson distribution skewed toward longer times? Or something else again? The fact that the precise time of infection cannot be determined, only estimated on the basis of yet further assumptions, makes this part of the procedure inherently doubtful.
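The dependence on the assumed latent-period distribution can be made concrete with a toy calculation. The forward model convolves past infections with an assumed latency distribution to predict AIDS diagnoses; back-calculation inverts that. The numbers below are invented; the point is only that changing the assumed distribution changes the diagnosis curve, and hence the inferred infection curve.

```python
# Toy forward model: expected AIDS diagnoses as a convolution of past
# infections with an assumed latency distribution. Back-calculation
# inverts this, so a different assumed distribution yields a different
# inferred infection history. All numbers are hypothetical.

def expected_diagnoses(infections, latency_pmf):
    """infections[t]: new infections in year t.
    latency_pmf[d]: assumed probability of progressing to AIDS
    d years after infection. Returns expected diagnoses per year."""
    T = len(infections)
    diag = [0.0] * T
    for t in range(T):
        for d, p in enumerate(latency_pmf):
            if t + d < T:
                diag[t + d] += infections[t] * p
    return diag

# Identical infection curve, two different assumed latency distributions:
inf = [100, 100, 100, 100, 100]
short = [0.0, 0.5, 0.5]        # mean latency ~1.5 years
long_ = [0.0, 0.0, 0.5, 0.5]   # mean latency ~2.5 years
print(expected_diagnoses(inf, short))  # [0.0, 50.0, 100.0, 100.0, 100.0]
print(expected_diagnoses(inf, long_))  # [0.0, 0.0, 50.0, 100.0, 100.0]
```

Running the inversion against the wrong latency assumption would attribute the same diagnoses to a quite different infection history, which is the doubt the text raises.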

Heaped on top of these basic uncertainties are more specific ones pertaining to the recently revised estimate of HIV infections for the whole United States. The data actually used came from only 22 States. Of an estimated 39,400 newly diagnosed HIV-positives in 2006, 6864 were tested with the assay that had proved unreliable in Africa and Thailand, and 2133 of these were classified as recent infections, which led by extrapolation to an estimated 56,300 new infections in 2006 in the United States as a whole.
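The core of the extrapolation arithmetic, stripped of the assay adjustment factors and state-level weighting used in the published method, looks like this; it does not (and is not meant to) reproduce the 56,300 figure.

```python
# Simplified sketch of the extrapolation step. The published method
# (Hall et al. 2008) applies assay-specific adjustment factors and
# weights across states; this shows only the shape of the arithmetic.

tested_with_assay = 6864   # new diagnoses tested with the BED assay
classified_recent = 2133   # of those, classified as recent infections

fraction_recent = classified_recent / tested_with_assay
print(f"{fraction_recent:.1%} classified recent")  # 31.1% classified recent
```

That roughly 31% “recent” fraction, measured in 22 states, is then scaled up by the model to a national annual-incidence estimate, each scaling step resting on its own assumptions.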

This 2008 publication asserts that the Janssen et al. approach “now makes it possible to directly measure HIV incidence”, citing articles published in 1995, 1998, and 2002. It refers to “new technology” and a “new system”, citing the 2002 article in conjunction with “R. H. Byers, PhD, unpublished data, July 2005”. A further assumption is the criterion that “a normalized optical density of less than 0.8 on the BED assay . . . [means that] the source patient is considered recently infected”. This hodge-podge is made to appear scientifically reliable by christening it “the serologic testing algorithm for recent HIV seroconversion (STARHS)”, citing Janssen et al. (published in 1998, remember).

The public call-to-arms about 56,300 new infections was based on this STARHS approach, fortified by an “extended back-calculation” yielding 55,400 infections per year during 2003-6, the back-calculation being based on “1.230 million HIV/AIDS cases reported by the end of 2006”.

Once again: Researchers can be properly pleased when two approaches yield nearly the same result, 56,300 and 55,400. It means that what they’re doing is self-consistent.

But self-consistent doesn’t mean correct, true to reality. Outsiders might note, however, and policy makers badly need to note, that both approaches are based on the same basic assumptions, namely, that HIV entered the USA in the late 1970s and that HIV causes AIDS. But those assumptions are at glaring odds with a number of facts.

For one, the report that first led me to look at HIV-test data: that in the mid-1980s, teenaged females from all around the country were testing HIV-positive at the same rate as their male peers. In other words, a sexual infection that got its foothold around 1980 among gay men and shortly thereafter in injecting drug users had, within a few years, become distributed throughout the whole United States to the stage that teenagers planning to go into military service, and therefore rather unlikely to have been heavily into drug abuse or unsafe sex with gay men in large cities, would have already caught this lethal bug. Not only that: although this infectious disease-causing agent was already so pervasively distributed around the country, the disease itself was not.

That early publication (Burke et al., JAMA 263 [1990] 2074-7) also reported that the greatest prevalence of HIV-positive was NOT in the places where AIDS was most to be found; the male-to-female rates of HIV-positive were nothing like those for AIDS; and testing HIV-positive was more likely for black youngsters from regions with little AIDS than for white youngsters from regions with much AIDS.

No more should have been needed, one might well suggest, to scotch once and for all the mistaken connection between AIDS and HIV-positive. Instead, we are now inundated by houses of cards held together by a proliferation of assumptions modified ad hoc, all preventing research on the really pressing matters:

What does testing HIV-positive mean in the case of each individual? What should people do, who are told they are HIV-positive? What is the best treatment for people presenting with opportunistic infections?

Posted in experts, HIV absurdities, HIV and race, HIV does not cause AIDS, HIV risk groups, HIV skepticism, HIV tests, HIV transmission, HIV/AIDS numbers, M/F ratios, sexual transmission | 2 Comments »

Measuring VIRAL LOAD WITHOUT VIRUS: Where are the virions?

Posted by Henry Bauer on 2008/08/10

A continuing puzzle, at least for this lay person, is why HIV/AIDS researchers have never bothered to extract virions—whole particles of HIV—from HIV-positive people or from AIDS patients. Soon after “infection”, after all, the former are supposed to be teeming with virus, and AIDS victims are supposed to be full of virus (again) by the time opportunistic infections get a foothold; according to Fauci et al., there are then about 1,000,000 and 100,000 “HIV RNA copies”, respectively, in each milliliter of plasma, each copy supposedly representing a virion.

Since primary infection and “acute viral syndrome” are often unaccompanied by any clinical symptoms—at best (or worst) mild flu-like signs or rashes—I had long thought that it would be unfair to chide mainstream researchers for failing to extract genuine virus at that stage. But, it turns out, some researchers have been able to carry out sophisticated studies of blood drawn during those critical initial weeks of primary infection.

Gasper-Smith et al. report on “Induction of plasma (TRAIL), TNFR-2, Fas ligand, and plasma microparticles after Human Immunodeficiency Virus Type 1 (HIV-1) transmission: Implications for HIV-1 vaccine design”, Journal of Virology 82 [2008] 7700-10. They conclude that “Release of products of cell death and subsequent immunosuppression following HIV-1 transmission could potentially narrow the window of opportunity during which a vaccine is able to extinguish HIV-1 infection and could place severe constraints on the amount of time available for the immune system to respond to the transmitted virus”.

The researchers had been able to obtain from ZeptoMetrix Corporation of Buffalo (NY) “seroconversion panels” consisting of “sequential aliquots of plasma (range, 4 to 30 aliquots) collected approximately every 3 days during the time of acute infection with HIV-1”; they cite, for the availability of these seroconversion panels, Fiebig et al., “Dynamics of HIV viremia and antibody seroconversion in plasma donors: Implications for diagnosis and staging of primary HIV infection”, AIDS 17 [2003] 1871-9.

Here, it seemed to me, had been an ideal opportunity to extract veritable whole particles of HIV generated during the acute initial infection. But the only mention of “virion” in the Gasper-Smith article is in this sentence: “While the average peak HIV-1 VL level was 1,421,628 copies/ml, the average total MP peak level was 606,881,733/ml. Thus, at the times of maximum VL and MP levels, the average number of MPs was 427 times larger than the average number of virions”. “VL” of course is viral load. “MP” is not military police (or, as Lucas reminded me, Members of Parliament), it is “microparticles”:

“MPs are small membrane-bound vesicles that are released from the surface of apoptotic cells by exocytic or budding processes; . . . . MPs, which circulate in the blood under many clinical conditions, are part of a spectrum of subcellular structures that are released from cells and can be distinguished from exosomes . . . . MPs have immunomodulatory activities and can promote immune cell death; exosomes are also immunologically active, can suppress immune responses . . . , and have been reported to have been found at elevated levels in cases of chronic HIV-1 infection . . . . If elevations in levels of immunosuppressive molecules, coupled with early CD4+ T-cell death, occur early following HIV-1 transmission, then these events could potentially define a protected time during which HIV-1 is able to replicate while anti-HIV-1 T- or B-cell responses are suppressed” [emphases added].

Gasper-Smith et al. counted and extracted and studied the MPs by flow cytometry and electron microscopy. Why did they not also study HIV particles? Did the freezing and storing of the plasma destroy HIV virions while leaving MPs intact?

There were 427 times as many MPs as copies of RNA supposed to stem from HIV. MPs can “promote immune cell death”. How do we know that the CD4 cells supposedly killed by HIV weren’t killed by the MPs?

Though those questions are phrased rhetorically and left unanswered, I intend them to be taken quite seriously. If I wanted to be flippant or sarcastic, I might have commented once again on the peculiar penchant among HIV/AIDS researchers to imply that their measurements are accurate to an impossible number of significant figures when they report MPs of “606,881,733/ml”. That’s one of the drawbacks of the digital age, I suppose. In the good old days when we read measurements off scales with pointers, we weren’t tempted to write down meaningless numbers.
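For the record, here is what rounding those counts to a defensible number of significant figures looks like, along with the 427-fold ratio quoted above; `round_sig` is just an illustrative helper, not anything from the papers discussed.

```python
# Rounding a reported count to a chosen number of significant figures;
# illustrates that "606,881,733/ml" claims nine significant digits
# where two or three would be defensible.
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0
    digits = sig - 1 - floor(log10(abs(x)))
    return round(x, digits)

print(round_sig(606_881_733, 2))       # 610000000
print(round_sig(1_421_628, 2))         # 1400000
print(round(606_881_733 / 1_421_628))  # 427, the quoted MP-to-virion ratio
```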

Perhaps Fiebig et al., cited by Gasper-Smith et al. for the brilliant idea of getting those stored samples from blood donors, had looked for whole particles of HIV?

“Because of the difficulty in obtaining blood samples representing early acute HIV infection from clinical patients (most patients do not come to medical attention until weeks to months after infection), we resorted to stored, frozen plasma collections from plasma donors who, unrelated to donating, became infected with HIV and were deferred from further donating. As plasma donors donate on average twice a week, and every donation is tested for HIV and held for 60 days before release, their archived samples provide a unique record of the infection from timepoints before viral exposure until seroconversion and beyond. . . . Plasma donations (600-800 ml) from source plasma donors were routinely collected at approximately twice weekly intervals and stored frozen at −20°C or less.”

Plenty of material to work with, it would seem—600 ml is well over a pint, and ought to contain many millions of HIV virions, at “1,421,628” per ml.

But, NO. In the Fiebig article, there’s not a single mention of “virion”. They used ELISA, p24 antigen, and HIV-1-RNA tests to determine how much “HIV” was present.

—————————

Is the failure to even try to extract virions somehow related to the fact that Gallo was more often able to “isolate” HIV from “pre-AIDS” patients than from those who actually had AIDS? Here’s from the Abstract of Gallo’s ground-breaking article that followed the press conference announcing discovery of the probable cause of AIDS:

“Peripheral blood lymphocytes from patients with the acquired immunodeficiency syndrome (AIDS) or with signs or symptoms that frequently precede AIDS (pre-AIDS) were grown in vitro with added T-cell growth factor and assayed for the expression and release of human T-lymphotropic retroviruses (HTLV)” (Gallo et al., Science 224 [1984] 500-3).

That’s what Gallo means by “isolation”, as other rethinkers have often remarked. It’s not the commonly used meaning of the word, namely, “extraction” or “separation from”. And it’s not as though the “assaying” involved separating virions from those cultures, either.

“Retroviruses . . . were isolated from a total of 48 subjects including 18 of 21 patients with pre-AIDS, three of four clinically normal mothers of juveniles with AIDS, 26 of 72 adult and juvenile patients with AIDS, and from one of 22 normal male homosexual subjects”.

Why from more pre-AIDS than from actual AIDS patients?

The Abstract ends with “These results and those reported elsewhere in this issue suggest that HTLV-III may be the primary cause of AIDS” [emphases added].

From that modest suggestion, the dogma that HIV causes AIDS evolved without the benefit of direct isolation—extraction, separation—of whole infectious virions from even a single HIV-positive or AIDS-suffering person, or from plasma preserved from periods of “acute viral syndrome”.

Posted in HIV skepticism, HIV/AIDS numbers | 9 Comments »