HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


CDC’s “model” assumptions (Science Studies 103a)

Posted by Henry Bauer on 2008/09/06

An earlier post showed that the CDC’s model of HIV/AIDS is simply wrong; it yields estimates of HIV infections that are contrary to repeatedly published data for at least the first decade of the AIDS era. A wade through that article describing the CDC’s latest estimates (Hall et al., “Estimation of HIV incidence in the United States”, JAMA, 300 [2008] 520-529) is less than reassuring about what CDC does and how it does it, albeit enlightening in a head-spinning journey-through-wonderland sort of way.

To take a relevant non-technical matter first:
“the CDC determined in 2005 and again in 2007 that HIV incidence surveillance is not a research activity and therefore does not require review by an institutional review board” — at least in part, presumably, because this is “routine HIV and AIDS surveillance.”
That determination was based on specified rules and regulations and evidently satisfied bureaucratic requirements, but it’s nevertheless nonsense, in an important way. When something is described as “research”, most everyone understands that there’s a degree of uncertainty attached to the output. When, on the other hand, something is NOT research, just routine surveillance, then there’s a clear implication that the outputs are factual and trustworthy. Slogging through the details of how the calculations are made shows quite convincingly, however, that one would be foolish to place much reliance on any resulting claims — even leaving aside that, as shown earlier, those outputs are at variance with published data from official and peer-reviewed sources stretching over more than a decade.
Why not have an institutional review board look at this activity? Well, perhaps such a review would consider the associated ethical issues, since human subjects are involved. Have they given informed consent? What are the consequences for a person who is told that an HIV infection is not only present but happens to be recent? How would it affect that person’s intimate relations? And so on. A bag of worms, best left unopened. You never know, no matter how carefully you choose members for such review boards, a troublemaker might slip through the vetting process.

——————-

The article’s conclusions imply a degree of certainty that’s entirely unwarranted:
“This study provides the first direct estimates of HIV incidence in the United States using laboratory technologies previously implemented only in clinic-based settings.”
What do “direct” estimates seek to convey, if not trustworthiness? Yet those estimates are anything but direct, given the avalanche of assumptions that goes into them.

The rationale for this research-that-isn’t-research is that “the incidence of HIV infection in the United States has never been directly measured”. True, because it couldn’t be, since acquiring “HIV infection” brings no symptoms with it. However, there have been multitudes of direct measurements of HIV prevalence; and that, together with deaths from AIDS whose reporting is legally mandated, permits calculation of incidence. As shown in the earlier post, those actual calculations demonstrate that these new “direct estimates of incidence” are dead wrong.
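
To make the arithmetic concrete, here is a minimal sketch of that calculation in code; every number in it is invented, standing in only for the kind of prevalence and death figures the published sources report.

```python
# Minimal sketch with invented numbers: if prevalence is measured directly each
# year and AIDS deaths are reported, annual incidence follows from simple
# bookkeeping (ignoring migration and non-AIDS deaths) -- no model is needed.
prevalence = {1986: 1_000_000, 1987: 1_000_000, 1988: 1_000_000}   # hypothetical
aids_deaths = {1987: 16_000, 1988: 21_000}                         # hypothetical

for year in (1987, 1988):
    # new infections = change in prevalence + infected people removed by death
    incidence = prevalence[year] - prevalence[year - 1] + aids_deaths[year]
    print(year, incidence)
```

In a toy case like this, a roughly constant prevalence forces the implied incidence to track the death toll.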

The crucial mistake in CDC’s models is, of course, the assumption that HIV causes AIDS. That leads to the further assumption that the incidence of HIV can be “back-calculated” from the incidence of AIDS diagnoses. Even were the first assumption correct, back-calculation would require everything to be known about the course of “HIV disease” following infection. Given that there must be individual differences, and that any one of some 30 diseases or conditions might be the manifestation of “HIV disease”, that’s impossible; therefore, another avalanche of interlocking assumptions blankets the model.
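
For readers unfamiliar with back-calculation, the following rough sketch shows the idea; the incubation distribution, the incidence figures, and the least-squares inversion are all my own illustrative choices, not the CDC's procedure.

```python
import numpy as np

# Rough sketch of back-calculation (invented numbers, not the CDC's procedure).
# AIDS diagnoses are modeled as past HIV infections convolved with an ASSUMED
# incubation-period distribution f; "back-calculation" inverts that convolution.
n_years = 18
f = np.zeros(n_years)                    # assumed incubation distribution:
f[8:13] = [0.1, 0.2, 0.4, 0.2, 0.1]      # probability mass around ~10 years

true_incidence = np.full(n_years, 50_000.0)                  # hypothetical infections/year
aids_diagnoses = np.convolve(true_incidence, f)[:n_years]    # what surveillance would see

# Invert the convolution by least squares to "recover" incidence.
A = np.array([[f[t - s] if t >= s else 0.0 for s in range(n_years)]
              for t in range(n_years)])
estimated, *_ = np.linalg.lstsq(A, aids_diagnoses, rcond=None)

# Infections in the most recent years barely show up in any diagnosis yet, so the
# data leave them essentially unconstrained -- hence the layering-on of further
# assumptions that the text describes.
print(np.round(estimated).astype(int))
```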

These considerations in themselves ought to be enough to vitiate the whole approach, but yet more assumptions are piled on. Possibly the most critical is the “new method” for determining whether infections are recent or not. The basic concept was described (for example) ten years ago in Janssen et al., “New testing strategy to detect early HIV-1 infection for use in incidence estimates and for clinical and prevention purposes”, JAMA 280 (1998) 42-8: it’s assumed that recent infections will be reactive on a sensitive antibody test but not yet on a less sensitive one, whereas longer-standing infections will be reactive on both. It’s long been accepted that it takes a matter of weeks or months after infection before tests can pick up HIV antibodies; so, the idea goes, antibody levels rise at a not-too-rapid rate, and running sensitive and less sensitive assays simultaneously can distinguish relatively new from relatively old infections. (Analogous earlier suggestions include Brookmeyer et al., American Journal of Epidemiology 141 [1995] 166-72 and Parekh et al., AIDS Research and Human Retroviruses 18 [2002] 295-307.)
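
As I read it, the classification logic amounts to something like the sketch below; the function and labels are mine, and in practice both assays yield continuous optical-density readings that are dichotomized at chosen cutoffs, which is itself one of the assumptions at issue.

```python
# Sketch of the dual-assay ("sensitive / less-sensitive") recency logic as I
# understand it; names and wording are mine, not Janssen et al.'s.
def classify_specimen(sensitive_reactive: bool, less_sensitive_reactive: bool) -> str:
    if not sensitive_reactive:
        return "not HIV-positive by this algorithm"
    if not less_sensitive_reactive:
        # reactive on the sensitive assay only: antibody level assumed still rising
        return "classified as RECENT infection"
    return "classified as long-standing infection"

print(classify_specimen(True, False))   # classified as RECENT infection
print(classify_specimen(True, True))    # classified as long-standing infection
```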

I invite — no, I urge interested parties to read the Janssen et al. paper. I can’t post the whole thing since it’s copyrighted by the American Medical Association, but here’s a “fair use” extract to give a taste of the proliferation of approximations and assumptions:

“We estimated distribution and mean time between seroconversion on the 3A11 assay and the 3A11-LS assay using a mathematical model . . . with a variety of cutoffs. To estimate time between seroconversion on the 2 assays, we assumed a progressive increase in antibody during early infection, producing for each subject a well-defined time on each assay before which results would be nonreactive and after which results would be reactive [but bear in mind that HIV antibody tests don’t give a definitive “yes/no” — a “well-defined time” was CHOSEN]; seroconversion time on the 3A11 assay was uniformly distributed [that is, the assumption of uniform distribution was made part of the model] between time of the last 3A11 nonreactive specimen and the time of the first 3A11 reactive specimen; 3A11-LS assay seroconversion occurred no earlier than 3A11 assay seroconversion [assumption: the less sensitive test could not be positive unless the more sensitive one was]; and time difference between seroconversion on the 3A11 and 3A11-LS assays was [assumed to be] independent of seroconversion time on the 3A11 assay. We modeled time between seroconversions using a discrete distribution that assigned a probability to each day from 0 to 3000 days, estimated by maximum likelihood based on observed data on times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays, using an EM algorithm approach. A smoothing step was added to the algorithm to speed convergence and produce smooth curves; a kernel smoother with a triangular kernel was used with bandwidth (h) of 20 days. Mean times between 3A11 and 3A11-LS seroconversion were largely invariant for the range of days for smoothing bandwidths we considered (0 ≤ h ≤ 100). Confidence intervals (CIs) for mean time between seroconversions were obtained using the bootstrap percentile method. Day of 3A11 assay seroconversion was estimated from the model conditional on observed times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays and using estimated distribution of times between seroconversions. To assess ability of the testing strategy to accurately classify specimens obtained within 129 days of estimated day of 3A11 seroconversion and to correct for multiple specimens provided by subjects, we calculated the average proportion of each person’s specimens with 3A11 reactive/3A11-LS nonreactive results obtained in that period” [emphases added].
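
For readers who want a feel for just one of the statistical devices named in that passage, here is a minimal, self-contained illustration of the bootstrap percentile method for a confidence interval; the data are invented and have nothing to do with the actual 3A11/3A11-LS measurements.

```python
import random

# Bootstrap percentile method, illustrated on invented "window period" data
# (this does not reproduce the Janssen et al. calculation).
random.seed(0)
windows = [random.gauss(129, 40) for _ in range(200)]     # hypothetical per-subject days

boot_means = []
for _ in range(2000):
    resample = [random.choice(windows) for _ in windows]  # sample with replacement
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
lo, hi = boot_means[int(0.025 * len(boot_means))], boot_means[int(0.975 * len(boot_means))]
print(f"mean ~{sum(windows)/len(windows):.0f} days, 95% CI ({lo:.0f}, {hi:.0f})")
```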

Now, I’m not suggesting that there’s anything untoward about RESEARCH along these lines; quite the contrary, it’s commendable that researchers lay out all the assumptions they make so that other researchers can mull over them and decide which ones were not good and should be modified, as work continues in the attempt to develop an adequate model. What’s inappropriate is that the outputs of such highly tentative guesswork morph over time into accepted shibboleths. The CDC’s recent revision of estimates accepts as valid this approach even while admitting that it had been found to give obviously wrong results in Africa and Thailand, namely, “the misclassification of specimens as recent among persons with long-term HIV infection or AIDS, which overestimates the proportion of specimens classified as recent”. Outsiders might draw the conclusion that there’s something basically wrong and that the approach needs refining; certainly before it gets applied in ways that lead to public announcements that spur politicians into misguided action, say, that medical insurance be required to cover the costs of routine HIV tests.   (Researchers, on the other hand, merely note such failures and press on with modifications that might decrease the likelihood of misleading results.)

So: Hall et al. begin with the assumption that HIV causes AIDS. They add the corollary that HIV incidence can be back-calculated from AIDS diagnoses, which requires additional assumptions about the time between HIV infection and AIDS — not just the average “latent period”, but how the latent period is distributed: is it a normal bell-curve distribution around a mean of 10 years? Or is it perhaps a Poisson distribution skewed toward longer times? Or something else again? The fact that the precise time of infection cannot be determined, only estimated on the basis of yet further assumptions, makes this part of the procedure inherently doubtful.
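
To see why the shape of that distribution matters, and not merely its average, consider the toy calculation below; both distributions are invented, with broadly comparable centers, yet they spread the same infections into noticeably different streams of AIDS diagnoses, so inverting the process under the wrong assumption would return a wrong incidence history.

```python
import numpy as np

# Toy illustration: identical incidence pushed through two different assumed
# latent-period distributions (both invented) yields different diagnosis curves,
# so the back-calculated incidence depends on which shape one assumes.
years = np.arange(20)
incidence = np.full(20, 50_000.0)

bell = np.exp(-0.5 * ((years - 10) / 2.0) ** 2)
bell /= bell.sum()                                    # symmetric around ~10 years

skewed = np.exp(-0.5 * ((np.log(years + 1) - np.log(9)) / 0.5) ** 2)
skewed /= skewed.sum()                                # skewed toward longer times

print(np.round(np.convolve(incidence, bell)[:20]).astype(int))
print(np.round(np.convolve(incidence, skewed)[:20]).astype(int))
```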

Heaped on top of these basic uncertainties are more specific ones pertaining to the recently revised estimate of HIV infections for the whole United States. The data actually used came from only 22 States. Of an estimated 39,400 newly diagnosed HIV-positives in 2006, 6864 were tested with the assay that had proved unreliable in Africa and Thailand, and 2133 of these were classified as recent infections, which led by extrapolation to an estimated 56,300 new infections in 2006 in the United States as a whole.
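
The arithmetic of such an extrapolation looks, in outline, something like the sketch below. To be clear, the window period, coverage fraction, and detection fraction are placeholders of my own, not the weights Hall et al. actually used; the sketch is meant only to show how many adjustable inputs stand between 2,133 specimens classified as recent and a headline national figure.

```python
# Outline of a recency-based extrapolation (placeholder parameters throughout;
# this is not Hall et al.'s actual estimator or weighting scheme).
recent_specimens   = 2_133    # specimens classified "recent" among those tested
window_days        = 155      # ASSUMED mean duration of the "recent" window
coverage_fraction  = 0.25     # ASSUMED share of national diagnoses captured by the 22 states
detection_fraction = 0.80     # ASSUMED share of new infections that get diagnosed and tested

recent_per_year   = recent_specimens * (365 / window_days)
national_estimate = recent_per_year / coverage_fraction / detection_fraction
print(f"~{national_estimate:,.0f} new infections/year")
# The printed figure moves by tens of thousands as any of the assumed inputs moves.
```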

This 2008 publication asserts that the Janssen et al. approach “now makes it possible to directly measure HIV incidence”, citing articles published in 1995, 1998, and 2002. It refers to “new technology” and a “new system”, citing the 2002 article in conjunction with “R. H. Byers, PhD, unpublished data, July 2005”. A further assumption is the criterion that “a normalized optical density of less than 0.8 on the BED assay . . . [means that] the source patient is considered recently infected”. This hodge-podge is made to appear scientifically reliable by christening it “the serologic testing algorithm for recent HIV seroconversion (STARHS)”, citing Janssen et al. (published in 1998, remember).

The public call-to-arms about 56,300 new infections was based on this STARHS approach, fortified by an “extended back-calculation” yielding 55,400 infections per year during 2003-6, the back-calculation being based on “1.230 million HIV/AIDS cases reported by the end of 2006”.

Once again: Researchers can be properly pleased when two approaches yield much the same result, 56,300 and 55,400. It means that what they’re doing is self-consistent.

But self-consistent doesn’t mean correct, true to reality. Outsiders might note, however, and policy makers badly need to note, that both approaches are based on the same basic assumptions, namely, that HIV entered the USA in the late 1970s and that HIV causes AIDS. But those assumptions are at glaring odds with a number of facts.

For one, the report that first led me to look at HIV-test data: that in the mid-1980s, teenaged females from all around the country were testing HIV-positive at the same rate as their male peers. In other words, a sexual infection that got its foothold around 1980 among gay men and shortly thereafter in injecting drug users had, within a few years, become distributed throughout the whole United States to the stage that teenagers planning to go into military service, and therefore rather unlikely to have been heavily into drug abuse or unsafe sex with gay men in large cities, would have already caught this lethal bug. Not only that: although this infectious disease-causing agent was already so pervasively distributed around the country, the disease itself was not.

That early publication (Burke et al., JAMA 263 [1990] 2074-7) also reported that the greatest prevalence of HIV-positive was NOT in the places where AIDS was most to be found; the male-to-female rates of HIV-positive were nothing like those for AIDS; and testing HIV-positive was more likely for black youngsters from regions with little AIDS than for white youngsters from regions with much AIDS.

No more should have been needed, one might well suggest, to scotch once and for all the mistaken connection between AIDS and HIV-positive. Instead, we are now buried under houses of cards held together by a proliferation of assumptions modified ad hoc, all preventing research on the really pressing matters:

What does testing HIV-positive mean in the case of each individual? What should people do, who are told they are HIV-positive? What is the best treatment for people presenting with opportunistic infections?

Posted in experts, HIV absurdities, HIV and race, HIV does not cause AIDS, HIV risk groups, HIV skepticism, HIV tests, HIV transmission, HIV/AIDS numbers, M/F ratios, sexual transmission | 2 Comments »

Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t

Posted by Henry Bauer on 2008/09/04

For decades, politicians have increasingly taken expert scientific advice as a guide to public policy. That’s wonderful in principle, but not necessarily in practice, because outsiders don’t understand that the experts’ advice is based on scientific knowledge, which is always tentative and temporary; scientific theories have a limited lifetime. It’s a widespread illusion, with seriously debilitating consequences, that “science” is synonymous with “truth”.

Not that anyone would, if asked, admit that they believe that science = truth; but observe how science is commonly talked about. If we want to emphasize that something really is so, we say it’s “scientific”. If we want to denigrate something as being untrue, we call it “unscientific”; or if we want to be nasty, we say it’s “pseudo-scientific”. “Tests have shown that …” somehow doesn’t seem convincing enough, so we say, “Scientific tests have shown …”, and then we need no longer fear any contradiction.

Policy makers ought to know that advice from scientific experts is fallible. Much public policy has to be based on judgments in situations where the facts are not certain, and decisions have to be made about possible benefits balanced by possible drawbacks. Therefore

the prime responsibility of technical experts
whose advice informs politicians
is to make as clear as possible
the uncertainties in what they think they know.

Otherwise, policy is influenced by judgments made unconsciously by the technical experts, who may see the trees fairly well but who are usually ignorant about the forest.

For example, when the Centers for Disease Control and Prevention (CDC) publish estimates of something, their responsibility is to make plain, indeed to emphasize, the limits of uncertainty in those estimates. This the CDC singularly and repeatedly fails to do; for instance, it issued a press release in 2005 announcing that HIV infections had surpassed 1 million “for the first time”, when it had already been releasing estimates of about 1 million throughout the previous twenty years (p. 1 in The Origin, Persistence and Failings of HIV/AIDS Theory).

This post was prompted by the brouhaha following CDC’s recent announcement that its earlier estimate of about 40,000 annual HIV infections had been too low, its new estimate being 56,000. CDC was hardly a shrinking violet with this revision: “The new data is [sic] scheduled for publication in the peer-reviewed Journal of the American Medical Association. The report’s release is meant to coincide with the opening Sunday of the biannual International AIDS Conference in Mexico City, Mexico.”
“Dr. Kevin Fenton, director of the CDC’s National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention”, proclaimed that “‘The fact that 56,000 Americans each year are contracting HIV for the first time is a wake-up call for all of us in the U.S.’ . . . [CDC] is now using technology capable of determining when someone was infected. The new method can indicate whether someone has been infected with HIV during the previous five months, rather than relying on statistical models. Diagnosis of HIV can occur years after infection”.

News accounts do not always reflect accurately, of course, what a speaker or a press release says, but in this instance it was evidently something that could easily lead a listener or reader to believe that some “new method” had supplanted “statistical models” — which is entirely untrue. A few media accounts did mention that this new number is simply a revised estimate, not a claim that the rate of HIV infections has been on the increase. What none of the media accounts that I have seen has pointed out is how fraught with assumptions and uncertainties this new estimate is, and how wrong are the conclusions of this “new method” when tested against reported “HIV” numbers from earlier years. Figure A shows what the CDC’s “new method”, combined with its computer-statistical model, “predicts” new HIV infections to have been since before the beginning of the AIDS era (source: Hall et al., “Estimation of HIV incidence in the United States”, JAMA, 300 [2008] 520-529).

Figure A

I’ve highlighted several clues to how uncertain all this is, though the clues are not hard to recognize. What is not at all uncertain, though, is that the estimates given in this Figure are totally at odds with data-based estimates of HIV infections during at least the first decade of the AIDS era.

From the Y-axis scale, Figure A yields the numbers in Table A, Column I. Column II lists the AIDS deaths during the relevant periods (from “Health, United States” reports). Column III is the net estimated prevalence, namely, the cumulation of annual new infections in Column I minus the deaths. Column IV lists earlier estimates from official reports and peer-reviewed articles. The CDC’s “new method” combined with their computer-statistical model constitutes a drastic re-writing of history. And just as the Soviet Union rewrote history all the time without mentioning the old version — let alone explaining what was wrong with it — CDC fails to mention the numbers it and peer-reviewed articles had propagated during the 1980s and 1990s.
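
In code, the bookkeeping behind Column III is nothing more than the following; the numbers are placeholders, not the table's entries.

```python
# How Table A's Column III is assembled from Columns I and II (placeholder
# numbers): net prevalence = running total of new infections minus AIDS deaths.
new_infections = {1980: 20_000, 1981: 40_000, 1982: 80_000}   # Column I (placeholders)
aids_deaths    = {1980: 0,      1981: 500,    1982: 2_000}    # Column II (placeholders)

net_prevalence = 0
for year in sorted(new_infections):
    net_prevalence += new_infections[year] - aids_deaths[year]
    print(year, net_prevalence)   # Column III, to be set against Column IV's published estimates
```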

Those earlier estimates in Column IV had been made in a quite straightforward manner. The actually measured rate of testing “HIV-positive” in various groups was multiplied by the size of each group. Military cohorts, blood donors, and Job Corps members were routinely tested. Sentinel surveys had been carried out in hospitals and a range of clinics, and special attention had been paid to sampling homosexual men and injecting drug users. The only uncertainty was in estimating the sizes of the various groups, but good information was available about most of those, and moreover there was a National Household Survey that provided a good check on what was typical for the general population overall. Persistently over two decades, the result was an approximately constant prevalence of something like 1 million.
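
A sketch of that straightforward arithmetic, with entirely invented rates and group sizes chosen only to show the form of the calculation, not to reproduce any published survey:

```python
# Prevalence from measured group rates multiplied by estimated group sizes (all
# figures invented; the groups are meant to partition the adult population).
groups = {
    "men who have sex with men": (0.10,  5_000_000),
    "injecting drug users":      (0.15,  1_000_000),
    "rest of adult population":  (0.002, 180_000_000),
}
prevalence = sum(rate * size for rate, size in groups.values())
print(f"estimated prevalence ~ {prevalence:,.0f}")   # about 1 million with these inputs
```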

That fact is incompatible, however, with HIV/AIDS theory, which insists that “HIV” somehow entered the USA in the late 1970s. Naturally, the CDC’s model incorporates that assumption, even though it remains unproven and is incompatible with surveillance of HIV infections since 1985. Now CDC continues its attempt to shape public policy with numbers derived from a model whose validity is not merely uncertain but demonstrably invalid.

That seems incredible, but Science Studies once again offers insight. The modelers know they are “just” modeling, trying to establish the best possible algorithms to describe what’s happening. In their publication, they scrupulously set out all the assumptions made in this latest set of calculations; indeed, almost the whole text of the article describes one assumption after another. The failure to discuss how incompatible the model is with data from 1985 through the late 1990s is, plausibly, because the authors are not even aware of those earlier publications — they’re working on this particular model, that’s all. The blame, if any, should be directed at the administrators and supervisors, whose responsibility it is to know something about the forest, and most particularly about the CDC’s responsibility to the wider society: not to arouse panic without good cause, for example; to ensure that press releases are so clear to lay people that the media will not misrepresent them. But CDC big-shots, like bureaucrats in other agencies, suffer inevitable conflicts of interest: they want to attract the largest possible funding and to gain the highest possible public appreciation, esteem, prestige. That’s why, in the early days of AIDS, the CDC had hired a PR firm to convince everybody that AIDS was a threat to every last person, even as they knew that it wasn’t (Bennett & Sharpe, “AIDS fight is skewed by federal campaign exaggerating risks”, Wall Street Journal 1 May, 1996, A1, 6.)

At any rate, this latest misleading of the public, seemingly not unintentional, is far from unprecedented. The crimes and misdemeanors of CDC models are legion; see, for example, “Numbers”, “Getting the desired numbers”, and “Reporting and guesstimating” (respectively p. 135 ff., p. 203 ff., and p. 220 ff. in The Origin, Persistence and Failings of HIV/AIDS Theory). Consider the instance in Table B of CDC-model output that was wildly off the mark. The modelers had seen fit to publish this, as though it were somehow worthy of attention, even though the calculated male-to-female ratios for “HIV-positive” are completely unlike anything encountered in actual HIV tests in any group for which such data had been reported for the previous dozen years.

Not that WHO or UNAIDS models are any better, as James Chin — who designed and used some of them — has pointed out cogently (“The AIDS Pandemic”). Jonathan Mann was one of the first international HIV/AIDS gurus, responsible for authoritative edited collections like “AIDS in the World II: Global Dimensions, Social Roots, and Responses” (ed. Jonathan A. Mann & Daniel J. M. Tarantola, Oxford University Press, 1996). In that volume, the cumulative number of HIV infections in the USA is confidently reported as in Table C below. On the one hand, I give Mann et al. high marks for restricting themselves to 4 significant figures and avoiding the now-standard HIV/AIDS-researchers’ penchant for giving all computer outputs to the nearest person. On the other hand, their estimates are in total disagreement with those based on actual data obtained during the relevant years.

The CDC’s 2008 model is a bit closer than WHO’s to the data-based estimates, but it’s still wildly off the mark, at least up to 1990.

The model is clearly invalid
and the numbers derived from it are WRONG

This post is already long enough. I’ve written more about science not being truth and related matters in Fatal Attractions: The Troubles with Science (New York: Paraview Press, 2001). In another post I’ll write more specifically about this latest CDC publication, the array of unvalidated underlying assumptions as well as hints of troubling conflicts of interest.

Posted in clinical trials, experts, Funds for HIV/AIDS, HIV/AIDS numbers, M/F ratios, uncritical media | 1 Comment »