HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t

Posted by Henry Bauer on 2008/09/04

For decades, politicians have increasingly taken expert scientific advice as a guide to public policy. That’s wonderful in principle, but not necessarily in practice, because outsiders don’t understand that the experts’ advice is based on scientific knowledge, which is always tentative and temporary; scientific theories have a limited lifetime. It’s a widespread illusion, with seriously debilitating consequences, that “science” is synonymous with “truth”.

Not that anyone would, if asked, admit that they believe that science = truth; but observe how science is commonly talked about. If we want to emphasize that something really is so, we say it’s “scientific”. If we want to denigrate something as being untrue, we call it “unscientific”; or if we want to be nasty, we say it’s “pseudo-scientific”. “Tests have shown that …” somehow doesn’t seem convincing enough, so we say, “Scientific tests have shown …”, and then we need no longer fear any contradiction.

Policy makers ought to know that advice from scientific experts is fallible. Much public policy has to be based on judgments in situations where the facts are not certain, and decisions have to be made about possible benefits balanced by possible drawbacks. Therefore

the prime responsibility of technical experts
whose advice informs politicians
is to make as clear as possible
the uncertainties in what they think they know

Otherwise, policy is influenced by judgments made unconsciously by the technical experts, who may see the trees fairly well but who are usually ignorant about the forest.

For example, when the Centers for Disease Control and Prevention (CDC) publish estimates of something, their responsibility is to make plain, indeed to emphasize, the limits of uncertainty in those estimates. This the CDC singularly and repeatedly fails to do; for example, it issued a press release in 2005 announcing that HIV infections had surpassed 1 million “for the first time”, when it had already released estimates of about 1 million throughout the previous twenty years (p. 1 in The Origin, Persistence and Failings of HIV/AIDS Theory).

This post was prompted by the brouhaha following CDC’s recent announcement that its earlier estimate of about 40,000 annual HIV infections had been too low, its new estimate being 56,000. CDC was hardly a shrinking violet with this revision: “The new data is [sic] scheduled for publication in the peer-reviewed Journal of the American Medical Association. The report’s release is meant to coincide with the opening Sunday of the biannual International AIDS Conference in Mexico City, Mexico.”
“Dr. Kevin Fenton, director of the CDC’s National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention”, proclaimed that “‘The fact that 56,000 Americans each year are contracting HIV for the first time is a wake-up call for all of us in the U.S.’ . . . [CDC] is now using technology capable of determining when someone was infected. The new method can indicate whether someone has been infected with HIV during the previous five months, rather than relying on statistical models. Diagnosis of HIV can occur years after infection”.

News accounts do not always reflect accurately, of course, what a speaker or a press release says, but in this instance it was evidently something that could easily lead a listener or reader to believe that some “new method” had supplanted “statistical models” — which is entirely untrue. A few media accounts did mention that this new number is simply a revised estimate, not a claim that the rate of HIV infections has been on the increase. What none of the media accounts that I have seen has pointed out is how fraught with assumptions and uncertainties this new estimate is, and how wrong are the conclusions of this “new method” when tested against reported “HIV” numbers from earlier years. Figure A shows what the CDC’s “new method”, combined with its computer-statistical model, “predicts” new HIV infections to have been since before the beginning of the AIDS era (source: Hall et al., “Estimation of HIV incidence in the United States”, JAMA, 300 [2008] 520-529).

Figure A

I’ve highlighted several clues to how uncertain all this is, though the clues are not hard to recognize. What is not at all uncertain, though, is that the estimates given in this Figure are totally at odds with data-based estimates of HIV infections during at least the first decade of the AIDS era.

From the Y-axis scale, Figure A yields the numbers in Table A, column I. Column II lists the AIDS deaths during the relevant periods (from “Health, United States” reports). Column III is the net estimated prevalence, namely, the cumulation of annual new infections in Column I minus the deaths. Column IV lists earlier estimates from official reports and peer-reviewed articles. The CDC’s “new method” combined with their computer-statistical model constitutes a drastic re-writing of history. And just as the Soviet Union rewrote history all the time without mentioning the old version — let alone explaining what was wrong with it —, CDC fails to mention the numbers it and peer-reviewed articles had propagated during the 1980s and 1990s.

Those earlier estimates in Column IV had been made in a quite straightforward manner. The actually measured rate of testing “HIV-positive” in various groups was multiplied by the size of each group. Military cohorts, blood donors, and Job Corps members were routinely tested. Sentinel surveys had been carried out in hospitals and a range of clinics, and special attention had been paid to sampling homosexual men and injecting drug users. The only uncertainty was in estimating the sizes of the various groups, but good information was available about most of those, and moreover there was a National Household Survey that provided a good check on what is typical for the general population overall. Persistently over two decades, the result was an approximately constant prevalence of something like 1 million.
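The arithmetic behind that straightforward method can be sketched as follows. All group sizes and “HIV-positive” rates here are invented placeholders, chosen only to illustrate the calculation; they are not the actual surveillance figures.

```python
# Sketch of the data-based estimation method: multiply each group's
# measured rate of testing "HIV-positive" by the estimated size of the
# group, then sum over groups. All numbers are hypothetical.

groups = {
    # group: (estimated group size, measured fraction testing "HIV-positive")
    "military cohorts":            (2_000_000,   0.0015),
    "blood donors":                (8_000_000,   0.0002),
    "Job Corps members":           (100_000,     0.003),
    "remaining general population": (180_000_000, 0.004),
}

def estimated_prevalence(groups):
    """Sum of (group size x measured positive rate) over all groups."""
    return sum(size * rate for size, rate in groups.values())

print(round(estimated_prevalence(groups)))
```

The only real uncertainty enters through the group sizes, which is the point made above; the measured rates themselves come from actual tests.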

That fact is incompatible, however, with HIV/AIDS theory, which insists that “HIV” somehow entered the USA in the late 1970s. Naturally, the CDC’s model incorporates that assumption even though it remains unproven and is incompatible with surveillance of HIV infections since 1985. Now CDC continues its attempt to shape public policy with numbers derived from a model whose validity is not merely uncertain, it’s demonstrably invalid.

That seems incredible, but Science Studies once again offers insight. The modelers know they are “just” modeling, trying to establish the best possible algorithms to describe what’s happening. In their publication, they scrupulously set out all the assumptions made in this latest set of calculations; indeed, almost the whole text of the article describes one assumption after another. The failure to discuss how incompatible the model is with data from 1985 through the late 1990s is, plausibly, because the authors are not even aware of those earlier publications — they’re working on this particular model, that’s all.

The blame, if any, should be directed at the administrators and supervisors, whose responsibility it is to know something about the forest, and most particularly about the CDC’s responsibility to the wider society: not to arouse panic without good cause, for example; to ensure that press releases are so clear to lay people that the media will not misrepresent them.

But CDC big-shots, like bureaucrats in other agencies, suffer inevitable conflicts of interest: they want to attract the largest possible funding and to gain the highest possible public appreciation, esteem, prestige. That’s why, in the early days of AIDS, the CDC had hired a PR firm to convince everybody that AIDS was a threat to every last person, even as they knew that it wasn’t (Bennett & Sharpe, “AIDS fight is skewed by federal campaign exaggerating risks”, Wall Street Journal, 1 May 1996, A1, 6).

At any rate, this latest misleading of the public, seemingly not unintentional, is far from unprecedented. The crimes and misdemeanors of CDC models are legion; see, for example, “Numbers”, “Getting the desired numbers”, and “Reporting and guesstimating” (respectively p. 135 ff., p. 203 ff., and p. 220 ff. in The Origin, Persistence and Failings of HIV/AIDS Theory). Consider the instance in Table B of CDC-model output that was wildly off the mark. The modelers had seen fit to publish this, as though it were somehow worthy of attention, when the calculated male-to-female ratios for “HIV-positive” are completely unlike anything encountered in actual HIV tests in any group for which such data had been reported for the previous dozen years.

Not that WHO or UNAIDS models are any better, as James Chin — who designed and used some of them — has pointed out cogently (“The AIDS Pandemic”). Jonathan Mann was one of the first international HIV/AIDS gurus, responsible for authoritative edited collections like “AIDS in the World II: Global Dimensions, Social Roots, and Responses” (ed. Jonathan A. Mann & Daniel J. M. Tarantola, Oxford University Press, 1996). In that volume, the cumulative number of HIV infections in the USA is confidently reported as in Table C below. On the one hand, I give Mann et al. high marks for restricting themselves to 4 significant figures and avoiding the now-standard HIV/AIDS-researchers’ penchant for giving all computer outputs to the nearest person. On the other hand, their estimates are in total disagreement with those based on actual data obtained during the relevant years.

The CDC’s 2008 model is a bit closer than WHO’s to the data-based estimates, but it’s still wildly off the mark, up at least to 1990.

The model is clearly invalid
and the numbers derived from it are WRONG

This post is already long enough. I’ve written more about science not being truth and related matters in Fatal Attractions: The Troubles with Science (New York: Paraview Press, 2001). In another post I’ll write more specifically about this latest CDC publication, the array of unvalidated underlying assumptions as well as hints of troubling conflicts of interest.

Posted in clinical trials, experts, Funds for HIV/AIDS, HIV/AIDS numbers, M/F ratios, uncritical media

CDC versus CDC: Which Data to Believe?

Posted by Henry Bauer on 2008/08/15

I’ve commented critically, on numerous occasions, in many connections, on the fallacy of accepting outputs from computer models as though they were reliable data. I’ve also noted on several occasions that the so-called “Surveillance Reports” published by the Centers for Disease Control and Prevention (CDC) have increasingly — since the late 1990s — featured estimates rather than reported numbers (for example, see Table 33, below, from The Origin, Persistence and Failings of HIV/AIDS Theory, and the following pages in the book).

Another egregious example of estimates taking the place of reported numbers turned up as I was looking into information about deaths from “AIDS” (= “HIV disease”). That led me to remember that bureaucracies are ill suited to doing, assessing, managing, or reporting matters scientific: bureaucracies are not good at self-criticism; internal disagreements are wherever possible hidden from outsiders and settled by political rather than scientifically substantive negotiations. That’s part of the reason why 21st-century science is becoming riddled with knowledge monopolies and research cartels.

The Centers for Disease Control and Prevention is a sizeable bureaucracy. Some 16 units report to the Director.

Within the Coordinating Center for Infectious Diseases reside four National Centers, for:
— Immunization and Respiratory Diseases (NCIRD)
— Zoonotic, Vector-Borne, and Enteric Diseases (NCZVED)
— HIV/AIDS, Viral Hepatitis, STD, and TB Prevention (NCHHSTP)
— Preparedness, Detection, and Control of Infectious Diseases (NCPDCID)

NCHHSTP houses a variety of programs under 6 “topics”:
— HIV/AIDS
— Sexually Transmitted Diseases
— Viral Hepatitis
— Tuberculosis
— Global AIDS
— BOTUSA (Botswana-USA).
[That “HIV/AIDS” and “Sexually Transmitted Diseases” are separate “topics” does not, regrettably, mean that the CDC has now acknowledged that HIV/AIDS is not sexually transmitted.]

Within (presumably) the “HIV/AIDS” topic is the Division of HIV/AIDS Prevention, which has published HIV/AIDS Surveillance Reports.

Within the Coordinating Center for Health Information and Service (CCHIS) reside three National Centers:
— Health Statistics (NCHS)
— Public Health Informatics (NCPHI) (has 5 divisions)
— Health Marketing (NCHM)
[For anyone who is not squeamish about bureaucratic and PR jargon, I recommend highly the explanation of what “health marketing” is (and if you can explain what the explanation means, please let me know)]

Evidently the publishers of the HIV/AIDS Surveillance Reports are quite a few bureaucratic steps away from the National Center for Health Statistics, which publishes the National Vital Statistics Reports (NVSR) and annual summaries of Health, United States (HUS). Perhaps that explains why the data in the Surveillance Reports differ so much from those in NVSR and HUS.

Take the instance of deaths in 2004 from “HIV disease”.

NVSR 56 #5, 20 November 2007, using “information from all death certificates filed in the 50 states and the District of Columbia”, lists by age group (in its Table 1) the numbers of recorded deaths, and the death rates per 100,000, for the ten leading causes of death in each group. “Human immunodeficiency virus (HIV) disease” appears as one of those ten leading causes only between ages 19 and 54. There are listed 160 deaths among 20-24-year-olds, 1468 deaths among ages 25-34, 4826 deaths among ages 35-44, and 4422 deaths among ages 45-54.

However, numbers for some of the other age groups can be calculated because the death rates for them are supplied in Health, United States, 2007 — With Chartbook on Trends in the Health of Americans (National Center for Health Statistics, Hyattsville, MD: 2007). Appendix I confirms what is said in NVSR: “Numbers of . . . deaths from the vital statistics system represent complete counts . . . . Therefore, they are not subject to sampling error”. Table 42 [also featured in an earlier post, “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008] is for deaths from HIV disease:

* Rates based on fewer than 20 deaths are considered unreliable and are not shown.

(Note again, under the heading of Table 42, “Data are based on death certificates”.)

These rates allow calculation of actual numbers of HIV-disease deaths for age groups from 5 through 84 years of age (column F, Table I below), because the NVSR gives not only numbers but also the corresponding rates for each age group, allowing calculation of the factor connecting rate and number (see column D). (The factor is independent of the particular disease but varies with age: it reflects how many individuals are within that age group in the whole population.) Together with the numbers already given in NVSR, this yields numbers of deaths for the whole range from 5 to 84 years of age, column G.
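The back-calculation just described can be sketched as follows. The factor connecting rate and count is simply the age group’s population divided by 100,000, and it can be derived from any cause of death for which both the count and the rate are given. The numbers below are invented placeholders, not the published NVSR or Table 42 figures.

```python
# Sketch of the rate-to-number calculation described above.
# Placeholder NVSR pair for one age group and one cause of death:
# 13,440 recorded deaths at a rate of 30.0 per 100,000.
factor = 13440 / 30.0   # = age-group population / 100,000; disease-independent

def deaths_from_rate(rate_per_100k, factor):
    """Convert a death rate per 100,000 into an absolute death count."""
    return rate_per_100k * factor

# Applying the same factor to a (placeholder) HIV-disease rate of
# 10.8 per 100,000 for that age group recovers the absolute count:
print(round(deaths_from_rate(10.8, factor)))  # 4838
```

The same factor works for every cause of death within an age group, which is what makes the cross-check between NVSR numbers and Table 42 rates possible.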

Now compare those numbers with the estimates published in Table 7 of HIV/AIDS Surveillance Report, volume 18, “Cases of HIV infection and AIDS in the United States and Dependent Areas, 2006”, presenting data “reported to CDC through June 2007”) :

For 2004, here is a comparison of the numbers from these two sources within CDC:

The estimates from the CDC are on average 21% greater than the actually recorded numbers. Moreover, the error varies with age group in a remarkably regular way, one that exaggerates the median age of death by more than 3 years.
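The comparison amounts to computing, for each age group, the percentage by which the estimate exceeds the recorded count, and then averaging. The “estimated” figures below are invented placeholders used only to show the arithmetic; the recorded counts are the NVSR numbers quoted earlier.

```python
# Sketch of the age-group comparison of estimates vs recorded counts.
recorded  = {"25-34": 1468, "35-44": 4826, "45-54": 4422}  # NVSR counts
estimated = {"25-34": 1650, "35-44": 5900, "45-54": 5600}  # placeholders

def pct_excess(est, rec):
    """Percent by which each estimate exceeds the recorded count."""
    return {age: 100.0 * (est[age] - rec[age]) / rec[age] for age in rec}

excess = pct_excess(estimated, recorded)
mean_excess = sum(excess.values()) / len(excess)
print({age: round(p, 1) for age, p in excess.items()})
print(round(mean_excess, 1))
```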

Now, Table 7 in the Surveillance Report does have this caveat, in small print in a footnote to the Table: “These numbers do not represent reported case counts. Rather, these numbers are point estimates, which result from adjustments of reported case counts. The reported case counts have been adjusted for reporting delays and for redistribution of cases in persons initially reported without an identified risk factor, but not for incomplete reporting” [emphasis added]. Incomplete reporting for 2004 should hardly be a problem, however, in a publication that presents data “reported to CDC through June 2007”; nor would incomplete reporting vary with age group in this remarkable manner; it would be more random.

Such “adjustments” 3 and 4 years after the event are no rarity in these CDC HIV/AIDS publications. For example, deaths “reported” for the 1980s were “adjusted” downwards in wholesale fashion more than half-a-dozen years later, thereby obscuring the fact that the earlier data had shown deaths to have been leveling off; see Table 33, p. 221 in The Origin, Persistence and Failings of HIV/AIDS Theory:

Note how “reported” deaths for the years through 1986 somehow decreased dramatically between the 1988 report and the 1989 report. Such re-writing of historical facts will be familiar to students of the former Soviet Union, but it is not normally found in scientific publications.

At any rate, CDC unapologetically—indeed, without admitting it or drawing attention to it—routinely publishes considerably revised “estimates”; for example (Table III), for deaths in 2002 as given in the 2005 and 2006 Surveillance Reports. Table 7 in the 2006 Report does not warn that numbers for as far back as 2002 are different from those for the same years in the 2005 Report.

The Technical Notes do warn: “Tabulations of deaths of persons with AIDS (Table 7) do not reflect actual counts of deaths reported to the surveillance system. Rather, the estimates are based on numbers of reported deaths, which have been adjusted for delays in reporting”.

The estimates may be based on reported deaths; but if so, then they are very loosely based on them indeed, since they differ by as much as 38% in some age groups, see Table II above. That adjustments from one year to the next are so similar in percentage terms for the various age groups (Table III); that the differences between actual counts and “estimates” vary in such regular fashion with age (Table II); and that the numbers given are “point estimates” all indicate that the estimates are arrived at by means of some sort of overarching algorithm, computer model, or graphical representation, with—presumably—periodic adjustment of some of the assumptions or parameters defining the model. However, when estimates, no matter how derived, are claimed to be “based on numbers of reported deaths”, one expects that the mode of estimating will be progressively refined over the years to bring the estimates closer to the actual numbers. That has evidently not been the case here: estimated “data” for deaths for 2004 are shockingly different from the reports based on death certificates (Table II).

Once again—or rather, as usual—HIV/AIDS “researchers” imply greater accuracy than is warranted. The “point estimates” in Table III differ from year to year by a couple of percent, so the numbers should never be written to more than 3 significant figures. When they differ from actual numbers as much as in Table II, even two significant figures give a false impression.
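A small helper makes the significant-figures point concrete: for instance, the recorded 4826 deaths for ages 35-44 becomes 4830 at three significant figures and 4800 at two, which is all the precision such estimates could honestly claim.

```python
# Round a value to n significant figures.
from math import floor, log10

def sig_figs(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0
    return round(x, n - 1 - floor(log10(abs(x))))

print(sig_figs(4826, 3))  # 4830
print(sig_figs(4826, 2))  # 4800
```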

The overall description at the beginning of the Surveillance Report is also misleading: “Data are presented for cases of HIV infection and AIDS reported to CDC through June 2007. All data are provisional.” Nothing here about “estimates”, and the reader who scans without careful attention to fine-print footnotes and Technical Notes could easily believe—given that numbers are given to four and five significant figures—that these really are “reported” “data”, not computer garbage-output emanating from invalid models. Nor are readers referred to NVSR or HUS; the only mention of either is in the Technical Notes and does not refer to Table 7: “The population denominators used to compute these rates for the 50 states and the District of Columbia were based on official postcensus estimates for 2006 from the U.S. Census Bureau [24] and bridged-race estimates for 2006 obtained from the National Center for Health Statistics [25].”

Why would one publish estimates when actual numbers are reported by a sibling unit in the same bureaucracy? After all, death certificates are a legal requirement, and information from them should be as trustworthy as demographic data ever can be. Is it coincidental that the HIV/AIDS specialists always overestimate?

Posted in HIV varies with age, HIV/AIDS numbers

More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death

Posted by Henry Bauer on 2008/07/12

HAART had supposedly saved at least 3 million years of life by 2003, thereby supposedly justifying the expenditure of $21 billion in 2006 from federal US government funds alone—how much more was disbursed or used by charities and other NGOs is not known. On examination, that claimed 3 million turned out to be 1.2 million; and since these are not lives but life-years, they represent the lives of perhaps 6% of AIDS victims [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008; HIV/AIDS SCAM: Have antiretroviral drugs saved 3 million life-years?, 6 July 2008]. Not so impressive after a quarter century of research costing >$100 billion.

Another more recently trumpeted claim of benefits from antiretroviral therapy is that the “excess mortality” ascribed to “HIV” has decreased substantially in the era of HAART (Bhaskaran et al. for the CASCADE collaboration, “Changes in the risk of death after HIV seroconversion compared with mortality in the general population”, JAMA 300 [2008] 51-59). This article resembles the older one in its reliance on computer modeling to produce desired results; in addition, it displays astonishing ignorance of such HIV/AIDS basics as the latent period of 10 years between “infection” and illness; and it deserves a Proxmire Golden Fleece Award for discovering what was already known.
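For readers unfamiliar with the term: “excess mortality” in such studies is, in outline, the cohort’s observed death rate minus the rate expected from general-population mortality at the same ages. A minimal sketch, with invented numbers chosen purely for illustration:

```python
# Minimal sketch of an excess-mortality calculation, with hypothetical data.
def excess_mortality(observed_deaths, person_years, general_rate):
    """Excess deaths per 1000 person-years relative to the general population.

    general_rate is the general-population death rate per person-year.
    """
    observed_rate = 1000.0 * observed_deaths / person_years
    expected_rate = 1000.0 * general_rate
    return observed_rate - expected_rate

# Hypothetical cohort: 120 deaths over 50,000 person-years, against a
# general-population rate of 0.0015 deaths per person-year,
# giving roughly 0.9 excess deaths per 1000 person-years.
print(excess_mortality(120, 50_000, 0.0015))
```

The published analyses are of course far more elaborate (age-matching, follow-up censoring, and the modeling techniques discussed below), but the subtraction of an expected rate from an observed one is the core of the quantity being claimed.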

The methodology is described in laudable detail, which reminded me of the V-P who always got his requested budget because he submitted it as a computer print-out [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008]; how many unqualified fools like me would rush in when Bhaskaran et al. talk of “the familiar Cox hazard ratio”, “Kaplan-Meier methods”, “Poisson-based model”, and use of Stata version 10 for the statistical analysis? Yet the weakness of the whole approach is separate from any possible technical flaws: assertions and assumptions are made that are demonstrably wrong. [Which is not to deny that specialists might well also question the applicability of any one or all of those mentioned techniques to this particular task. Specialists might also want more information than the statement that “The median duration of follow-up was 6.3 years (range, 1 day to 23.8 years), with 16 344 individuals (99%) having more than 1 month of follow-up” — what exactly does “follow-up” mean here? Were not all of these patients monitored throughout the study?]

Bhaskaran et al. ascribe to antiretroviral drugs the lower mortality in the HAART era compared to the pre-HAART era. It is at least equally plausible that this reduction in “excess mortality” was owing to the abandonment of high-dose AZT monotherapy. After all, deaths from AIDS in the United States about doubled from 1987 to 1990, and increased by more than another 50% from 1990 to 1995, dropping back then to 1987 levels (National Center for Health Statistics, Table 42, p. 236, in “Health, United States, 2007”; “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008; “Disproof of HIV/AIDS Theory”, 30 June 2008).

Bhaskaran et al. themselves admit—albeit only in by-the-way fashion in concluding comments—that their analysis is rotten at the core: “it is likely that HIV-infected individuals in our study differ from the general population in other ways”. Yes indeed! Or rather, it’s not that the studied group (HIV-positives) is “likely” to differ in multiple ways from the “control” group (HIV-negative general population), it’s a certainty that they do. On the mainstream view of HIV/AIDS, HIV-positive people have been exposed to health risks that others have not, bespeaking significant behavioral differences. On my view and that of many others, “HIV-positive” is—like a fever—an indication that the immune system has reacted against something or other, that HIV-positive people have been exposed to health challenges that HIV-negative people have not. So differences in mortality between these two groups may have nothing at all to do with “HIV”.

The gross ignorance of HIV/AIDS matters displayed in this article is illustrated by the statement, also by-the-way in the concluding comments, that “race/ethnicity are also likely to differ among HIV-infected persons”. How could these authors not know that “HIV” is found disproportionately among people of African ancestry?

Here is a further illustration of incredible ignorance of HIV/AIDS matters: “Interestingly, we found that by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population . . . further research will be needed before our finding of no excess mortality in the first 5 years of infection in 2004-2006 can be generalized beyond those diagnosed early in infection”.
Almost from the very beginning, one of the salient mysteries about the lentivirus (slow virus) HIV has been the “latent period” between presumed infection by HIV and the appearance of any symptoms of illness. That latent period is nowadays agreed to be about 10 years. Therefore there should be no excess mortality at all for an average of 10 years after infection among people not being treated with HAART, and of course for much longer if HAART staves off AIDS. Unless, of course, “HIV” is causing death in symptom-less people, so that deaths from “HIV disease” during the latent period are deaths without apparent cause. It seems unlikely that such a phenomenon would long have gone unnoticed. Here is a typical representation of the supposed progression from infection to illness and death:

The death rate shown during the putative latent period is flat and runs along the baseline.

All this makes the authors’ modest admission that “Our study has some limitations” more than a little inadequate. The many obvious deficiencies in this article, notably the ignorance of latent period, reflect unkindly not only on the authors but also on the journal, its editorial procedures, and the lack of competence or diligence of the “peer reviewers” who presumably were engaged to comment expertly on whether this deserved to be published. What on earth has happened to medical “science”? Or was it always so defective in such obvious ways?

As to Golden Fleece Awards, there is the finding that “those exposed through IDU [were] at significantly higher risk than those exposed through sex between males”. Yes indeed, drugs are not good for you! But then it has been routine among HIV/AIDS experts to discount the risks of illegal drugs by comparison to those of “HIV”, to the extent that there are continuing campaigns to provide drug addicts with fresh, clean needles; and occasional surprise is expressed that injecting drug users typically have health problems [COCAINE AND HEROIN AREN’T GOOD FOR YOU! — a Golden Fleece Award, 13 June 2008]. In the end, Bhaskaran et al. do seem to be aware of this: “It is unlikely that HIV infection is the only factor leading to increased mortality rates among those exposed through IDU” because of, among other things, “the direct risks of substance abuse”.

No less surprising (to Bhaskaran et al., that is) than the poorer health of drug addicts is the finding that older people are less able than younger people to stave off health challenges: “Older age at seroconversion was associated with a higher risk of excess mortality . . . there was a clear gradient of increasing risk of excess mortality with increasing age at seroconversion”.
In other words, the older you are when you “seroconvert”—become infected, according to mainstream views, or encounter some sort of health challenge, according to Perth-Group-type views—the more likely you are to succumb, compared to people of the same age who have not encountered the same challenge. Who would have thought it?

Yet another finding worthy of attention was that “Females were at consistently lower risk [of dying] than males”. On the one hand, even most lay people are aware that women have a greater life expectancy than men (in most countries and in all developed ones). On the other hand, might not this finding have stimulated some thought among the authors as to whether it means anything specifically about “HIV-positive” as signifying infection by a virus?


Here, as so often, some of what I’ve written might appear to accept that HIV is infectious and causes illness. That is not so; I am merely pointing out that even on its own terms, the HIV/AIDS view would still be wrong about the claimed benefits of antiretroviral drugs: there is no evidence that they prolong life. At best, as Dr. Juliane Sacher has pointed out, they might bring a temporary benefit by acting as antibiotics, for they certainly are inimical to life.


ACKNOWLEDGMENT: I am grateful to Fulano de Tal (a commonly used pseudonym, compare “John Doe”) who pointed out that an earlier version of this post included speculations based on US data that are irrelevant here since the CASCADE study includes only European cohorts. I also added the graph in response to one of “Tal”’s comments, because I was not able to put the graph into my response.

Posted in antiretroviral drugs, experts, Funds for HIV/AIDS, HIV absurdities, HIV and race, HIV as stress, HIV does not cause AIDS, HIV varies with age, HIV/AIDS numbers, M/F ratios