HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

CDC versus CDC: Which Data to Believe?

Posted by Henry Bauer on 2008/08/15

I’ve commented critically, on numerous occasions, in many connections, on the fallacy of accepting outputs from computer models as though they were reliable data. I’ve also noted on several occasions that the so-called “Surveillance Reports” published by the Centers for Disease Control and Prevention (CDC) have increasingly — since the late 1990s — featured estimates rather than reported numbers (for example, see Table 33, below, from The Origin, Persistence and Failings of HIV/AIDS Theory, and the following pages in the book).

Another egregious example of estimates taking the place of reported numbers turned up as I was looking into information about deaths from “AIDS” (= “HIV disease”). That led me to remember that bureaucracies are ill suited to doing, assessing, managing, or reporting matters scientific: bureaucracies are not good at self-criticism; internal disagreements are wherever possible hidden from outsiders and settled by political rather than scientifically substantive negotiations. That’s part of the reason why 21st-century science is becoming riddled with knowledge monopolies and research cartels.

The Centers for Disease Control and Prevention is a sizeable bureaucracy. Some 16 units report to the Director:


Within the Coordinating Center for Infectious Diseases reside four National Centers, for:
— Immunization and Respiratory Diseases (NCIRD)
— Zoonotic, Vector-Borne, and Enteric Diseases (NCZVED)
— HIV/AIDS, Viral Hepatitis, STD, and TB Prevention (NCHHSTP)
— Preparedness, Detection, and Control of Infectious Diseases (NCPDCID)

NCHHSTP houses a variety of programs under 6 “topics”:
— Sexually Transmitted Diseases
— HIV/AIDS
— Viral Hepatitis
— Tuberculosis
— Global AIDS
— BOTUSA (Botswana-USA).
[That “HIV/AIDS” and “Sexually Transmitted Diseases” are separate “topics” does not, regrettably, mean that the CDC has now acknowledged that HIV/AIDS is not sexually transmitted.]

Within (presumably) the “HIV/AIDS” topic is the Division of HIV/AIDS Prevention, which has published HIV/AIDS Surveillance Reports.

Within the Coordinating Center for Health Information and Service (CCHIS) reside three National Centers:
— Health Statistics (NCHS)
— Public Health Informatics (NCPHI) (has 5 divisions)
— Health Marketing (NCHM)
[For anyone who is not squeamish about bureaucratic and PR jargon, I recommend highly the explanation of what “health marketing” is (and if you can explain what the explanation means, please let me know)]

Evidently the publishers of the HIV/AIDS Surveillance Reports are quite a few bureaucratic steps away from the National Center for Health Statistics, which publishes the National Vital Statistics Reports (NVSR) and annual summaries of Health, United States (HUS). Perhaps that explains why the data in the Surveillance Reports differ so much from those in NVSR and HUS.

Take the instance of deaths in 2004 from “HIV disease”.

NVSR 56 #5, 20 November 2007, using “information from all death certificates filed in the 50 states and the District of Columbia”, lists by age group (in its Table 1) the numbers of recorded deaths, and the death rates per 100,000, for the ten leading causes of death in each group. “Human immunodeficiency virus (HIV) disease” appears among those ten leading causes only between ages 20 and 54: 160 deaths among 20-24-year-olds, 1468 among ages 25-34, 4826 among ages 35-44, and 4422 among ages 45-54.

However, numbers for some of the other age groups can be calculated, because the death rates for them are supplied in Health, United States, 2007 — With Chartbook on Trends in the Health of Americans (National Center for Health Statistics, Hyattsville, MD: 2007). Appendix I confirms what is said in NVSR: “Numbers of . . . deaths from the vital statistics system represent complete counts . . . . Therefore, they are not subject to sampling error”. Table 42 [also featured in an earlier post, “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008] is for deaths from HIV disease:

* Rates based on fewer than 20 deaths are considered unreliable and are not shown.

(Note again, under the heading of Table 42, “Data are based on death certificates”.)

These rates allow calculation of actual numbers of HIV-disease deaths for age groups from 5 through 84 years of age (column F, Table I below), because the NVSR gives not only numbers but also the corresponding rates for each age group, allowing calculation of the factor connecting rate and number, see column D. (The factor is independent of the particular disease but varies with age: it reflects how many individuals are within that age group in the whole population.) Together with the numbers already given in NVSR, this yields numbers of deaths for the whole range from 5 to 84 years of age, column G.
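The arithmetic behind the rate-to-count conversion can be sketched as follows. The death count of 1468 is taken from the NVSR figures quoted above; the rates used here are hypothetical placeholders, not the actual NVSR/HUS values:

```python
# Sketch of the calculation described above: NVSR supplies both a death
# count and a rate per 100,000 for certain age groups, so dividing count
# by rate recovers the "factor" (the age group's population in units of
# 100,000). That factor can then be applied to the rates HUS reports for
# the age groups NVSR omits. Rates below are hypothetical placeholders.

nvsr_deaths = 1468      # deaths reported for one age group (from NVSR)
nvsr_rate = 3.7         # hypothetical rate per 100,000 for the same group

factor = nvsr_deaths / nvsr_rate    # population in the group / 100,000

hus_rate = 0.5          # hypothetical HUS rate for a group NVSR omits
estimated_deaths = hus_rate * factor   # recovered death count

print(round(factor, 1), round(estimated_deaths))
```

The factor depends only on the age group's population, not on the disease, which is why a factor derived from one cause of death can be reused for any other rate in the same age group.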

Now compare those numbers with the estimates published in Table 7 of HIV/AIDS Surveillance Report, volume 18, “Cases of HIV infection and AIDS in the United States and Dependent Areas, 2006”, presenting data “reported to CDC through June 2007”:

For 2004, here is a comparison of the numbers from these two sources within CDC:

The estimates from the CDC are on average 21% greater than the actually recorded numbers. Moreover, the error varies with age group in a remarkably regular way, one that exaggerates the median age of death by more than 3 years.

Now, Table 7 in the Surveillance Report does have this caveat, in small print in a footnote to the Table: “These numbers do not represent reported case counts. Rather, these numbers are point estimates, which result from adjustments of reported case counts. The reported case counts have been adjusted for reporting delays and for redistribution of cases in persons initially reported without an identified risk factor, but not for incomplete reporting” [emphasis added]. Incomplete reporting for 2004 should hardly be a problem, however, in a publication that presents data “reported to CDC through June 2007”; nor would incomplete reporting vary with age group in this remarkable manner; it would be more random.

Such “adjustments” 3 and 4 years after the event are no rarity in these CDC HIV/AIDS publications. For example, deaths “reported” for the 1980s were “adjusted” downwards in wholesale fashion more than half a dozen years later, thereby obscuring the fact that the earlier data had shown deaths to be leveling off; see Table 33, p. 221 in The Origin, Persistence and Failings of HIV/AIDS Theory:

Note how “reported” deaths for the years through 1986 somehow decreased dramatically between the 1988 report and the 1989 report. Such re-writing of historical facts will be familiar to students of the former Soviet Union, but it is not normally found in scientific publications.

At any rate, CDC unapologetically—indeed, without admitting it or drawing attention to it—routinely publishes considerably revised “estimates”; for example (Table III), for deaths in 2002 as given in the 2005 and 2006 Surveillance Reports. Table 7 in the 2006 Report does not warn that numbers for as far back as 2002 are different from those for the same years in the 2005 Report.

The Technical Notes do warn: “Tabulations of deaths of persons with AIDS (Table 7) do not reflect actual counts of deaths reported to the surveillance system. Rather, the estimates are based on numbers of reported deaths, which have been adjusted for delays in reporting”.

The estimates may be based on reported deaths; but if so, then they are very loosely based on them indeed, since they differ by as much as 38% in some age groups, see Table II above. That adjustments from one year to the next are so similar in percentage terms for the various age groups (Table III); that the differences between actual counts and “estimates” vary in such regular fashion with age (Table II); and that the numbers given are “point estimates” all indicate that the estimates are arrived at by means of some sort of overarching algorithm, computer model, or graphical representation, with—presumably—periodic adjustment of some of the assumptions or parameters defining the model. However, when estimates, no matter how derived, are claimed to be “based on numbers of reported deaths”, one expects that the mode of estimating will be progressively refined over the years to bring the estimates closer to the actual numbers. That has evidently not been the case here: estimated “data” for deaths for 2004 are shockingly different from the reports based on death certificates (Table II).

Once again—or rather, as usual—HIV/AIDS “researchers” imply greater accuracy than is warranted. The “point estimates” in Table II differ from year to year by a couple of percent, so the numbers should never be written to more than 3 significant figures. When they differ from actual numbers as much as in Table III, even two significant figures give a false impression.

The overall description at the beginning of the Surveillance Report is also misleading: “Data are presented for cases of HIV infection and AIDS reported to CDC through June 2007. All data are provisional.” Nothing here about “estimates”, and the reader who scans without careful attention to fine-print footnotes and Technical Notes could easily believe—given that numbers are given to four and five significant figures—that these really are “reported” “data”, not computer garbage-output emanating from invalid models. Nor are readers referred to NVSR or HUS; the only mention of either is in the Technical Notes and does not refer to Table 7: “The population denominators used to compute these rates for the 50 states and the District of Columbia were based on official postcensus estimates for 2006 from the U.S. Census Bureau [24] and bridged-race estimates for 2006 obtained from the National Center for Health Statistics [25].”

Why would one publish estimates when actual numbers are reported by a sibling unit in the same bureaucracy? After all, death certificates are a legal requirement, and information from them should be as trustworthy as demographic data ever can be. Is it coincidental that the HIV/AIDS specialists always overestimate?

7 Responses to “CDC versus CDC: Which Data to Believe?”

  1. Martin said

    Hi Dr. Bauer, Great posting!
    Let’s say that the guys (and gals) who put the (so-called) information on HIV/AIDS together are honest. They believe that AIDS is a real, deadly, contagious disease. They are aware of the actual reported data, and they might say to themselves: “I believe that the deaths due to HIV/AIDS are underreported.” So they may think they are doing a service by putting together estimates that they believe reflect the “truth”. After all, a contagious disease that doesn’t have a cure shouldn’t be declining that much. Look at Africa — their population is being “decimated”! Obviously cognitive dissonance is at work here. As has been said before, when people see something that doesn’t fit their preconceived notions, they just make it up.

    Now, one of the things that always bothers me is when the cause of death is reported as “from complications due to AIDS”. Really? I’ve always been skeptical about that kind of report, because I like to know what actual diseases the person succumbed to. For instance, one of the most famous deaths from “AIDS” was Ryan White. Reporting his death as due to internal bleeding (hemophilia) and liver failure (AZT poisoning) wouldn’t do — the New York Times reported his death as due to complications from AIDS. So I question even the statistics from the CDC reporting what was on the death certificates.

  2. Henry Bauer said

    Martin:

    Yes, the reporting is done within the HIV/AIDS belief. I found, for example, a case where “HIV” was listed as the “underlying cause” in the death of a diabetic who was HIV+. Rebecca Culshaw found that Massachusetts records as AIDS deaths everyone who was known to be HIV+, even if they died in a traffic accident. States benefit from federal largesse in proportion to the number of AIDS cases they can report, which is an incentive to swell the numbers in every possible way; for example, see TRIMMING FACTS, INVENTING EPIDEMICS, 14 January 2008,
    https://hivskeptic.wordpress.com/2008/01/14/trimming-facts-inventing-epidemics/.

    My interpretation of the data is that HIV doesn’t lead to death, but death — or what will eventually bring it on — may well cause someone to test HIV+. I said a bit along these lines in “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008,
    https://hivskeptic.wordpress.com/2008/03/19/“hiv-disease”-is-not-an-illness/. And I’ll be writing more directly about it in future.

    But the fact that two sections within CDC put out starkly different “data” for the same phenomenon should trouble even people who cling to HIV/AIDS dogma.

  3. Martin said

    Apparently true believers can deal with contradictory information. This isn’t medicine or science. Did they even try to explain? Maybe they were hoping that their GIGO was just another study buried in all the other piles of GIGO and that no one except the denialists would care.

    Here’s a quote from Thomas Szasz’s The Meaning of Mind (p. 126):

    Some observations obtained in the course of recent neuroimaging studies of schizophrenics support the interpretation I am suggesting. Let us recall that Julian Jaynes claimed that the experience of hearing voices (auditory hallucination) is “just like hearing actual sound.” If that were so, the cerebral-physiological processes accompanying the hallucinating person’s experience would be similar to those accompanying normal hearing; which is exactly what researchers using neuroimaging technics to study brain activation in hallucinating patients expected to find. Instead, they found changes in the region of the brain activated during speaking. “Broca’s area is a surprise,” commented Jerome Engel, a neurologist at the University of California at Los Angeles, “since that’s where you make sounds, not where you hear them. I would have expected more activity in Wernicke’s area, which is where you hear.”
    Neuroscientists do not interpret this finding as supporting the view that the person who claims to be hearing voices is disavowing his (aggressive, erotic, grandiose) thoughts. Instead, they interpret it as evidence that schizophrenia is a brain disease. Thus, P. K. McGuire and his associates continue to refer to “brain regions involved in the production of hallucinations,” as though speech were a product of the brain, rather than of the person; speculate that their “observations are suggestive of a disruption of the cortico-cortical connectivity, which is thought to be a critical feature of the neuropathology of schizophrenia”; and conclude that hearing voices is “caused by a disordered monitoring of inner speech.”

  4. Macdonald said

    Prof Bauer, Martin,

    From the “Improving HIV Estimates” section in the latest global UNAIDS report:

    “. . . national population-based HIV surveys have found HIV prevalence to be approximately 20% lower than the prevalence among antenatal clinic attendees, in both rural and urban areas (Gouws et al., in press)”.

    This is what led to last year’s drastic down-revision of global HIV-prevalence estimates, and the 20% figure is already considered so reliable that it’s become part of the models, even for countries that don’t have national population-based surveys:

    “Some countries in sub-Saharan Africa have not conducted such surveys—notably Angola, Eritrea, Gambia, Guinea-Bissau, Mozambique, Namibia, Nigeria, Somalia, and Sudan. To develop the estimates included in this report, HIV prevalence data from antenatal clinic attendees in these countries have been adjusted downward to a level of approximately 0.8 times the prevalence found in antenatal clinic surveillance. The level of adjustment varies, based on the proportion of urban to rural populations within a country.”

    Here’s another, perhaps less well-known “miscalculation”, which I imagine would make Duesberg chuckle:

    “New research has also led to important revisions in the assumptions used in the models developed by UNAIDS and WHO. One such revision relates to estimates of HIV incidence and AIDS mortality. Central to these is an assumption about the average time people survive from HIV infection to death in the absence of antiretroviral treatment (Stover et al. in press). Longitudinal studies (Marston et al., 2007; Todd et al., 2007) indicate that, in the absence of such treatment, the estimated net median survival time after infection with HIV is 11 years (UNAIDS Reference Group on Estimates, Modelling and Projections, 2006), instead of the previously estimated 9 years (UNAIDS Reference Group on Estimates, Modelling and Projections, 2002).”

    One is reminded of Duesberg’s remark that the latency period assumed for HIV increases about one year with every passing year. Remarkably, 25 years into the HIV/AIDS epidemic, UNAIDS now explicitly calls life expectancy in “natural” HIV infection just that: an assumption.

    A two-year miscalculation of median survival time corresponds nicely to the 20% by which HIV prevalence was overestimated in the antenatal surveillance programs. And these are just two core assumptions out of dozens.

    Apart from that, it’s astounding that HIV has had the world’s eyes and wallets steadily focused on it for 25 years now, but the researchers still don’t know how long it takes for it to kill in various settings, or why.

    For instance, Marston et al., one of the studies informing UNAIDS’ report, has concluded that South African coal miners survive “HIV-infection” 4 years longer than Thais. The authors’ explanation for this discrepancy:

    “Survival appears to be significantly worse in Thailand where other, unmeasured factors may affect progression”.

    To be fair, the authors may have more explicit suggestions in the body of the text, but the abstract doesn’t hold out much hope for anything beyond guesswork. That is simply the stuff computer models are made of.

    http://data.unaids.org/pub/GlobalReport/2008/jc1510_2008_global_report_pp29_62_en.pdf

    http://www.aidsonline.com/pt/re/aids/abstract.00002030-200711006-00008.htm;jsessionid=LnQXCqyP3yn7pVdn2ZMWF8TpdBwPCcMQhypMykG8QTcLKlSbTGfm!-927161468!181195628!8091!-1

  5. Henry Bauer said

    MacDonald:

    Many thanks:

    1. Pregnant women are 25% more likely to test HIV+: confirms that “HIV+” is a non-specific marker of physiological stress.

    2. HIV+ South African coal miners live longer than Thais: HIV+ people of African ancestry die at older ages than others also in the United States, see Table A in “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008.
    https://hivskeptic.wordpress.com/2008/03/19/“hiv-disease”-is-not-an-illness/
    Reason: African ancestry is positively associated with tendency to test HIV+. Some fraction of dying people test HIV+, because that’s a marker of physiological stress and approaching death is often accompanied by physiological stress. Therefore people of African ancestry test HIV+ more often than others at all ages, old ages as well as young.

  6. Macdonald said

    Here’s another discrepancy courtesy, one presumes, of compartmentalized HIV science.

    It is estimated that there are between 600,000 and 1 million annual needlestick injuries in the US. It is safe to assume that most of the cases suspected of involving HIV-infected needles are reported or traced back, but let us settle for the low figure, 600,000, in the following calculation.

    It is further estimated that 2% of all needlestick injuries involve HIV-contaminated needles. That gives us 12,000 annual needlestick accidents presenting a risk of infection.

    The actual risk of acquiring HIV from any needlestick injury is put, by prospective studies, at 1 in every 300. This gives us 40 annual HIV infections in the US. Forty times twenty-five (years) is 1000 infections.

    Optional post-exposure prophylaxis was made available from quite early on, and one 1995 study has suggested that this might be as much as 79% effective in avoiding infection if administered in a timely manner. Let us say post-exposure prophylaxis has reduced the total number of infections by 70%.

    That gives us 300 cases of HIV infection from needle-stick injuries.

    The number of confirmed cases of occupationally acquired HIV infection (occupationally acquired includes percutaneous exposure) was by 2006 a grand total of 57. If we include all “possible cases”, the figure rises to 197. By no stretch is it possible to make the studies estimating the infectiousness of HIV and the actual numbers meet.
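The back-of-the-envelope arithmetic in the comment above can be laid out step by step, using exactly the inputs Macdonald cites:

```python
# Step-by-step version of the needlestick calculation in the comment above.
# All inputs are the estimates cited there, not independent data.

injuries_per_year = 600_000     # low end of the US annual needlestick estimate
hiv_share = 0.02                # fraction involving HIV-contaminated needles
risk_per_injury = 1 / 300       # infection risk per contaminated-needle injury
years = 25
pep_reduction = 0.70            # assumed effect of post-exposure prophylaxis

at_risk_per_year = injuries_per_year * hiv_share
infections_per_year = at_risk_per_year * risk_per_injury
expected_total = infections_per_year * years * (1 - pep_reduction)

print(round(at_risk_per_year), round(infections_per_year), round(expected_total))
# prints: 12000 40 300
```

Even this 300, built from the low end of every estimate, is well above the 197 "possible" cases, let alone the 57 confirmed ones.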

    Lest we should think that the US constitutes a statistical fluke, avert.org informs us that the grand total of occupationally acquired HIV infections in the UK as of Feb. 2008 was a whopping 5 cases.

    http://www.avert.org/needlestick.htm

  7. CathyVM said

    According to the CDC, about half of health-care workers do not complete a full month of post-exposure prophylaxis, because of the high levels of toxicity experienced by 75% of them (MMWR, 30 September 2005). To put that in context: health-care workers, presumably well aware of the horror that is AIDS, choose to risk the disease rather than be "sick" for one month. If you factor in this "non-compliance", the number of "infections" should be closer to 600.
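Her ballpark figure of 600 can be reproduced on one reading of the assumptions (this split is my reconstruction, not spelled out in the comment): of the roughly 1000 infections expected over 25 years before any prophylaxis, half of exposed workers complete PEP at the 79% efficacy cited earlier, while the non-completing half in effect get no protection:

```python
# One possible way to reach "closer to 600"; the 50/50 split between
# protected and unprotected workers is an assumed reading of the comment.

expected_without_pep = 1000   # 40 infections/year x 25 years, from the earlier comment
pep_efficacy = 0.79           # 1995 study figure cited earlier
completion_rate = 0.5         # CDC: about half complete the full month

protected = expected_without_pep * completion_rate * (1 - pep_efficacy)
unprotected = expected_without_pep * (1 - completion_rate)

print(round(protected + unprotected))
# prints: 605
```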
    The full prescribing data sheet for ritonavir states that fewer than one in ten people suffer the more common side effects, such as diarrhoea, abdominal pain and nausea, but a pharmacokinetic study in healthy volunteers found that 93% reported those effects. They found the effects so distressing (along with many subjects developing toxic liver changes) that the study was discontinued after only 9 days [1]. While the title suggests the volunteers received unusually high dosages, they are the dosages recommended on the prescribing data sheet.

    So "healthy" people tolerate the drugs less well than those at whom the bone has been well and truly pointed? While I don’t doubt that people who are convinced they have a deadly disease may put up with more discomfort in the real world, these are studies that specifically ask the subjects what effects they are experiencing. The huge discrepancy between these numbers suggests that somebody is telling lies.

    1. Shelton, M.J., et al. Pharmacokinetic and safety evaluation of high-dose combinations of fosamprenavir and ritonavir. Antimicrob Agents Chemother, 2006, 50(3): 928-34.
