This post is longer than I prefer, but I saw no good way to split it into parts. It explains that the way “HIV infections” and deaths from “HIV disease” vary with age and with race and over time constitutes a resounding disproof of HIV/AIDS theory.
A couple of years ago, I had come to the conclusion that the demographics of positive “HIV”-tests, data published largely by the Centers for Disease Control and Prevention (CDC), represent definitive proof that “HIV” is not an infection. Icing on that cake is the fact that “HIV” and “AIDS” are not correlated (again, in officially published statistics), as became clear to me while writing The Origin, Persistence and Failings of HIV/AIDS Theory (see chapter 9). Now I’ve found that a more direct line of proof lies in comparing the data on deaths from “HIV disease”—as the CDC has come to call it—with data from “HIV” tests.
In earlier blogs, I had argued that “HIV disease” is not an illness, citing among other things Table A below (see WORLD AIDS DAY . . ., 22 December 2007; “HIV DISEASE”, 28 December 2007; HOW TO TEST THEORIES . . ., 7 January 2008).
TABLE A (click in table for full size)
There I had waffled about how the racial disparities and sex differences in “HIV” deaths parallel those found on “HIV” tests, and how strange it is that blacks and Hispanics are more susceptible to “catching” HIV and yet survive to later ages than do whites or Asians or Native Americans equally suffering from “HIV disease”, and how all this supports the hypothesis that testing “HIV”-positive is a non-specific indication of some sort of physiological stress. But I had failed to grasp the significance of the fact that the age distribution of deaths from “HIV disease” reaches a maximum in people in the prime years of life, mid-thirties to early forties. That is the very opposite of how people react to infectious diseases, where everyone is about equally at risk of infection but the young and the old are most at risk of succumbing to it, from pneumonia, say, or influenza. So the variation with age of “HIV” deaths runs contrary to the way death rates from infectious diseases vary with age; and, for the same reason, it runs contrary to the way all-cause death rates vary with age (Table B).
TABLE B (click in table for full size)
Even death rates from chronic diseases—diabetes, say—or “diseases of old age”—heart and cardiovascular, say, or cancer—show the same trend, though the death rates at very young ages are much less prominent:
TABLE C (click in table for full size)
The all-cause death rates of people in their thirties or forties are comparatively low, between ¼ and ½ of the age-adjusted overall death-rate (Table B, 193.5 or 427 compared to 800.8). Nowhere have I found mention of an illness that is most life-threatening for people aged 35-44 or 45-54—except, of course, “HIV disease”.
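Those ratios are easily verified; here is a minimal Python sketch that simply redoes the division using the rates quoted above from Table B (nothing beyond the post’s own figures):

```python
# Rates (per 100,000) as quoted in the text from Table B.
rate_35_44 = 193.5   # all-cause death rate at ages 35-44
rate_45_54 = 427.0   # all-cause death rate at ages 45-54
overall = 800.8      # age-adjusted overall death rate

# Both ratios fall between 1/4 and 1/2, as stated above.
print(rate_35_44 / overall)  # ≈ 0.24
print(rate_45_54 / overall)  # ≈ 0.53
```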
One might quibble that the numbers in Table A are not rates for each of the given age-groups; but adjusting for the age distribution in the population makes little difference, as shown by the age distribution of reported death-rates from “HIV disease” (Table D, which is Table 42, p. 236, in “National Center for Health Statistics: Health, United States, 2007 with Chartbook on Trends in the Health of Americans”, Hyattsville, MD, 2007): for males as for females and in every calendar year, the highest rate of death from “HIV disease” comes at ages 35-44, with the single exception of females in 1987, when it came at ages 25-34.
TABLE D (click in table for full size)
* In Table D, an asterisk marks rates based on fewer than 20 deaths, which are considered unreliable.
The failure of HIV/AIDS theory is demonstrated not only by this incongruous age-dependence of death rates. Note how constant over the years is the shape of this age distribution. While the magnitudes of the rates go up from 1987 to 1995 and then down, they do so in similar fashion in each age group. By contrast, HIV/AIDS theory would have predicted high death-rates at relatively early ages in 1987 and before, when there were no treatments for AIDS and victims were dying within months, or at most a year or two, after diagnosis; then—HIV/AIDS theory would have it—the highest death-rates would have moved steadily to older ages as treatments were introduced, particularly after the supposedly revolutionary introduction of “life-saving” HAART in the mid-1990s and the development of continually better individual drugs. But there is no such trend; the actual data show no change at all, over the years, in the age range within which people are most at risk of dying of “HIV disease”. For two decades, the greatest risk of dying from “HIV disease” has been experienced by people between 35 and 44 years of age.
Also to be noted is that from 1987 into the mid-1990s, every age-group saw a great increase in death rates. That was the era of AZT monotherapy, initially deploying doses so high that even the mainstream acknowledged their toxicity by cutting them back drastically. Discontinuation of monotherapy in favor of “cocktails” then allowed the death rates to fall back again; but, as mentioned above, there is no indication at all that years of survival were increased by introduction of HAART as monotherapy was phased out.
(After writing this I was struck by a sinking feeling that, like increasing arrays of HIV/AIDS numbers issued by the CDC, Table D might have been drawn from computer models, which would explain its astonishing regularity. Then I noticed the phrase in fine print just below the Table’s header, “Data are based on death certificates”, and I was reassured, at least provisionally.)
That being “HIV”-positive is not an illness is, of course, the reason that African Americans survive “HIV disease” to later ages than do white, Asian, and Native Americans (Table A), one of the points to which I had drawn attention earlier (7 January). Black people test “HIV”-positive more often than others under all circumstances, in both sexes, and at all ages (The Origin, Persistence and Failings of HIV/AIDS Theory, Figures 13-17, pp. 53-6), so when they die they still test positive more often at every age, even to an appreciable extent at ages where others test positive so rarely as not to show up in the statistics (above 55 for men and above 45 for women, Table A).
These variations with age of death rates from “HIV disease” run exactly as would be expected on the hypothesis that testing “HIV”-positive is a non-specific response by the immune system to some sort of physiological stress and that, for a given challenge to health, the strength of that immune response varies according to the capacity of the individual’s immune system (The Origin, Persistence and Failings of HIV/AIDS Theory).
From the teens into the “golden years”, external health challenges do not (on average, overall) vary systematically with age, so on average the variation with age of the tendency to test “HIV”-positive reflects the capabilities of the immune system, which tend to be at their best in the middle years of life:
FIGURE 1 (click in figure for full size)
Health challenges are considerably higher, though, at very early ages, because newborns experience the stress of birth and because young children meet many health challenges for the first time as their immune systems are just learning to cope with them. So the graph rises to the left not because the immune system is fully capable, as in the middle years, but because the stresses and health challenges encountered in those years are exceptionally great.
But why should deaths from “HIV disease” parallel the tendency to test “HIV”-positive in the middle years if that tendency represents a capable immune-system response?
Because of the manner in which the CDC defines “HIV disease”.
After “HIV” had become accepted as the cause of “AIDS”, an increasing number of diseases were included by the CDC as “AIDS-defining” just because a significant number of people with those diseases were reported as testing “HIV”-positive. As Rebecca Culshaw noted, this led to the extraordinary situation that the death from any cause of a person known to be “HIV”-positive would be reported as a death from “HIV disease”—even when the immediate cause of death was heart attack, liver failure, CMV infection, or even suicide, a car accident, or drowning (“Science Sold Out”, 2007, p. 30, citing Massachusetts Department of Health, 2002). (There may be a financial incentive to do this: federal funds to “fight HIV/AIDS” are apportioned to states and cities according to the perceived relative impacts of HIV/AIDS.)
Now: illness and death are in and of themselves often associated with positive “HIV”-tests (after all, they represent extreme challenges to health). Hospital patients (admitted for reasons not connected with HIV/AIDS) test “HIV”-positive at between 0.1 and 7.6% (The Origin, Persistence and Failings of HIV/AIDS Theory, Table 3 p. 25; Table 23 p. 81), and moreover the tests vary with age as in the diagram above (ibid., Table 26 p. 98); emergency-room patients tested at 5-6% (ibid., pp. 48, 85); “HIV”-positive rates in autopsies were reported in one instance as between 1.9 and 3.7% and increasing in proportion to the degree of death-causing trauma, and in another instance at 18% with no indications of AIDS (ibid., p. 85). Since even accident and trauma victims tend to test “HIV”-positive, as well as people ill for a wide variety of other reasons, there is then a definite probability that anyone who is seriously ill will test “HIV”-positive; and so anyone who dies for any reason may well have tested “HIV”-positive while in hospital, or may well do so in autopsy, with a probability of a few percent or more. That is an order of magnitude higher than the “normal” rate in the US population as a whole, which is only a few per thousand or less.
The maximum death-rates from “HIV disease” in 2004 (Table D) were 10.9 (per 100,000) at ages 35-44 and 10.6 at ages 45-54. The all-cause death-rates for those age groups were (Table B) 194 and 427 respectively. Thus deaths from “HIV disease” represented respectively 5.6% (10.9/194) and 2.5% (10.6/427) of all deaths in those age groups, quite comparable to the frequency of positive “HIV”-tests among non-AIDS hospital patients and emergency-room patients and in autopsies. In other words, deaths from “HIV disease” are merely that fraction of all deaths in which the non-specific “HIV”-positive reaction happened to turn up in response to the health challenge that had caused the death.
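The arithmetic in that paragraph can be redone in a few lines of Python; this sketch uses only the 2004 rates quoted above from Tables D and B:

```python
# Death rates per 100,000 in 2004, as quoted from Tables D and B.
hiv_disease = {"35-44": 10.9, "45-54": 10.6}    # deaths from "HIV disease"
all_cause = {"35-44": 194.0, "45-54": 427.0}    # all-cause death rates

# Share of all deaths in each age group attributed to "HIV disease".
for age, rate in hiv_disease.items():
    share = rate / all_cause[age] * 100
    print(f"{age}: {share:.1f}% of all deaths")
# prints 5.6% for ages 35-44 and 2.5% for ages 45-54
```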
So death rates from “HIV disease” parallel the age variation of “HIV” tests simply because all deaths of “HIV”-positives are called deaths from “HIV disease”, and because “HIV” tests are so highly non-specific as to react to many life-threatening conditions. And that is also why the age variation of death rates ascribed to “HIV disease” is (for chronic diseases) unlike or (for infectious diseases) opposite to the variation with age of death rates from every other malady.
Figure 1 is schematic, not quantitative. I had mentioned in connection with its first appearance (ibid., p. 26) that the actual “middle” age of the peak appears to vary somewhat with sex and with race. To compare the actual years of that peak on “HIV” tests with the peak years of “HIV” deaths, I wanted “HIV”-test data for the population as a whole, since the death-data in Table A are also for the population as a whole. The most appropriate data-sets are those, totaling nearly 10,000,000 tests, published in 1995-8 by CDC for all public testing-sites (clinics for TB, HIV, STD, drugs, family planning, prenatal care, and more, as well as prisons and colleges and some reports from private medical practices). Pooling the actual numbers for each of those four years and making the appropriate calculations delivers the following results:
TABLE E (click in table for full size)
The highlighted cells and the “XXX” overlap or straddle in 12 of 13 cases; there is a good quantitative correspondence between the ages of maximum probability of testing “HIV”-positive and the ages of maximum rate of dying from “HIV disease”. But under HIV/AIDS theory, infection by HIV is supposed to be followed by a “latent period” of about 10 years: the peak ages for deaths from “HIV disease” should be a decade or more later than the peak ages for “HIV” infection, rather than overlapping in the same age-ranges. Furthermore, the difference between age of “infection” and age of death should have increased during the years—from the mid-1990s on—when “life-saving” antiretroviral treatments supposedly extended the life spans of “HIV”-positive people by a significant amount. Yet in 2002-4 (Tables A and D), the peak ages for “HIV” infection and for deaths from “HIV disease” are virtually the same as the ages where infections were most common in 1995-98, even though most people “infected” in 1995-98 should have survived well beyond 2005-8!
All this is inexplicable under HIV/AIDS theory, whereas it comports perfectly with the alternative theory that testing “HIV”-positive denotes physiological stress.
HIV/AIDS theory lacks substantive legs to stand on. “HIV” is not any cause of illness. Testing “HIV”-positive signals the presence of some sort of challenge to health. The tendency to test “HIV”-positive depends on what the health challenge is, and on how strongly an individual tends to respond.