HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


Deaths from “HIV disease”: Why has the median age drifted upwards?

Posted by Henry Bauer on 2009/02/18

In “HIV, AIDS, and age: HIV/AIDS theory is wrong” [23 January 2009], I pointed to the similarity of median ages for HIV tests, for “new infections”, for AIDS diagnoses, and for deaths from “HIV disease”, arguing that this contradicts the widely accepted theory that, after a “latent period” of about 10 years, HIV causes AIDS and eventual death. Not only that: all those ages fall within the range of 35-45 years, whereas the risk of incurring sexually transmitted infections peaks at ages younger by a couple of decades, and the risk of death from any illness or disease rises roughly exponentially with age from around 30. Furthermore, the age distributions of deaths are narrower than the others [No HIV “latent period”: dotting i’s and crossing t’s, 21 September 2008], the very opposite of what HIV/AIDS theory demands. That theory is simply wrong, on these several counts as well as others: for example, “HIV” test data in the United States demonstrate that those tests do not track something infectious [The Origin, Persistence and Failings of HIV/AIDS Theory].

All the mentioned median ages are similar to one another, but all of them (and of course the age distributions) have drifted upward over the years. Defenders of the orthodoxy have claimed that the upward drift in median age of death from “HIV disease” illustrates a life-extending benefit from increasingly better treatment over the years, for example:

“The median age at death due to HIV disease increased almost linearly from 36 years in 1987 to 39 years in 1995, and to 45 years in 2005. This is a reflection of the postponement to older ages of HIV-attributable deaths that were not entirely prevented by improved treatment. The median age at death due to HIV disease varied little by racial/ethnic groups”.

That statement is an annotation to Figure 1:


This suggested explanation for the upward drift of the median age of deaths ignores that the median ages have also drifted upward (1) for first positive “HIV” tests, (2) for new diagnoses of “AIDS”, and therefore (3) for the surviving population of PWA (people with AIDS):


The actual numbers for the data in Figure 2 are given in Table 1:

I don’t know why the CDC graph begins at 1987 rather than 1982, which was the first year for which CDC published deaths by age group. In any case, there is not an “almost linear” increase since 1987, as the numbers in Table 1 show. From 1982 to 1987, the increase was ~0.16 years per year (YPY). If one chooses the interval from 1982 to ca. 1990, it was a bit bigger, ~0.2 YPY; to the mid-1990s, it was a bit bigger again, approaching 0.3 YPY; for the whole period 1982 to 2004, it was ~0.42 YPY. In other words, the rate of upward drift was increasing over the years; and it was already ~0.16 YPY before there was any antiretroviral therapy. For the pre-HAART, largely AZT period 1987 to 1993, the upward drift was ~0.37 YPY; for the HAART era (1996 on), it’s been ~0.64 YPY.

One might then be tempted to credit AZT with a ~0.21 YPY benefit (0.37 – 0.16), and HAART with an additional ~0.27 YPY (0.64 – 0.37), with respect to “deaths that were not entirely prevented by improved treatment”. But even so minimal a claim ignores the fact that the ages at which people were testing positive, and the ages at which they were being diagnosed as having AIDS, had also been drifting upward at somewhat comparable rates. The population of PWA is selected by AIDS diagnoses. As that population ages, deaths within it will also occur at increasingly older ages (just as, in the population at large, the expected age at death rises among those who survive to any given age).
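The drift-rate arithmetic above can be sketched in a few lines. This is a minimal illustration only: the interval endpoints are hypothetical median ages chosen to reproduce the quoted rates, since the actual Table 1 values are not repeated here.

```python
# Sketch of the years-per-year (YPY) drift arithmetic described above.
# The median ages below are illustrative placeholders consistent with
# the quoted drift rates, NOT the actual Table 1 values.

def ypy(age_start, age_end, year_start, year_end):
    """Upward drift of median age at death, in years per year."""
    return (age_end - age_start) / (year_end - year_start)

pre_treatment = ypy(35.0, 35.8, 1982, 1987)  # ~0.16 YPY, before any antiretrovirals
azt_era       = ypy(36.0, 38.2, 1987, 1993)  # ~0.37 YPY, AZT monotherapy period
haart_era     = ypy(39.5, 44.6, 1996, 2004)  # ~0.64 YPY, HAART era

# The "benefit" one might naively credit to each treatment era is the
# increase in drift rate over the preceding era:
azt_credit   = azt_era - pre_treatment   # ~0.21 YPY
haart_credit = haart_era - azt_era       # ~0.27 YPY
print(round(azt_credit, 2), round(haart_credit, 2))
```

The point of the sketch is only that these "credits" are differences between drift rates, and that a nonzero drift rate was already present before any treatment existed.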

The upward drift in median age of those testing “HIV-positive” was ~0.4 YPY between 1995 and 2004, among the roughly 2 million people tested annually. That was not significantly owing to any trend to test older people, for the median age of those being tested increased by only ~0.1 YPY in that period, leaving ~0.3 YPY attributable to whatever was contributing to an upward drift in age of those testing “HIV-positive”. That might be owing to changes in the tests themselves or to changes in the nature of the tested population, or of course both.

Now, the tendency to test “HIV-positive” varies somewhat by sex and race, and so does the median age at which people test positive. As it happens, the median ages at which blacks test “HIV-positive” are about a year higher than the median ages at which whites test “HIV-positive”; and for both blacks and whites, the median age at which men test positive is a couple of years higher than the median age at which women test positive:


Now, the sex and race proportions of those being tested changed over the years. In 1995, 1.39 million females were tested and 1.10 million males, a ratio of ~1.25, about 25% more females than males; in 2004 it was 947,000 females and 933,000 males, a ratio of only 1.02, nearly equal numbers of males and females. Since the proportion of men among those tested increased significantly, and the median age for testing positive is greater for men than for women (by about 2 years, see Table 2), this change in sex composition by itself pushed the overall median age for testing positive upward.
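A toy calculation can illustrate how a change in sex composition alone moves the overall median. The age distributions below are hypothetical; only the ~2-year male/female gap in median age (Table 2) and the 1995 and 2004 tested-sex counts come from the text.

```python
# How a shift in the male/female mix of those tested can, by itself,
# nudge the overall median age of positive tests upward. The age
# distributions are hypothetical; only the ~2-year sex gap in median
# age and the tested-sex ratios (1995 vs. 2004) come from the text.

def interpolated_median(bins, width=10):
    """Median from {bin_start: count} age bins, linearly interpolated."""
    total = sum(bins.values())
    running = 0.0
    for start in sorted(bins):
        if running + bins[start] >= total / 2:
            return start + width * (total / 2 - running) / bins[start]
        running += bins[start]

# Hypothetical positive tests per 1000, by age group (25-34, 35-44, 45-54)
women = {25: 300, 35: 500, 45: 200}  # median 39.0
men   = {25: 200, 35: 500, 45: 300}  # median 41.0, ~2 years older

def mixed(f_share):
    """Combined bins when women make up f_share of positive tests."""
    return {a: women[a] * f_share + men[a] * (1 - f_share) for a in women}

m_1995 = interpolated_median(mixed(1.39 / (1.39 + 1.10)))  # women-heavy mix
m_2004 = interpolated_median(mixed(947 / (947 + 933)))     # near parity
print(round(m_1995, 2), round(m_2004, 2))  # median rises with male share
```

Even with unchanged per-sex distributions, the combined median drifts upward of order a tenth of a year as the male share rises, which is the compositional effect described above.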

Furthermore, the proportion of blacks among those tested increased significantly between 1995 and 2004: in 2004, only 36% of those tested at public sites were white and 39% black, whereas in 1995 it had been 49% white and only 33% black. That change is consistent with, and perhaps a consequence of, the increasingly pervasive shibboleth that HIV/AIDS has, in the United States, become a disease of the black community, spurring concerted efforts for more comprehensive testing in that community. Since blacks test positive at median ages greater by about a year (Table 2) than the median age at which whites test positive, the changing racial composition of those tested adds a further upward drift in the median age of those testing positive, and thereby of the PWA population as a whole.

The most significant change in the population of PWA, however, was one that began in 1993, when the definition of “AIDS” was broadened to include healthy, asymptomatic individuals if they were “HIV-positive” and had CD4 counts below 200. Thus the average level of health among PWA improved from 1993 on, and one would expect the median age at death within that group to drift upward at an increasing rate. That’s what the data indeed show: the rate of increase was ~0.3 YPY from 1982 to 1993, and since 1999 it’s been ~0.6 YPY (an international re-definition of “AIDS” in 1998 makes numbers from 1997-99 less comparable to others).

So the median age of death from “HIV disease” experienced an upward drift because the median age of the population of PWA, from which those deaths are drawn, drifted upward owing to the changing composition of that population in terms of age, sex, and race. In other words, the changing composition of the population of PWA contributed indirectly to an upward drift in median age of death.

In addition to that, however, the changing composition of the population also contributed directly to an increase in the median age of death: “HIV disease” mortality among blacks is greater than among whites, and blacks also die at older ages (Table 3).

Data for intermediate years are quite similar (before 1999, reports were not in the same format). At any rate, compare male whites with male blacks, and white females with black females: in both cases, the maximum rate of death is at an older age among blacks than among whites, and in both cases the maximum rates of death are considerably greater among blacks than among whites — by a factor of ~7 with males and ~12 with females.

Since the proportion of blacks among PWA increased over the years, and since black PWA died at a greater rate than white PWA, and at older ages, the overall median age of deaths “from HIV disease” drifted upward even more than the upward drift predicted solely by the greater proportion of blacks entering the population of PWA.

So: Does the upward drift in age of death reflect some lingering benefit from HAART, as the Centers for Disease Control and Prevention suggested?

Not at all. The changing population of PWA and the different characteristics of blacks and whites suffice to explain the upward drifts of median ages of death as well as of PWA.

Posted in antiretroviral drugs, HIV and race, HIV tests, HIV varies with age, HIV/AIDS numbers, M/F ratios | 6 Comments »


Posted by Henry Bauer on 2008/03/19

This post is longer than I prefer, but I saw no good way to split it into parts. It explains that the way “HIV infections” and deaths from “HIV disease” vary with age and with race and over time constitutes a resounding disproof of HIV/AIDS theory.


A couple of years ago, I had come to the conclusion that the demographics of positive “HIV”-tests, data published largely by the Centers for Disease Control and Prevention (CDC), represent definitive proof that “HIV” is not an infection. Icing on that cake is the fact that “HIV” and “AIDS” are not correlated—again, in officially published statistics—as became clear to me while writing The Origin, Persistence and Failings of HIV/AIDS Theory (see chapter 9). Now I’ve found that a more direct line of proof lies in comparing the data on deaths from “HIV disease”—as the CDC has come to call it—with data from “HIV” tests.

In earlier blogs, I had argued that “HIV disease” is not an illness, citing among other things Table A below (see WORLD AIDS DAY . . ., 22 December 2007; “HIV DISEASE”, 28 December 2007; HOW TO TEST THEORIES . . ., 7 January 2008).

TABLE A

There I had waffled about how the racial disparities and sex differences in “HIV” deaths parallel those found on “HIV” tests, how strange it is that blacks and Hispanics are more susceptible to “catching” HIV and yet survive to later ages than do whites or Asians or Native Americans equally suffering from “HIV disease”, and how all this supports the hypothesis that testing “HIV”-positive is a non-specific indication of some sort of physiological stress. But I had failed to grasp the significance of the fact that the age distribution of deaths from “HIV disease” reaches a maximum in the prime years of life, mid-thirties to early forties. That is the very opposite of the pattern for infectious diseases, where everyone is about equally at risk of infection but the young and the old are most at risk of succumbing, to pneumonia, say, or influenza. The variation with age of “HIV” deaths is thus the opposite of how death rates from infectious diseases vary with age, and, for the same reason, the opposite of how all-cause death rates vary with age (Table B).

TABLE B


Even death rates from chronic diseases—diabetes, say—or “diseases of old age”—heart and cardiovascular, say, or cancer—show the same trend, though the death rates at very young ages are much less prominent:

TABLE C


The all-cause death rates of people in their thirties or forties are comparatively low, between ¼ and ½ of the age-adjusted overall death-rate (Table B, 193.5 or 427 compared to 800.8). Nowhere have I found mention of an illness that is most life-threatening for people aged 35-44 or 45-54—except, of course, “HIV disease”.

One might quibble that the numbers in Table A are not rates for each of the given age-groups; but adjusting for the age distribution in the population makes little difference, as shown by the age distribution of reported death-rates from “HIV disease” (Table D, which is Table 42, p. 236, in “National Center for Health Statistics: Health, United States, 2007 with Chartbook on Trends in the Health of Americans”, Hyattsville, MD, 2007): for males as for females and in every calendar year, the highest rate of death from “HIV disease” comes at ages 35-44, with the single exception of females in 1987, when it came at 25-34.

TABLE D


* In Table D, an asterisk marks rates based on fewer than 20 deaths, which are considered unreliable.

The failure of HIV/AIDS theory is demonstrated not only by this incongruous age-dependence of death rates. Note how constant over the years is the shape of this age distribution. While the magnitudes of the rates go up from 1987 to 1995 and then down, they do so in similar fashion in each age group. By contrast, HIV/AIDS theory would have predicted high death-rates at relatively early ages in 1987 and before, when there were no treatments for AIDS and victims were dying within months, or at most a year or two, after diagnosis; then—HIV/AIDS theory would have it—the highest death-rates would have moved steadily to older ages as treatments were introduced, and particularly after the supposedly revolutionary introduction of “life-saving” HAART in the mid-1990s and the development of continually better individual drugs. But there is no such trend; the actual data show no change at all, over the years, in the age range within which people are most at risk of dying of “HIV disease”. For two decades, the greatest risk of dying from “HIV disease” has been experienced by people between 35 and 44 years of age.

Also to be noted is that from 1987 into the mid-1990s, every age-group saw a great increase in death rates. That was the era of AZT monotherapy, initially deploying doses so high that even the mainstream acknowledged their toxicity by cutting them back drastically. Discontinuation of monotherapy in favor of “cocktails” then allowed the death rates to fall back again; but, as mentioned above, there is no indication at all that years of survival were increased by introduction of HAART as monotherapy was phased out.

(After writing this I was struck by a sinking feeling that, like increasing arrays of HIV/AIDS numbers issued by the CDC, Table D might have been drawn from computer models, which would explain its astonishing regularity. Then I noticed the phrase in fine print just below the Table’s header, “Data are based on death certificates”, and I was reassured—at least provisionally.)


That “HIV-positive” is not an illness is, of course, the reason that African Americans survive “HIV disease” to later ages than do white, Asian, and Native Americans (Table A), one of the points to which I had drawn attention earlier (7 January). Black people test “HIV”-positive more often than others under all circumstances and in both sexes and at all ages (The Origin, Persistence and Failings of HIV/AIDS Theory, Figures 13-17, pp. 53-6), so when they die they still test positive more often at every age, even to an appreciable extent at ages where others test positive so rarely as not to show up in the statistics (above 55 for men and above 45 for women, Table A).

These variations with age of death rates from “HIV disease” run exactly as would be expected on the hypothesis that testing “HIV”-positive is a non-specific response by the immune system to some sort of physiological stress and that, for a given challenge to health, the strength of that immune response varies according to the capacity of the individual’s immune system (The Origin, Persistence and Failings of HIV/AIDS Theory).

From the teens into the “golden years”, external health challenges do not (on average, overall) vary systematically with age, so on average the variation with age of the tendency to test “HIV”-positive reflects the capabilities of the immune system, which tend to be at their best in the middle years of life:



Health challenges are considerably higher, though, at very early ages, because newborns experience the stress of birth and because young children meet many health challenges for the first time as their immune systems are just learning to cope with them. So the graph rises to the left not because the immune system is fully capable, as in the middle years, but because the stresses and health challenges encountered in those years are exceptionally great.

But why should deaths from “HIV disease” parallel the tendency to test “HIV”-positive in the middle years if that tendency represents a capable immune-system response?

Because of the manner in which the CDC defines “HIV disease”.

After “HIV” had become accepted as the cause of “AIDS”, an increasing number of diseases were included by the CDC as “AIDS-defining” just because a significant number of people with those diseases were reported as testing “HIV”-positive. As Rebecca Culshaw noted, this led to the extraordinary situation that the death from any cause of a person known to be “HIV”-positive would be reported as a death from “HIV disease”—even when the immediate cause of death was heart attack, liver failure, CMV infection, or even suicide, a car accident, or drowning (“Science Sold Out”, 2007, p. 30, citing Massachusetts Department of Health, 2002). (There may be a financial incentive to do this: federal funds to “fight HIV/AIDS” are apportioned to states and cities according to the perceived relative impacts of HIV/AIDS.)

Now: illness and death are in and of themselves often associated with positive “HIV”-tests (after all, they represent extreme challenges to health). Hospital patients (admitted for reasons not connected with HIV/AIDS) test “HIV”-positive at between 0.1 and 7.6% (The Origin, Persistence and Failings of HIV/AIDS Theory, Table 3 p. 25; Table 23 p. 81); and moreover the tests vary with age as in the diagram above (ibid., Table 26 p. 98); emergency-room patients tested at 5-6% (ibid., pp. 48, 85); “HIV”-positive rates in autopsies were reported in one instance as between 1.9 and 3.7% and increasing in proportion to the degree of death-causing trauma, and in another instance at 18% with no indications of AIDS (ibid., p. 85). Since even accident and trauma victims tend to test “HIV”-positive, as well as people ill for a wide variety of other reasons, there is then a definite probability that anyone who is seriously ill will test “HIV”-positive, and so anyone who dies for any reason may well have tested “HIV”-positive while in hospital, or may well do so in autopsy, with a probability of a few percent or more; that’s much higher than the “normal” rate in the US population as a whole, which is an order of magnitude lower at a few per thousand or less.

The maximum death-rates from “HIV disease” in 2004 (Table D) were 10.9 (per 100,000) at ages 35-44 and 10.6 at ages 45-54. The all-cause death-rates for those age groups were (Table B) 194 and 427 respectively. Thus deaths from “HIV disease” represented respectively 5.6% (10.9/194) and 2.5% (10.6/427) of all deaths in those age groups, quite comparable to the frequency of positive “HIV”-tests among non-AIDS hospital patients and emergency-room patients and in autopsies. Thus deaths from “HIV disease” are merely that fraction of all deaths in which the non-specific “HIV”-positive reaction happened to turn up in response to the health challenge that had caused the death.
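The fraction-of-all-deaths arithmetic can be checked directly from the quoted rates:

```python
# The share of all deaths attributed to "HIV disease" in the two peak
# age groups, using the rates quoted above from Tables B and D
# (deaths per 100,000 population per year).

hiv_rate = {"35-44": 10.9, "45-54": 10.6}    # "HIV disease" deaths, 2004 (Table D)
all_rate = {"35-44": 194.0, "45-54": 427.0}  # all-cause deaths (Table B)

for group in hiv_rate:
    share = hiv_rate[group] / all_rate[group]
    print(f"{group}: {share:.1%} of all deaths attributed to 'HIV disease'")
```

The resulting shares, a few percent, are what the argument compares against the few-percent rates of positive “HIV” tests among hospital, emergency-room, and autopsy populations.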

So death rates from “HIV disease” parallel the age variation of “HIV” tests simply because all deaths of “HIV”-positives are called deaths from “HIV disease”, and because “HIV” tests are so highly non-specific as to react to many life-threatening conditions. And that is also why the age variation of death rates ascribed to “HIV disease” is (for chronic diseases) unlike or (for infectious diseases) opposite to the variation with age of death rates from every other malady.

Figure 1 is schematic, not quantitative. I had mentioned in connection with its first appearance (ibid., p. 26) that the actual “middle” age of the peak appears to vary somewhat with sex and with race. To compare the actual years of that peak on “HIV” tests with the peak years of “HIV” deaths, I wanted “HIV”-test data for the population as a whole, since the death-data in Table A are also for the population as a whole. The most appropriate data-sets are those, totaling nearly 10,000,000 tests, published in 1995-8 by CDC for all public testing-sites (clinics for TB, HIV, STD, drugs, family planning, prenatal care, and more, as well as prisons and colleges and some reports from private medical practices). Pooling the actual numbers for each of those four years and making the appropriate calculations delivers the following results:

TABLE E


The highlighted cells and the “XXX” overlap or straddle in 12 of 13 cases; there is a good quantitative correspondence between the ages of maximum probability of testing “HIV”-positive and the ages of maximum rate of dying from “HIV disease”. But under HIV/AIDS theory, infection by HIV is supposed to be followed by a “latent period” of about 10 years: the peak ages for deaths from “HIV disease” should be a decade or more later than the peak ages for “HIV” infection, rather than overlapping in the same age-ranges. Furthermore, the difference between age of “infection” and age of death should have increased during the years—from the mid-1990s on—when “life-saving” antiretroviral treatments supposedly extended the life spans of “HIV”-positive people by a significant amount. Yet in 2002-4 (Tables A and D), the peak ages for “HIV” infection and for deaths from “HIV disease” are virtually the same as the ages where infections were most common in 1995-8, even though most people “infected” in 1995-98 should have survived well beyond 2005-8!
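The comparison being made here can be sketched as a simple bracket check. The age brackets below are illustrative stand-ins for the peaks read off Tables A, D, and E, not the tables themselves.

```python
# Under HIV/AIDS theory, the peak age bracket for deaths should sit
# ~10 years above the peak bracket for positive tests; the tables
# instead show the two brackets overlapping. Bracket values here are
# illustrative stand-ins for the peaks in Tables A, D, and E.

LATENT_YEARS = 10  # the "latent period" posited by HIV/AIDS theory

def brackets_overlap(a, b):
    """True if two (low, high) age brackets share any ages."""
    return a[0] <= b[1] and b[0] <= a[1]

peak_positive_tests = (35, 44)  # ages of maximum probability of testing positive
peak_hiv_deaths     = (35, 44)  # ages of maximum death rate from "HIV disease"

# What the theory predicts for the death peak:
predicted_deaths = (peak_positive_tests[0] + LATENT_YEARS,
                    peak_positive_tests[1] + LATENT_YEARS)

print(brackets_overlap(peak_positive_tests, peak_hiv_deaths))  # observed: True
print(brackets_overlap(predicted_deaths, peak_hiv_deaths))     # predicted: False
```

The observed overlap is what the highlighted cells in Table E show; the predicted non-overlap is what a 10-year latent period would require.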

All this is inexplicable under HIV/AIDS theory, whereas it comports perfectly with the alternative theory that testing “HIV”-positive denotes physiological stress.

HIV/AIDS theory lacks substantive legs to stand on. “HIV” is not any cause of illness. Testing “HIV”-positive signals the presence of some sort of challenge to health. The tendency to test “HIV”-positive depends on what the health challenge is, and on how strongly an individual tends to respond.

Posted in HIV absurdities, HIV and race, HIV as stress, HIV does not cause AIDS, HIV tests, HIV varies with age, HIV/AIDS numbers | 5 Comments »
