HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS


HAART saves lives — but doesn’t prolong them!?

Posted by Henry Bauer on 2008/09/17

Death rates are down, yet AIDS patients are not living longer! Why not?

(This is a long post, and includes at least one Table that is too large to be viewed conveniently in the same window as the text. If you prefer to read it as a pdf, here it is: haartdoesnt-prolong-lives)

In the early 1980s, a diagnosis of “AIDS” typically had been followed by death within a year or two. At that time, diagnosis was on the basis of Kaposi’s sarcoma or of manifest opportunistic fungal infections — Pneumocystis carinii pneumonia or candidiasis.

Following the adoption of “HIV-positive” as a necessary criterion for an AIDS diagnosis, an increasing range of non-opportunistic infections and other illnesses came to be included as “AIDS-defining” (for instance, tuberculosis, wasting, cervical cancer) — see Table 1; the most consequential changes came in 1987 and in 1993. The only basis for these additions was that people with those illnesses were quite often “HIV-positive”; in other words, there were correlations with “HIV-positive” status, not any proof that “HIV encephalopathy”, “HIV wasting disease”, or the other additions to the list of “AIDS-defining” conditions were caused by “HIV”. Indeed, there could be no such proof, since mechanisms by which “HIV” could cause illness have never been demonstrated; they remain to this day a matter for speculation — even over the central issue of how HIV (supposedly) kills immune-system cells. An absurd consequence of these re-definitions, often cited by HIV/AIDS skeptics, is that a person suffering indisputably from tuberculosis (say) might or might not be classed as an HIV/AIDS patient, depending solely on “HIV” tests.

Table 1

(from Nakashima & Fleming, JAIDS 32 [2003] 68-85; numbers in parentheses after the dates refer to sources cited in that article)

As “AIDS” was being diagnosed increasingly among people less desperately ill than the original AIDS victims, survival time after diagnosis became longer.

The 1993 change extended the umbrella of “AIDS patient” to cover people with no manifest symptoms of ill health; in ordinary parlance, they weren’t ill, and consequently the interval between an AIDS diagnosis and death was bound to increase dramatically. This re-definition also expanded enormously the number of “AIDS cases”: about 70% of them are not ill (Walensky et al., Journal of Infectious Diseases 194 [2006] 11-19, at p. 16).

Beginning in 1996, the earlier treatment of AIDS with high-dose reverse-transcriptase inhibitors like AZT (ZDV, Retrovir) was increasingly superseded by “highly active antiretroviral treatment” (HAART), which has been generally credited with prolonging lives by a considerable number of years. According to the Antiretroviral Therapy Cohort Collaboration (Lancet 372 [2008] 293-99), life expectancy for 20-year-old HIV-positives increased by 13 years between 1996 and 2005, to an additional 49 years of life; for 35-year-olds, life expectancy in 1996-99 was said to be another 25 years. According to Walensky et al. (op. cit.), survival after an AIDS diagnosis now averages more than 14 years. Yet another encomium to antiretroviral drugs claims that “by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population” (Bhaskaran et al., JAMA 300 [2008] 51-59).

There is general agreement, then, that antiretroviral treatment has yielded substantial extension of life to people already diagnosed with AIDS. The interval between an AIDS diagnosis and death should now be measured in decades rather than a year or two.

As with so many other contentions of orthodox HIV/AIDS belief, however, this expectation is contrary to actual fact. The greatest risk of death from “HIV disease” still comes at ages in the range 35-45, just as at the beginning of the AIDS era. There was no dramatic increase in the median age of death following the adoption of HAART in 1996; see Table 2:

Table 2
Age Distributions of AIDS Diagnoses and AIDS Deaths, 1982-2004
from annual “Health, United States” reports http://www.cdc.gov/nchs/products/pubs/pubd/hus/previous.htm#editions
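The medians in Table 2 are read off binned age distributions rather than individual ages. As a minimal sketch, assuming simple linear interpolation within the bin that contains the midpoint (the bins and counts below are made up for illustration, not taken from the CDC tables), a median age can be recovered from grouped data like this:

```python
# Interpolating a median age from a binned (grouped) age distribution.
# The bins and counts here are hypothetical, for illustration only.

def median_from_bins(bins):
    """bins: list of ((low, high), count); returns the interpolated median."""
    total = sum(count for _, count in bins)
    half = total / 2
    running = 0
    for (low, high), count in bins:
        if running + count >= half:
            # Linear interpolation within the bin containing the midpoint
            return low + (half - running) / count * (high - low)
        running += count
    raise ValueError("empty distribution")

# Hypothetical distribution of deaths by age group
deaths_by_age = [((25, 35), 3200), ((35, 45), 5100),
                 ((45, 55), 2400), ((55, 65), 800)]
print(round(median_from_bins(deaths_by_age), 1))  # -> 40.0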

The slow, steady increase in the median ages of AIDS diagnosis and of death shown in Table 2 is pictured in Figure 1, below. The slope of the curve for median age of death shows no pronounced upward turn after 1996 — even though the annual numbers of deaths decreased by more than half between 1994 and 1998. The somewhat steeper increase in median age of death from 1997 to 1999, and the parallel sharper increase in median age of AIDS diagnosis, are both artefacts stemming from re-calculation of the numbers under a revised International Classification of Diseases coding; see the asterisked footnote to Table 2. The other slight discontinuity in the curves, around 1993, reflects the CDC’s revised definition of AIDS to include asymptomatic HIV-positive people with low CD4 counts.

Figure 1

The uppermost curve, the interval between median age of diagnosis and median age of death, underscores that over the whole course of the AIDS era no episode brought a significant increase in median age of death, other than the drastic expansion of the definition in 1992-93. (Of course, the difference between the median ages of diagnosis and of death in any given year cannot be equated with the interval between diagnosis and death for any given individual; the significant point in Figure 1 is just that the median ages have changed at a gradual and almost constant rate from the very beginning of the AIDS era. HAART changed the death rate dramatically, but not the ages at which people died.)

This constitutes a major conundrum, a paradox: if HAART has extended life-spans by the claimed amounts, then why has the median age of death not increased dramatically? Why were so many AIDS patients still dying at around age 45 in 2004?

The resolution of this conundrum is that the median ages of death are based on actually recorded deaths, whereas the claimed benefits of HAART were calculated on the basis of models incorporating many assumptions about the course of “HIV disease” and relying on contemporaneous death-rates [Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t, 4 September 2008; CDC’s “model” assumptions (Science Studies 103a), 6 September 2008].

The numbers for total AIDS cases and for deaths, shown graphically in Figure 1, are listed in Table 3. There, column III shows the number of survivors in any given year, calculated as cumulative cases minus cumulative deaths in earlier years, plus the new cases of the given year. Column IV shows the percentage of those survivors who died in each year (the bookkeeping is sketched in code below the Table 3 caption).

Table 3
Total AIDS cases, deaths, and survivors “living with HIV/AIDS”, 1982-2004
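The bookkeeping behind columns III and IV can be sketched in a few lines. The numbers below are hypothetical placeholders rather than the values in Table 3, and the denominator used for column IV (the cohort alive during the year, before that year's deaths are subtracted) is one plausible reading of the calculation, not a documented convention:

```python
# Illustrative bookkeeping for Table 3 (hypothetical numbers, not the
# actual CDC figures). Column III: survivors "living with HIV/AIDS";
# column IV: percentage of them dying in the year.

new_cases = {1995: 70000, 1996: 68000, 1997: 60000}  # hypothetical
deaths    = {1995: 50000, 1996: 38000, 1997: 19000}  # hypothetical

survivors = 200000  # assumed survivors carried over into 1995

for year in sorted(new_cases):
    at_risk = survivors + new_cases[year]      # alive during the year
    pct_died = 100 * deaths[year] / at_risk    # column IV
    survivors = at_risk - deaths[year]         # column III
    print(f"{year}: survivors {survivors:,}; {pct_died:.1f}% died")
```

Run on these made-up inputs, the death percentage drops from 18.5% to 13.2% to 6.1%, mimicking the kind of halving seen in the real data when annual deaths fall while the pool of survivors keeps growing.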

From 1996 to 1997, the annual numbers of deaths halved, and of course the percentage of deaths among survivors also halved. Since 1997, only between 2.8 and 5.7% of living “HIV/AIDS” patients have been dying annually, which is in keeping with the claims of life-saving benefits made for HAART on the basis of death rates and computer models. But that conflicts with the age distribution of deaths, which has remained without major change during those same years.

If AIDS patients are now enjoying a virtually normal life-span, who are the people still dying at median age 45? If HAART is saving lives, why aren’t those lives longer?

The reason is that testing “HIV-positive” is actually irrelevant to the cause of death. It does not indicate infection by a cause of illness; it is an indicator analogous to fever. Many conditions may stimulate a positive “HIV” test: vaccination against flu or tetanus, for example; or tuberculosis; or drug abuse; or pregnancy; and many more (Christine Johnson, “Whose antibodies are they anyway? Factors known to cause false positive HIV antibody test results”, Continuum 4 [#3, Sept./Oct. 1996]).

The likelihood that any given individual exposed to one of those conditions will actually test positive seems to correlate with the seriousness of the challenge to health; and it varies in a predictable manner with age, sex, and race (The Origin, Persistence and Failings of HIV/AIDS Theory). In any group of people, those who test “HIV-positive” are more likely to be or to become ill, so they are also more likely to die than those who do not test positive: just as in any group of people, those who have a fever are more likely to be ill and to die than those who do not have a fever. Also, of course, a fever does not necessarily presage death, nor does “HIV-positive” necessarily presage death; and in any group of people, some will die who never tested positive or who never had a fever. There’s a strong correlation between illness, death, and fever, but it’s not an inevitable one and fever is not the causative agent; there’s a strong correlation between illness, death, and “HIV-positive”, but it’s not an inevitable one and “HIV” is not the causative agent.

So: among people “living with HIV/AIDS”, those who happen to die in any given year are simply the ones whose “HIV-positive” status was associated with some actually life-threatening illness; and their ages were distributed just as ages are distributed in any group of “HIV-positive” people, with a median age at around 40 and minor variations depending on race and sex. For example, in 2000 there were more than 350,000 people “living with HIV/AIDS” (Table 3) whose median age was somewhere around 39.9 (Table 2: 39.9 was the median age of new diagnoses in that year; survivors from the previous year, when the median age had been 39.4, would have had a median age — one year later — somewhere between 39.4 and 40.4; not as much as 40.4, because those dying in 1999 had a higher median age than those who didn’t die). Of those people in 2000 with median age about 39.9, 3.9% (14,457, Table 3) died; and the median age of those dying was 42.7. It’s only to be expected, of course, that — among any group of people at all — those who die have a somewhat higher average age than those who don’t die in that year.
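A quick back-calculation from the quoted figures shows that they hang together: 14,457 deaths at a rate of 3.9% implies a cohort of roughly 370,000, which is indeed “more than 350,000”. A one-liner to check:

```python
# Back-calculating the 2000 cohort size from the figures quoted above:
# 14,457 deaths said to be 3.9% of those "living with HIV/AIDS".
deaths_2000 = 14457
death_rate = 0.039
implied_cohort = deaths_2000 / death_rate
print(f"{implied_cohort:,.0f}")  # about 370,700
```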

The rate of death among “HIV/AIDS” patients declined markedly from 1987 to 1992 simply because “HIV/AIDS” was being defined ever more broadly, to include illnesses less life-threatening than the original AIDS diseases of Kaposi’s sarcoma and established opportunistic fungal infections. Another sharp drop in death rates came after 1992, when people who were not even ill came to be classed as “HIV/AIDS” patients and soon comprised about 70% of such patients. The last sudden drop in death rates, with the introduction of HAART in 1996, resulted not from any lifesaving benefit of HAART but from its replacement of the earlier, much more toxic, high-dose regimens of AZT. The supposed benefits of HAART are to decrease viral load and allow CD4 counts to rise; but those effects come slowly and cannot explain a sudden improvement in clinical condition sufficient to halve deaths from one year to the next; on the other hand, stopping the administration of a highly toxic substance can certainly bring the number of deaths down immediately. These data indicate, therefore, that something like half (at least) of “HIV/AIDS” deaths from 1987 through 1996 — some 150,000 — are attributable to the toxicity of AZT.

Through all those changes in death rates, drastic and slower alike, the median age of the “HIV-positive” ones among those “HIV/AIDS patients” who died, for any one of a large variety of reasons, remained about the same as it had always been. “HIV/AIDS” patients are not living longer, despite the change in death rate from an annual 60% or more to 3% or less.

As I said in a previous post [How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY, 15 September 2008], this paradox follows “from the manner in which HIV tests were designed and from the fact that AIDS was defined in terms of ‘HIV’”. The genesis of the tests has been described lucidly by Neville Hodgkinson (“HIV diagnosis: a ludicrous case of circular reasoning”, The Business, 16/17 May 2004, pp 1 and 4; similar in “The circular reasoning scandal of HIV testing”, thebusinessonline, 21 May 2006):

“It never proved possible to validate the [HIV] tests by culturing, purifying and analysing particles of the purported virus from patients who test positive, then demonstrating that these are not present in patients who test negative. This was despite heroic efforts to make the virus reveal itself in patients with Aids [sic, British usage] or at risk of Aids, in which their immune cells were stimulated for weeks in laboratory cultures using a variety of agents.
After the cells had been activated in this way, HIV pioneers found some 30 proteins in filtered material that gathered at a density characteristic of retroviruses. They attributed some of these to various parts of the virus. But they never demonstrated that these so-called ‘HIV antigens’ belonged to a new retrovirus.
So, out of the 30 proteins, how did they select the ones to be defined as being from HIV? The answer is shocking, and goes to the root of what is probably the biggest scandal in medical history. They selected those that were most reactive with antibodies in blood samples from Aids patients and those at risk of Aids.
This means that ‘HIV’ antigens are defined as such not on the basis of being shown to belong to HIV, but on the basis that they react with antibodies in Aids patients. Aids patients are then diagnosed as being infected with HIV on the basis that they have antibodies which react with those same antigens. The reasoning is circular.”

“HIV” tests were created to react most strongly to substances present in the sera of very ill gay men whose average age was in the late 30s (Michelle Cochrane, When AIDS began: San Francisco and the making of an epidemic, Routledge, 2004; cited at pp. 188-92 in The Origin, Persistence and Failings of HIV/AIDS Theory). That’s why people who are in some manner health-challenged are more likely than others to test “HIV-positive”, especially if they are aged around 40. Evidently the particular molecular species picked up by “HIV” tests are generated most prolifically around age 40, especially under the stimulation of various forms and degrees of physiological stress. That’s why the median ages for testing “HIV-positive” and for being diagnosed with AIDS (criterion: positive HIV test) and for dying from HIV/AIDS (criterion: positive HIV test) are all the same, in the range 35-45.

Perhaps some of what “HIV” tests detect are so-called “stress” or “heat-shock” proteins. That gay men so often test “HIV-positive” might have to do with molecular species associated with “leaky gut syndrome” or other consequences of intestinal dysbiosis [What really caused AIDS: slicing through the Gordian knot, 20 February 2008].

Those are speculations, of course. What is not speculative, however, is that HAART does not prolong life* even as it lowers death rates. It is also clear that testing “HIV-positive” is no more than an indicator of some form of physiological challenge, not necessarily infection by a pathogen and specifically not infection by a retrovirus that destroys the human immune system.

————————————————-
* FOOTNOTE:
Even though it is obvious that HAART does not prolong life on average, there are reliable testimonies that individuals have experienced clinical improvement on HAART, often dramatic and immediate. But, again, such immediate benefit cannot be the result of antiretroviral action; it more likely reflects an antibiotic or anti-inflammatory effect, as suggested by Dr. Juliane Sacher [Alternative treatments for AIDS, 25 February 2008].

Posted in antiretroviral drugs, HIV and race, HIV as stress, HIV does not cause AIDS, HIV tests, HIV varies with age, HIV/AIDS numbers

 