HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Posts Tagged ‘Juliane Sacher’

What do CD4 counts mean?

Posted by Henry Bauer on 2010/01/29

The level of CD4 cells in peripheral blood is a prime criterion for diagnosing AIDS (in the United States in particular) and for monitoring antiretroviral treatment. However, these applications of CD4 counts stem from the initial and unhappy coincidence that when “AIDS” appeared around 1980, the counting of immune-system cells was in its infancy. By now it is known that CD4 levels are extremely variable in healthy individuals, and that a variety of physiological conditions other than “HIV” may profoundly influence CD4 counts. There seems to be no fundamental evidential warrant for the manner in which HIV/AIDS diagnosis and treatment rely on CD4 counts. Juliane Sacher among others has pointed out that the levels of CD4 cells in peripheral blood are not a meaningful measure of immune-system status, since these cells move around the body according to where they seem to be needed [Alternative treatments for AIDS, 25 February 2008].

An obvious question: what is the range of CD4 counts in healthy individuals and in a variety of illnesses? (I’m grateful to Tony Lance for alerting me to some of the intriguing sources mentioned in the following).

One of the striking aspects of CD4 counts is how enormously they vary among individuals, including healthy individuals. Here, for example, are data from HIV-negative Senegalese:

C. Mair, S. E. Hawes, H. D. Agne, P. S. Sow, I. N’doye, L. E. Manhart, P. L. Fu, G. S. Gottlieb and N. B. Kiviat. Factors associated with CD4 lymphocyte counts in HIV-negative Senegalese individuals. Clinical and Experimental Immunology 151 (2007) 432-440

In any normal distribution, the standard deviation (s.d. or σ) describes the degree of scatter around the average (or mean) value. Only about 2/3 of a sample lie within ±1σ of the mean; in other words, about 1/6 lie further from the mean on each side, the higher and the lower. In the data of Mair et al., among the men with mean CD4 count of 712 and σ = 333, about 1 in every 6 men has a CD4 count below 379 or above 1045; and about 2% on each side have counts more than 2σ from the mean, that is, >1378 or <46. CD4 = 200 is about 1.5σ below the mean, which corresponds to about 6-7% (~1/15) of the sample. In other words, about 1 in every 15 healthy HIV-negative Senegalese men has a CD4 count below the 200 that, in HIV-positive people, is taken to be a sign of AIDS.
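These tail fractions are easy to verify. A minimal check, assuming normality and using Python's standard-library `NormalDist` with the mean and s.d. that Mair et al. report for the men (712 and 333):

```python
from statistics import NormalDist

# Normal model with the mean and s.d. reported by Mair et al.
# for HIV-negative Senegalese men
cd4 = NormalDist(mu=712, sigma=333)

below_1sigma = cd4.cdf(712 - 333)  # fraction below 379, i.e. 1 sigma under the mean
below_200 = cd4.cdf(200)           # fraction below the US AIDS criterion of 200

print(f"below 379: {below_1sigma:.1%}")  # ~15.9%, about 1 in 6
print(f"below 200: {below_200:.1%}")     # ~6.2%, roughly 1 in 15-16
```

On this model, the <200 cut-off by itself would flag roughly one healthy man in every fifteen or sixteen.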

Of course, CD4 counts may not follow a normal distribution, especially at upper and lower levels; but since this article reports means and standard deviations without specifying a different distribution, the authors themselves are presuming it is normal. Moreover, a similarly wide range of CD4 counts and an approximation to a normal distribution are shown in other data sets as well. For example, healthy North Indians were reported to have a mean CD4 count of 720 with σ = 273 and an actually observed range of 304-1864 among 200 individuals; 10% were below 400, consistent with a normal distribution, which would have about 16% below 450 (Ritu Amatya, Madhu Vajpayee, Shweta Kaushik, Sunita Kanswal, R.M. Pandey, and Pradeep Seth. “Lymphocyte immunophenotype reference ranges in healthy Indian adults: implications for management of HIV/AIDS in India”. Clinical Immunology 112 [2004] 290-5). Actual distributions for several African populations, however, show a skewing toward higher CD4 counts, which indeed seems highly plausible a priori — one might expect to see a definite lower bound to CD4 counts in healthy individuals (Williams et al., “HIV infection, antiretroviral therapy, and CD4+ cell count distributions in African populations”, J. Infect. Dis. 194 [2006] 1450-8).
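The same kind of check can be run against the Amatya et al. figures (mean 720, σ = 273). A normal model puts about 16% below 450 and about 12% below 400, against the 10% actually observed — reasonably close, though slightly overstated at the low end, consistent with the skewing noted above:

```python
from statistics import NormalDist

# Normal model from Amatya et al.'s healthy North Indian sample
cd4 = NormalDist(mu=720, sigma=273)

print(f"predicted below 450: {cd4.cdf(450):.1%}")  # ~16.1%
print(f"predicted below 400: {cd4.cdf(400):.1%}")  # ~12.1%; 10% was observed
```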

Worth particular note is the comment in Amatya et al. that “These low counts could be due to physiological lymphopenia potentially caused by protein energy malnutrition, aging, antigenic polymorphism of the CD4 molecule, prolonged sun exposure, circadian rhythm, and circannual variation [9,10]”. The use of contraceptive pills by women has also been reported to influence CD4 counts (M. K. Maini, R. J. Gilson, N. Chavda, S. Gill, A. Fakoya, E. J. Ross, A. N. Phillips and I. V. Weller. “Reference ranges and sources of variability of CD4 counts in HIV-seronegative women and men”. Genitourin Med 72 [1996] 27-31). Most of those circumstances do not represent illness. So CD4 counts can be low for a variety of fairly normal, not seriously health-threatening conditions. It follows that reliance on CD4 counts as diagnostic of “HIV disease” increases the danger that some unknown number of “HIV-positive” individuals are being told on the basis of laboratory tests — sometimes SOLELY on the basis of laboratory tests — that they are actually sick even though they feel and actually are healthy; and these people are then at risk of being consigned to toxic “treatment” for this imaginary illness. The risk is greatest if the blood tested for CD4 counts happens to have been drawn in the morning, or in the wrong season of the year, because CD4 counts vary appreciably with both those variables: T. G. Paglieroni et al., “Circannual variation in lymphocyte subsets, revisited”, Transfusion 34 [1994] 512-6; F. Hulstaert et al., “Age-related changes in human blood lymphocyte subpopulations”, Clin. Immunol. Immunopathol. 70 [1994] 152-8. Maini et al. (above) report a 60% variation during the day with lowest counts at 11 am. Yet another report describes a similarly large diurnal variation, from 820 at 8 am to 1320 at 10 pm (Bofill et al., “Laboratory control values for CD4 and CD8 T lymphocytes. Implications for HIV-1 diagnosis”, Clin. Exp. Immunol. 88 [1992] 243-52).

Just as with the tendency to test “HIV-positive”, CD4 counts are influenced by demographic variables: “race, ethnic origin, age group, and gender” (Amatya et al.). Bofill et al. also report a steadily decreasing CD4 count with increasing age. The contrary has been reported, however, by Jiang et al. (“Normal values for CD4 and CD8 lymphocyte subsets in healthy Chinese adults from Shanghai”, Clinical and Diagnostic Laboratory Immunology, 11 [2004] 811-3). The discrepancy may be owing to differing attitudes toward statistical significance: the raw numbers in Jiang et al. do show an increase with age for men and a decrease with age for women but, as with the data of Bofill et al. and all others, the standard deviations are so large, on the order of one third of the mean values, that differences and trends would have to be very considerable if they are to be statistically meaningful.

Again, Jiang et al. report no difference between Chinese men and women, whereas several sources cite women as having higher CD4 counts than men: in Britain (Maini et al.) and in more than a dozen other countries in Africa, Asia, and Europe (Mair et al.). Caucasians have higher CD4 counts than Asians or Africans, according to Amatya et al. and Jiang et al., but not according to Maini et al.

All these variations under the influence of several factors would make the diagnostic application of CD4 counts problematic even if “HIV” or “AIDS” had been shown to be the salient influence on CD4 levels. However, just as with the tendency to test “HIV-positive”, CD4 counts may be “low” in a wide range of conditions; perhaps most relevant to HIV/AIDS, in tuberculosis and general trauma, as well as with primary immunodeficiency, the early acute phases of viral infections such as influenza or dengue fever (Bofill et al.), or recent respiratory infections (Maini et al.).

Not only are CD4 counts dubious for diagnosis or prognosis; just as with the tendency to test “HIV-positive”, CD4 counts generate a number of conundrums if interpreted according to HIV/AIDS theory: the counts are often HIGHER rather than lower in conditions generally regarded as associated with poor health. For example, smokers have higher CD4 counts than non-smokers (Maini et al., Mair et al.) and prostitutes have higher counts than other women (Mair et al.). Another “striking paradox” is in “co-infection” with “HIV” and herpes:
“We observed no effect of HSV-2 status on viral load. However, we did observe that treatment naïve, recently HIV-1 infected adults co-infected with HSV-2+ at the time of HIV-1 acquisition had higher CD4+ T cell counts over time. If verified in other cohorts, this result poses a striking paradox, and its public health implications are not immediately clear” (emphases added; Barbour et al., “HIV-1/HSV-2 co-infected adults in early HIV-1 infection have elevated CD4+ T-Cell counts”, PLoS ONE 2(10) [2007] e1080).


There seems to be no clear warrant for diagnosing AIDS by means of CD4 counts, which may be why other countries have not followed the US example of taking <200 as a criterion. Similarly, there seems to be no clear warrant for assessing the progress of antiretroviral treatment by means of CD4 counts. Two practical illustrations of that are the fact that CD4 counts do not correlate with “viral load”, nor are changes in CD4 counts predicted by it (Rodriguez et al., JAMA 296 [2006] 1498-1506), and that the NIH Treatment Guidelines distinguish immunologic failure (no increase in CD4 counts) from virologic failure (no decrease in viral load) and from clinical progression (does the patient’s health improve?).

A somewhat related illustration of the failure of HIV/AIDS theory is that “AIDS” patients with Kaposi’s sarcoma may have quite high CD4 counts: see for example Maurer T, Ponte M, Leslie K. “HIV-associated Kaposi’s sarcoma with a high CD4 count and a low viral load”. N Engl J Med 357 (2007) 1352-3; Krown SE, Lee JY, Dittmer DP, AIDS Malignancy Consortium. “More on HIV-associated Kaposi’s sarcoma”. N Engl J Med 358 (2008) 535-6; Power DG, Mulholland PJ, O’Byrne KJ. “AIDS-related Kaposi’s sarcoma in a patient with a normal CD4 count”. Clinical Oncology 20 (2008) 97; Stebbing J, Powles T, Bower M. “AIDS-associated Kaposi’s sarcoma associated with a low viral load and a high CD4 cell count”. AIDS 22 (2008) 551-2; Mani D, Neil N, Israel R, Aboulafia DM. “A retrospective analysis of AIDS-associated Kaposi’s sarcoma in patients with undetectable HIV viral loads and CD4 counts greater than 300 cells/mm3”. J Int Assoc Physicians AIDS Care (Chic Ill) 8 (2009) 279-85.

But then it has also long been known that “AIDS” Kaposi’s sarcoma is not caused by HIV; it’s now attributed to KSHV or HHV-8, which — by the sort of extraordinary coincidence or oddity that is so common in HIV/AIDS matters — just happened to appear at the same time among the same risk groups as “AIDS” and “HIV” did; and then just as mysteriously went a separate path, so that KS declined from about 40% of all “AIDS” cases in 1982 to well under 10% from 1987 onwards (Table 30, p. 128 in The Origin, Persistence and Failings of HIV/AIDS Theory).

More sales in the offing for snake oil and Brooklyn Bridges.

Posted in antiretroviral drugs, HIV risk groups, HIV skepticism, M/F ratios | 19 Comments »

“HIV” and illness: Which comes first?

Posted by Henry Bauer on 2009/07/23

According to HIV/AIDS theory, “HIV” — whatever it is that is detected by “HIV” tests — precedes damage to the immune system and consequent illness.

Rethinkers and Skeptics, however, claim the opposite:
According to the Perth Group, “HIV-positive” is merely a symptom of oxidative stress.
According to Duesberg, the presence of “HIV” indicates a condition by which “HIV” is generated as a harmless “passenger” side-effect.
A comparison of “HIV-positive” frequency across population sub-groups indicates that the general state of health or fitness correlates with the tendency to test “HIV-positive”
(The Origin, Persistence and Failings of HIV/AIDS Theory, Figure 22, p. 83)

Specific observations that support the Rethinker view include:
Flu vaccination can lead to a positive “HIV” test
Anti-tetanus likewise
and more such instances in Christine Johnson’s classic enumeration.

A recent article not only adds further confirmation to the Rethinker case, it lends considerable specific support to Tony Lance’s hypothesis that intestinal dysbiosis can lead to testing “HIV-positive”, to dysfunction of the immune system, and to the fungal infections that were the first opportunistic infections described as “AIDS”:
Melinda Wenner, “A cultured response to HIV”, Nature Medicine, 15 (2009) 594-7.

A summary of that article is on-line at TheBody. Have a look at Liang’s comment: “I was very prone to diarrhea and gum infection before being hiv positive.”

In the Nature Medicine article, there’s something similar:
“’It’s almost like the gut is a magnet for the virus early on. [It] becomes compromised in weeks,’ says Bill Critchfield, a postdoctoral fellow at the University of California–Davis.”
A diagnosis of “HIV-positive” will typically follow some signs of illness that led to a doctor’s visit. However, there will rarely or never be any prior knowledge of the condition of the gut. According to the orthodoxy, “HIV” does its work very slowly, not “within weeks”. Ergo: this too is eminently consistent with the hypothesis that damage to the intestinal flora precedes testing “HIV-positive”.
The mainstream has increasingly acknowledged the relation between gut and “HIV”, without yet realizing that this supports the dysbiosis hypothesis and not the HIV/AIDS one.
It’s also worth noting that CD4 counts in the blood continue to be cited by mainstream researchers even as they begin to glimpse that it’s the gut where the action is. As Juliane Sacher (among others) has pointed out, immune-system cells move around the body according to where they’re needed, and the level in the blood cannot be taken as an indication of depletion or increase overall.

Note, too, that when Western sources advocate a natural — dare I say naturopathic? — treatment for “HIV”, in this case probiotic yogurt, it isn’t immediately greeted with cries of “pseudo-science”. That’s reserved for non-Westerners who make similar suggestions and for individuals like Matthias Rath, MD, one-time research colleague of Linus Pauling.

Posted in Alternative AIDS treatments, HIV as stress, HIV does not cause AIDS, HIV risk groups, HIV skepticism, HIV tests | 10 Comments »

CD4 counts don’t count — OFFICIAL!

Posted by Henry Bauer on 2009/02/14

For a very long time, the central belief in HIV/AIDS theory has been that “HIV” kills CD4 cells (albeit by a mechanism that still remains to be identified), thereby wrecking the immune system and allowing opportunistic infections to take over. Measurements of peripheral (in the blood) CD4 cells have been a mainstay in research and treatment. Voices raised to point out the error of this, those of Heinrich Kremer or Juliane Sacher among others, have been studiously ignored. But now it’s become quite official:

“’In both studies, the volunteers who received IL-2 and antiretrovirals experienced notable, sustained increases in CD4+ T cell counts, as anticipated,’ notes NIAID Director Anthony S. Fauci, M.D. ‘Unfortunately, these increases did not translate into reduced risks of HIV-associated opportunistic diseases or death when compared with the risks in volunteers who were taking only antiretrovirals. Although further analyses may help us better understand these findings, the two studies clearly demonstrated that the use of IL-2 did not improve health outcomes for HIV-infected people.’”

That paragraph is from an official release by the National Institute of Allergy and Infectious Diseases (NIAID), “IL-2 immunotherapy fails to benefit HIV-infected individuals already taking antiretrovirals”

Increased CD4 counts do not translate into better health outcomes
for people on HAART —
even though the aim of HAART is supposed to be lower viral load
that supposedly allows rebounding of CD4 counts

That could already have been inferred, of course, from the publication by Rodriguez et al., “Predictive value of plasma HIV RNA level on rate of CD4 T-cell decline in untreated HIV infection”, JAMA, 296 [2006] 1498-1506: the predictive value is NIL; viral load doesn’t predict CD4 decline in untreated patients; so why expect that it would do so in HAART-treated patients? But these IL-2 trials had been running since 1999 and 2000 respectively, so why cut them short just because research has shown them to be superfluous or misguided? Or just because the experts who draw up NIH’s Treatment Guidelines have also been sure for some time that CD4, viral load, and patient health do not correlate with one another but are independent of one another; that’s why the Treatment Guidelines have to distinguish among “virologic failure” (viral load doesn’t decrease under treatment), “immunologic failure” (CD4 counts don’t increase), and “clinical failure” (operation succeeds, viral load down and CD4 up, patient dies).

Mere facts, though, have never been particularly meaningful in HIV/AIDS research. Anything that clearly contradicts HIV/AIDS theory is not accepted as falsification, instead it’s taken as a mystery to be solved. More from the recent NIAID release:

“These are the findings of two large international clinical trials presented today at the Conference on Retroviruses and Opportunistic Infections (CROI) in Montreal. . . .
IL-2 is produced naturally in the body and plays an important role in regulating CD4+ T cell production and survival. As their CD4+ T cell levels drop, people infected with HIV become more vulnerable to AIDS-related opportunistic diseases and death. Earlier research established that giving synthetic IL-2 plus antiretroviral therapy to people with HIV infection boosts their CD4+ T cell counts more than does antiretroviral therapy alone, but it was unknown whether this boost translated into better health [emphasis added]”.

It’s asserted (highlighted sentence above) as though known with certainty that lower CD4 means worse prognosis; yet

“ESPRIT and SILCAAT were designed to test whether giving IL-2 to HIV-infected individuals already on antiretroviral therapy would keep them healthier longer than HIV-infected individuals taking only antiretrovirals.”

If the highlighted assertion above had been right, then these tests were not needed. If they were needed, then the assertion should not have been made.

These clinical trials themselves appear to have been sound; and they looked at CD4 counts in both ranges of interest — there have been long-standing questions about the optimum CD4 counts at which antiretroviral treatment might best begin:

“Together, the ESPRIT and SILCAAT studies involved more than 5,800 HIV-infected volunteers in 25 countries. Participants were assigned at random to receive either combination antiretroviral therapy alone or combination antiretrovirals plus injections of Proleukin (Novartis Pharmaceuticals, Basel, Switzerland), a synthetic form of IL-2, over several five-day cycles. To evaluate the effects of IL-2 treatment at different stages of HIV infection, the ESPRIT study enrolled people with early-stage infection (CD4+ T cell counts at or above 300 cells per cubic millimeter, or mm3), while the SILCAAT study enrolled volunteers with later-stage HIV infection (CD4+ T cell counts between 50 and 299 cells/ mm3).
It is unclear why increased CD4+ T cell counts did not translate into better health outcomes.”

What’s unclear? Increased CD4 doesn’t produce better prognoses. HIV/AIDS theory is wrong. But of course that’s unthinkable:

“James D. Neaton, . . .  principal investigator of the global clinical trials network that conducted ESPRIT, offers two possible explanations. ‘It could be that the types of CD4+ T cells induced by IL-2 play no role in protecting the HIV-infected patient, and therefore the administration of IL-2 has no benefit,’ says Dr. Neaton. ‘A second possibility is that the CD4+ T cells are at least somewhat functional or that IL-2 has some modest benefit, but that the side effects of IL-2 may neutralize any possible benefit.’
‘. . .although a person’s number of CD4+ T cells is a key measure of success in the treatment of HIV with antiretroviral drugs, we can’t rely on CD4+ T cell counts to predict whether immune-based therapies such as IL-2 will improve the health of HIV-infected individuals,’ concludes Dr. Levy, the principal investigator of SILCAAT.”

If CD4 counts don’t predict what “immune-based” therapies can do . . .
BUT these CD4s are the immune-system cells that have been accepted for a quarter century as the critical ones in HIV/AIDS, the ones that are supposedly killed off by “HIV” — so isn’t EVERY therapy that seeks to increase CD4 an “immune-based” therapy?

If the problem is with the particular TYPE of CD4 cells, these results would be just as damaging to HIV/AIDS theory and practice, since it would mean that faulty or meaningless measures have been used for more than two decades to make life-or-death decisions as to antiretroviral treatment.

Still, the important thing to note is that these trials, though they failed, were actually successful:

“’The purpose of clinical research is to clearly state and accurately test hypotheses with an ultimate goal of improving patient care,’ notes H. Clifford Lane, M.D., director of clinical research at NIAID and a member of the executive committee of ESPRIT. ‘These two clinical trials successfully reached a definitive answer about the utility of IL-2 therapy for treating HIV infection. NIAID thanks the thousands of dedicated volunteers and investigators who made these studies possible. The results will have significant implications for the future development of immune-based therapies for HIV and studies of HIV pathogenesis.’”

But perhaps this was just official spin for public consumption, for at least one other similar trial was abandoned:

“NIAID has discontinued the use of IL-2 in a separate, 20-country clinical trial known as STALWART (which stands for ‘Study of Aldesleukin with and Without Antiretroviral Therapy’).”

I don’t know about SILCAAT, but I do like those acronyms ESPRIT and STALWART. Perhaps NIAID wordsmiths get their inspiration from the Pentagon.

Posted in antiretroviral drugs, clinical trials, experts, HIV does not cause AIDS | 29 Comments »

HAART saves lives — but doesn’t prolong them!?

Posted by Henry Bauer on 2008/09/17

Death rates are down, yet AIDS patients are not living longer! Why not?

(This is a long post, and includes at least one Table that is too large to be viewed conveniently in the same window as the text. If you prefer to read it as a pdf, here it is: haartdoesnt-prolong-lives)

In the early 1980s, a diagnosis of “AIDS” typically had been followed by death within a year or two. At that time, diagnosis was on the basis of Kaposi’s sarcoma or of manifest opportunistic fungal infections — Pneumocystis carinii pneumonia or candidiasis.

Following the adoption of “HIV-positive” as a necessary criterion for an AIDS diagnosis, an increasing range of non-opportunistic infections and other illnesses came to be included as “AIDS-defining” (for instance, tuberculosis, wasting, cervical cancer, etc.) — see Table 1; the most consequential changes were in 1987 and in 1993. The only basis for them was that people with some illnesses were quite often “HIV-positive”, in other words, there were correlations with “HIV-positive” status, not any proof that “HIV encephalopathy”, “HIV wasting disease”, or other additions to the list of “AIDS-defining” conditions were caused by “HIV”. Indeed, there could not be such proof since mechanisms by which “HIV” could cause illness have not been demonstrated, and they remain to this day a matter for speculation — even over the central issue of how HIV (supposedly) kills immune-system cells. An absurd consequence of these re-definitions, often cited by HIV/AIDS skeptics, is that a person suffering indisputably from tuberculosis (say) might or might not be classed as an HIV/AIDS patient, depending solely on “HIV” tests.

Table 1

(from Nakashima & Fleming, JAIDS 32 [2003] 68-85; numbers in parentheses after the dates refer to sources cited in that article)

As “AIDS” was being diagnosed increasingly among people less desperately ill than the original AIDS victims, survival time after diagnosis became longer.

The 1993 change extended the umbrella of “AIDS patient” to cover people with no manifest symptoms of ill health; in ordinary parlance, they weren’t ill, and consequently the interval between an AIDS diagnosis and death was bound to increase dramatically. This re-definition also expanded enormously the number of “AIDS cases”: about 70% of them are not ill (Walensky et al., Journal of Infectious Diseases 194 [2006] 11-19, at p. 16).

In 1996, earlier treatment for AIDS with high-dose reverse transcriptase inhibitors like AZT (ZDV, Retrovir) was increasingly superseded by “highly active antiretroviral treatment” (HAART), which has been generally credited with the prolonging of lives by a considerable number of years. According to the Antiretroviral Therapy Collaboration (Lancet 372 [2008] 293-99), life expectancy for 20-year-old HIV-positives had increased by 13 years between 1996 and 2005 to an additional 49 years; for 35-year-olds, the life expectancy in 1996-99 was said to be another 25 years. According to Walensky et al. (op. cit.), survival after an AIDS diagnosis now averages more than 14 years. Yet another encomium to antiretroviral drugs claims that “by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population” (Bhaskaran et al., JAMA 300 [2008] 51-59).

There is general agreement, then, that antiretroviral treatment has yielded substantial extension of life to people already diagnosed with AIDS. The interval between an AIDS diagnosis and death should now be measured in decades rather than a year or two.

As with so many other contentions of orthodox HIV/AIDS belief, however, this expectation is contrary to actual fact. The greatest risk of death from “HIV disease” comes at ages in the range of 35-45, just as at the beginning of the AIDS era. There was no dramatic increase in median age of death after 1996 following the adoption of HAART, see Table 2:

Table 2
Age Distributions of AIDS Diagnoses and AIDS Deaths, 1982-2004
from annual “Health, United States” reports

The slow, steady increase in median ages of AIDS diagnosis and of death shown in Table 2 is pictured in Figure 1, below. The slope of the curve for median age of death shows no pronounced turn upwards following 1996 — even though the annual numbers of deaths decreased by more than half between 1994 and 1998. The somewhat steeper increase in median age of death from 1997 to 1999 and the parallel sharper increase in median age of AIDS diagnosis are both artefacts stemming from re-calculation of numbers under a revised International Diagnostic Code, see asterisked footnote to Table 2. The other slight discontinuity in the curve, around 1993, reflects the CDC’s revised definition of AIDS to include asymptomatic HIV-positive people with low CD4 counts.

Figure 1

The uppermost curve, the interval between median age of diagnosis and median age of death, underscores that over the whole course of the AIDS era, no episode brought a significant increase in median age of death, other than the drastic expansion of definition in 1992-93. (Of course, the difference between the median ages for diagnosis and death in any given year cannot be equated with the interval between diagnosis and death for any given individual; the significant point in Figure 1 is just that median ages have changed at a gradual and almost constant rate from the very beginning of the AIDS era. HAART changed the death rate dramatically, but not the ages at which people died.)

This constitutes a major conundrum, a paradox: If HAART has extended life-spans by the claimed amounts, then why has not the median age of death increased dramatically? Why were so many AIDS patients still dying around age 45 in 2004?

The resolution of this conundrum is that the median ages of death are based on actually recorded deaths, whereas the claimed benefits of HAART were calculated on the basis of models incorporating many assumptions about the course of “HIV disease” and relying on contemporaneous death-rates [Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t, 4 September 2008; CDC’s “model” assumptions (Science Studies 103a), 6 September 2008].

The numbers for total AIDS cases and for deaths, shown graphically in Figure 1, are listed in Table 3. There, column III shows the numbers of survivors in any given year, calculated from the difference between cases and deaths in earlier years plus new cases in the given year. Column IV has the percentage of survivors who died each year.
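The bookkeeping behind columns III and IV can be sketched in a few lines of Python (the numbers below are hypothetical, purely to illustrate the calculation; they are not the actual Table 3 entries):

```python
def living_and_death_pct(new_cases, deaths):
    """Column III: people "living with HIV/AIDS" in a given year =
    cumulative cases through that year minus cumulative deaths in
    earlier years.  Column IV: percentage of those survivors who
    died that year.  Both arguments are per-year lists of equal length."""
    living, pct = [], []
    cum_cases, prior_deaths = 0, 0
    for cases_y, deaths_y in zip(new_cases, deaths):
        cum_cases += cases_y
        alive = cum_cases - prior_deaths   # survivors in this year
        living.append(alive)
        pct.append(100.0 * deaths_y / alive)
        prior_deaths += deaths_y
    return living, pct

# Hypothetical three-year illustration:
alive, pct = living_and_death_pct([100, 80, 60], [10, 8, 6])
# year 1: 100 alive, 10.0% die; year 2: 100 - 10 + 80 = 170 alive, ~4.7% die
```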

Table 3
Total AIDS cases, deaths, and
survivors “living with HIV/AIDS”,

From 1996 to 1997, the annual numbers of deaths halved, and of course the percentage of deaths among survivors also halved. Since 1997, only between 2.8 and 5.7% of living “HIV/AIDS” patients have been dying annually, which is in keeping with the claims of life-saving benefits made for HAART on the basis of death rates and computer models. But that conflicts with the age distribution of deaths, which has remained without major change during those same years.

If AIDS patients are now enjoying a virtually normal life-span, who are the people still dying at median age 45? If HAART is saving lives, why aren’t those lives longer?

The reason is that testing “HIV-positive” is actually irrelevant to the cause of death. It does not indicate infection by a cause of illness, it is an indicator analogous to fever. Many conditions may stimulate a positive “HIV” test: vaccination against flu or tetanus, for example; or tuberculosis; or drug abuse; or pregnancy; and many more (Christine Johnson, “Whose antibodies are they anyway? Factors known to cause false positive HIV antibody test results”, Continuum 4 [#3, Sept./Oct. 1996]).

The likelihood that any given individual exposed to one of those conditions will actually test positive seems to correlate with the seriousness of the challenge to health; and it varies in a predictable manner with age, sex, and race (The Origin, Persistence and Failings of HIV/AIDS Theory). In any group of people, those who test “HIV-positive” are more likely to be or to become ill, so they are also more likely to die than those who do not test positive: just as in any group of people, those who have a fever are more likely to be ill and to die than those who do not have a fever. Also, of course, a fever does not necessarily presage death, nor does “HIV-positive” necessarily presage death; and in any group of people, some will die who never tested positive or who never had a fever. There’s a strong correlation between illness, death, and fever, but it’s not an inevitable one and fever is not the causative agent; there’s a strong correlation between illness, death, and “HIV-positive”, but it’s not an inevitable one and “HIV” is not the causative agent.

So: Among people “living with HIV/AIDS”, those who happen to die in any given year are simply ones whose “HIV-positive” status was associated with some actually life-threatening illness; and their ages were distributed just as ages are distributed in any group of “HIV-positive” people, with a median age at around 40, with minor variations depending on race and sex. For example, in 2000, there were more than 350,000 people “living with HIV/AIDS” (Table 3) whose median age was somewhere around 39.9 (Table 2: 39.9 was the median age of new diagnoses in that year. Survivors from the previous year, when the median age had been 39.4, would have had a median age — one year later — somewhere between 39.4 and 40.4; not as much as 40.4, because those dying in 1999 had a higher median age than those who didn’t die.) Of the 350,000 in 2000 with median age 39.9, 3.9% (14,457, Table 3) died; and the median age of those dying was 42.7. It’s only to be expected, of course, that — among any group of people at all — those who die have a somewhat higher average age than those who don’t die in that year.

The rate of death among “HIV/AIDS” patients declined markedly from 1987 to 1992 simply because “HIV/AIDS” was being increasingly defined to include illnesses less life-threatening than the original AIDS diseases of Kaposi’s sarcoma and established opportunistic fungal infections. Another sharp drop in death rates came after 1992 when people who were not even ill came to be classed as “HIV/AIDS” patients and comprised about 70% of such patients. The last sudden drop in death rates, with the introduction of HAART in 1996, resulted not from any lifesaving benefit of HAART but because the latter superseded the earlier, much more toxic, high-dose regimens of AZT. The supposed benefits of HAART are to decrease viral load and allow CD4 counts to rise; but these effects come slowly and cannot explain a sudden improvement in clinical condition sufficient to bring a halving of deaths from one year to the next; on the other hand, stopping the administration of a highly toxic substance can certainly bring numbers of deaths down immediately. These data indicate, therefore, that something like half (at least) of “HIV/AIDS” deaths from 1987 through 1996 — some 150,000 — are attributable to the toxicity of AZT.

Through all those drastic as well as slower changes in death rates, among those “HIV/AIDS patients” who died for any one of a large variety of reasons, the median age of the “HIV-positive” ones remained about the same as it had always been. “HIV/AIDS” patients are not living longer despite the change in death rate from an annual 60% or more to 3% or less.

As I said in a previous post [How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY, 15 September 2008], this paradox follows “from the manner in which HIV tests were designed and from the fact that AIDS was defined in terms of ‘HIV’”. The genesis of the tests has been described lucidly by Neville Hodgkinson (“HIV diagnosis: a ludicrous case of circular reasoning”, The Business, 16/17 May 2004, pp 1 and 4; similar in “The circular reasoning scandal of HIV testing”, thebusinessonline, 21 May 2006):

“It never proved possible to validate the [HIV] tests by culturing, purifying and analysing particles of the purported virus from patients who test positive, then demonstrating that these are not present in patients who test negative. This was despite heroic efforts to make the virus reveal itself in patients with Aids [sic, British usage] or at risk of Aids, in which their immune cells were stimulated for weeks in laboratory cultures using a variety of agents.
After the cells had been activated in this way, HIV pioneers found some 30 proteins in filtered material that gathered at a density characteristic of retroviruses. They attributed some of these to various parts of the virus. But they never demonstrated that these so-called ‘HIV antigens’ belonged to a new retrovirus.
So, out of the 30 proteins, how did they select the ones to be defined as being from HIV? The answer is shocking, and goes to the root of what is probably the biggest scandal in medical history. They selected those that were most reactive with antibodies in blood samples from Aids patients and those at risk of Aids.
This means that ‘HIV’ antigens are defined as such not on the basis of being shown to belong to HIV, but on the basis that they react with antibodies in Aids patients. Aids patients are then diagnosed as being infected with HIV on the basis that they have antibodies which react with those same antigens. The reasoning is circular.”

“HIV” tests were created to react most strongly to substances present in the sera of very ill gay men whose average age was in the late 30s (Michelle Cochrane, When AIDS began: San Francisco and the making of an epidemic, Routledge, 2004; cited at pp. 188-92 in The Origin, Persistence and Failings of HIV/AIDS Theory). That’s why people who are in some manner health-challenged are more likely than others to test “HIV-positive”, especially if they are aged around 40. Evidently the particular molecular species picked up by “HIV” tests are generated most prolifically around age 40, especially under the stimulation of various forms and degrees of physiological stress. That’s why the median ages for testing “HIV-positive” and for being diagnosed with AIDS (criterion: positive HIV test) and for dying from HIV/AIDS (criterion: positive HIV test) are all the same, in the range 35-45.

Perhaps some of what “HIV” tests detect are so-called “stress” or “heat-shock” proteins. That gay men so often test “HIV-positive” might have to do with molecular species associated with “leaky gut syndrome” or other consequences of intestinal dysbiosis [What really caused AIDS: slicing through the Gordian knot, 20 February 2008].

Those are speculations, of course. What is not speculative, however, is that HAART does not prolong life* even as it lowers death rates. It is also clear that testing “HIV-positive” is no more than an indicator of some form of physiological challenge, not necessarily infection by a pathogen and specifically not infection by a retrovirus that destroys the human immune system.

Even as it is obvious that HAART does not prolong life on the average, there are reliable testimonies that individuals have experienced clinical improvement on HAART, often dramatic and immediate. But, again, such immediate benefit cannot be the result of antiretroviral action, and likely reflects an antibiotic or anti-inflammatory effect, as suggested by Dr. Juliane Sacher [Alternative treatments for AIDS, 25 February 2008].

Posted in antiretroviral drugs, HIV and race, HIV as stress, HIV does not cause AIDS, HIV tests, HIV varies with age, HIV/AIDS numbers | 5 Comments »

More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death

Posted by Henry Bauer on 2008/07/12

HAART had supposedly saved at least 3 million years of life by 2003, thereby supposedly justifying the expenditure of $21 billion in 2006 from federal US government funds alone—how much more was disbursed or used by charities and other NGOs is not known. On examination, that claimed 3 million turned out to be 1.2 million; and since these are not lives but life-years, they represent the lives of perhaps 6% of AIDS victims [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008; HIV/AIDS SCAM: Have antiretroviral drugs saved 3 million life-years?, 6 July 2008]. Not so impressive after a quarter century of research costing >$100 billion.

Another more recently trumpeted claim of benefits from antiretroviral therapy is that the “excess mortality” ascribed to “HIV” has decreased substantially in the era of HAART (Bhaskaran et al. for the CASCADE collaboration, “Changes in the risk of death after HIV seroconversion compared with mortality in the general population”, JAMA 300 [2008] 51-59). This article resembles the older one in its reliance on computer modeling to produce desired results; in addition, it displays astonishing ignorance of such HIV/AIDS basics as the latent period of 10 years between “infection” and illness; and it deserves a Proxmire Golden Fleece Award for discovering what was already known.

The methodology is described in laudable detail, which reminded me of the V-P who always got his requested budget because he submitted it as a computer print-out [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008]; how many unqualified fools like me would rush in when Bhaskaran et al. talk of “the familiar Cox hazard ratio”, “Kaplan-Meier methods”, “Poisson-based model”, and use of Stata version 10 for the statistical analysis? Yet the weakness of the whole approach is separate from any possible technical flaws: assertions and assumptions are made that are demonstrably wrong. [Which is not to deny that specialists might well also question the applicability of any one or all of those mentioned techniques to this particular task. Specialists might also want more information than the statement that “The median duration of follow-up was 6.3 years (range, 1 day to 23.8 years), with 16 344 individuals (99%) having more than 1 month of follow-up” — what exactly does “follow-up” mean here? Were not all of these patients monitored throughout the study?]
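For readers put off by the jargon: the Kaplan-Meier method itself is not mysterious. It is a simple product-limit bookkeeping over follow-up times, in which “censored” subjects (those still alive when last seen) are dropped from the at-risk count without registering a death. A minimal sketch with invented follow-up data (nothing here is drawn from the CASCADE dataset):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times:  follow-up time per subject
    events: True if a death was observed, False if the subject was censored
            (lost to follow-up, or still alive when the study ended)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n = at_risk  # number still at risk just before time t
        # Group all subjects sharing this follow-up time.
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            at_risk -= 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n  # the product-limit step
            curve.append((t, survival))
    return curve

# Hypothetical follow-up times (years) and death indicators:
times = [1, 2, 2, 3, 5, 6, 8, 9]
events = [True, True, False, True, False, True, False, True]
print(kaplan_meier(times, events))
```

Each step multiplies in the fraction surviving that interval. The published analyses run essentially this computation, at scale and wrapped in regression modeling; the objection in the text is not to the arithmetic but to the assumptions fed into it.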

Bhaskaran et al. ascribe to antiretroviral drugs the lower mortality in the HAART era compared to the pre-HAART era. It is at least equally plausible that this reduction in “excess mortality” was owing to the abandonment of high-dose AZT monotherapy. After all, deaths from AIDS in the United States about doubled from 1987 to 1990, and increased by more than another 50% from 1990 to 1995, dropping back then to 1987 levels (National Center for Health Statistics, Table 42, p. 236, in “Health, United States, 2007”; “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008; June 30, “Disproof of HIV/AIDS Theory”).

Bhaskaran et al. themselves admit—albeit only in by-the-way fashion in concluding comments—that their analysis is rotten at the core: “it is likely that HIV-infected individuals in our study differ from the general population in other ways”. Yes indeed! Or rather, it’s not that the studied group (HIV-positives) is “likely” to differ in multiple ways from the “control” group (HIV-negative general population), it’s a certainty that they do. On the mainstream view of HIV/AIDS, HIV-positive people have been exposed to health risks that others have not, bespeaking significant behavioral differences. On my view and that of many others, “HIV-positive” is—like a fever—an indication that the immune system has reacted against something or other, that HIV-positive people have been exposed to health challenges that HIV-negative people have not. So differences in mortality between these two groups may have nothing at all to do with “HIV”.

The gross ignorance of HIV/AIDS matters displayed in this article is illustrated by the statement, also by-the-way in the concluding comments, that “race/ethnicity are also likely to differ among HIV-infected persons”. How could these authors not know that “HIV” is found disproportionately among people of African ancestry?

Here is a further illustration of incredible ignorance of HIV/AIDS matters: “Interestingly, we found that by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population . . . further research will be needed before our finding of no excess mortality in the first 5 years of infection in 2004-2006 can be generalized beyond those diagnosed early in infection”.
Almost from the very beginning, one of the salient mysteries about the lentivirus (slow virus) HIV has been the “latent period” between presumed infection by HIV and the appearance of any symptoms of illness. That latent period is nowadays agreed to be about 10 years. Therefore there should be no excess mortality at all for an average of 10 years after infection among people not being treated with HAART, and of course for much longer if HAART staves off AIDS. Unless, of course, “HIV” is causing death in symptom-less people, so that deaths from “HIV disease” during the latent period are deaths without apparent cause. It seems unlikely that such a phenomenon would long have gone unnoticed. Here is a typical representation of the supposed progression from infection to illness and death:

The death rate shown during the putative latent period is flat and runs along the baseline.

All this makes the authors’ modest admission that “Our study has some limitations” more than a little inadequate. The many obvious deficiencies in this article, notably the ignorance of latent period, reflect unkindly not only on the authors but also on the journal, its editorial procedures, and the lack of competence or diligence of the “peer reviewers” who presumably were engaged to comment expertly on whether this deserved to be published. What on earth has happened to medical “science”? Or was it always so defective in such obvious ways?

As to Golden Fleece Awards, there is the finding that “those exposed through IDU [were] at significantly higher risk than those exposed through sex between males”. Yes indeed, drugs are not good for you! But then it has been routine among HIV/AIDS experts to discount the risks of illegal drugs by comparison to those of “HIV”, to the extent that there are continuing campaigns to provide drug addicts with fresh, clean needles; and occasional surprise is expressed that injecting drug users typically have health problems [COCAINE AND HEROIN AREN’T GOOD FOR YOU! — a Golden Fleece Award, 13 June 2008]. In the end, Bhaskaran et al. do seem to be aware of this: “It is unlikely that HIV infection is the only factor leading to increased mortality rates among those exposed through IDU” because of, among other things, “the direct risks of substance abuse”.

No less surprising (to Bhaskaran et al., that is) than the poorer health of drug addicts is the finding that older people are less able than younger people to stave off health challenges: “Older age at seroconversion was associated with a higher risk of excess mortality . . . there was a clear gradient of increasing risk of excess mortality with increasing age at seroconversion”.
In other words, the older you are when you “seroconvert”—become infected, according to mainstream views, or encounter some sort of health challenge, according to Perth-Group-type views—the more likely you are to succumb, compared to people of the same age who have not encountered the same challenge. Who would have thought it?

Yet another finding worthy of attention was that “Females were at consistently lower risk [of dying] than males”. On the one hand, even most lay people are aware that women have a greater life expectancy than men (in most countries and in all developed ones). On the other hand, might not this finding have stimulated some thought among the authors as to whether “HIV-positive” really signifies infection by a virus?


Here, as so often, some of what I’ve written might appear to accept that HIV is infectious and causes illness. That is not so; I am merely pointing out that even on its own terms, the HIV/AIDS view would still be wrong about the claimed benefits of antiretroviral drugs: there is no evidence that they prolong life. At best, as Dr. Juliane Sacher has pointed out, they might bring a temporary benefit by acting as antibiotics, for they certainly are inimical to life.


ACKNOWLEDGMENT: I am grateful to Fulano de Tal (a commonly used pseudonym, compare “John Doe”) who pointed out that an earlier version of this post included speculations based on US data that are irrelevant here since the CASCADE study includes only European cohorts. I also added the graph in response to one of “Tal”‘s comments, because I was not able to put the graph into my response.

Posted in antiretroviral drugs, experts, Funds for HIV/AIDS, HIV absurdities, HIV and race, HIV as stress, HIV does not cause AIDS, HIV varies with age, HIV/AIDS numbers, M/F ratios | 20 Comments »