HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Archive for September, 2008

No HIV “latent period”: dotting i’s and crossing t’s

Posted by Henry Bauer on 2008/09/21

“How ‘AIDS Deaths’ and ‘HIV Infections’ Vary with Age — and WHY” [15 September 2008] cited a blog post (1), an oral presentation (2), and a journal article (3) offering evidence that disproves central aspects of HIV/AIDS theory, namely, that there’s a “latent period” of roughly 10 years between “infection by HIV” and symptoms of illness and that the length of this latent period and the time from illness to death have been greatly extended by antiretroviral drugs, particularly since 1996 and HAART (“highly active antiretroviral treatment”).

According to HIV/AIDS belief, there should now be at least a couple of decades, on average, between infection and death. The data presented earlier showed that the age at which testing positive is most common is indistinguishable from the age at which death is most common — within the resolution or precision of the data, which were available in the published sources only for non-overlapping 10-year ranges (25-34, 35-44, etc., and 20-29, 30-39, etc.). The associated uncertainty of 5-10 years is immaterial when it’s a matter of detecting a claimed interval that exceeds a couple of decades, so I had left unaddressed a number of complications and corollary points. Discussing those can serve to underscore the strength of this line of argument.

Perhaps the most obvious questions to address:
1. Do the data from HIV tests and about deaths come from comparable population samples? Are the respective rates calculated on the same basis?
2. The age of first recorded positive HIV-test is not the same as the age of first becoming “infected”, i.e. HIV-positive.

1. Are the results from HIV tests comparable directly to the death statistics?
Though test results expressed as rates of testing positive may seem comparable to rates of death, there’s an important difference. If 1% of those in a given age-range of some group test positive in, say, 1985, that rate is proportional to the number of individuals who test positive and therefore can be compared directly to the rate in the same age-range in a different year — 1990, 2000, whatever —, or to the rates in other age-ranges. However, one can’t compare death rates in the same manner, because those are reported (typically per 100,000) for the specific age-range in the whole population in a given year and therefore depend on the distribution of ages within the population and how it changes over time; for example, in 1985 there were 40,000 15-24-year-olds, 41,700 25-34-year-olds, and 31,700 35-44-year-olds, whereas in 1993 the numbers were respectively 36,000, 41,900, and 40,800 (4). When the only concern is the age range in which the rate of testing positive or of death reaches a maximum, this complication is not important because the numbers in the relevant age ranges — between 15 and 45 — are not vastly different and those differences hardly impinge on the much greater variations in the death rates. But if one wants to compare the whole age distributions and not just the ages at which a maximum occurs, then one ought to use numbers of deaths and not rates.
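The difference between comparing rates and comparing numbers can be made concrete with a toy calculation. The population figures are the ones cited above (apparently in thousands); the death counts are purely hypothetical, chosen only to show that identical counts yield different per-100,000 rates when the age distribution of the population shifts:

```python
# Toy illustration: the same death COUNTS give different death RATES
# (per 100,000) when the population's age distribution shifts.
# Population figures (in thousands) are those cited in the text;
# the death counts are purely hypothetical.

pop_thousands = {
    1985: {"15-24": 40_000, "25-34": 41_700, "35-44": 31_700},
    1993: {"15-24": 36_000, "25-34": 41_900, "35-44": 40_800},
}
deaths = {"15-24": 2_000, "25-34": 8_000, "35-44": 8_000}  # hypothetical counts

for year, pops in pop_thousands.items():
    rates = {age: deaths[age] / (n * 1_000) * 100_000 for age, n in pops.items()}
    print(year, {age: round(r, 1) for age, r in rates.items()})
```

With the same 8,000 deaths among 35-44-year-olds in both years, the rate per 100,000 is lower in 1993 simply because that age group had grown; the counts, not the rates, are what stay comparable across years.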

A second question about these comparisons stems from the fact that the death statistics refer to the United States as a whole, whereas the HIV-tests were carried out at a variety of public testing sites rather than in an all-encompassing national survey: about 10 million tests in the United States were recorded for 1995-98 from clinics for drug treatment, family planning, venereal diseases, and tuberculosis, and from pre-natal or obstetric clinics, HIV counseling and testing sites, prisons, colleges, miscellaneous health departments, and private doctors (5). About 143,000 of those tests were positive during these 4 years, in other words, about 36,000 annually — though some may have been repeated positives already reported in an earlier year. Now, independent CDC Surveillance Reports for these same 4 years (6) recorded 56,000 newly diagnosed infections from the 32 States with confidential HIV reporting, 14,000 annually; extrapolated to all 50 States, there would have been very approximately 85,000 to 90,000 such reports, say 22,000 annually. Thus the numbers of positive tests from the public sites exceeded the national report of new diagnoses by about 65% (36/22 = 1.64); so even if as many as 40% (14/36) of the 143,000 positive tests at public sites had been repeat tests, they would still represent a very good sample of all newly reported positive tests in the United States during those years and would therefore make an appropriate comparison with deaths for the whole United States.
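The arithmetic of this comparison can be laid out explicitly, using the rounded figures cited above:

```python
# Checking the arithmetic of the public-site vs national comparison,
# using the rounded figures cited in the text.
public_per_year = 143_000 / 4      # ~36,000 positive tests annually at public sites
national_32 = 56_000 / 4           # 14,000/yr from the 32 confidential-reporting states
national_50 = 22_000               # rough extrapolation to all 50 states

excess = 36_000 / national_50      # ratio of public-site positives to national diagnoses
print(f"public-site positives exceed national diagnoses by {excess - 1:.0%}")

repeat_fraction = 14 / 36          # the ~40% mentioned in the text
non_repeats = 36_000 * (1 - repeat_fraction)
print(f"non-repeat positives per year: ~{non_repeats:,.0f}")  # matches the national total
```

Even under the generous assumption that two in five public-site positives were repeats, the remainder still roughly equals the national total of newly reported positives.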

Furthermore, a significant proportion of those tested at the public sites are likely to represent people most at risk, which would also be those among whom most of the “HIV disease” deaths could be expected to come. The 143,000 positive tests from 10 million tests at the public sites correspond to a rate of about 1.4%. For the United States as a whole, between 1989 and 2005 the estimated prevalence of HIV-positives was about 1 million, a rate of about 0.4% (7-9). For 2003, the CIA Fact Book gives 0.6%. Thus the people tested at the public sites were at higher risk of HIV than the general population, by a factor of between 2 and 4.

Altogether, then, the positive tests recorded in 1995-98 at public sites make an appropriate comparison with deaths from HIV disease in subsequent years.

2. The age of first recorded positive HIV-test is not the same as the age of first becoming HIV-positive.

In (1-3), ages of death were compared with ages of testing HIV-positive. But those are not the ages of initial infection. Numbers for “first positive test” at each age represent some combination of infections incurred during some preceding period: to the numbers for each age of “first positive test” there corresponds a distribution of actual infections at younger ages. The age distribution of first infection therefore begins at earlier ages than the first-positive-test distribution; however, it ends at the same age since first positive test cannot precede infection; therefore, the age distribution of infections is broader, covering a wider range of ages than the age distribution of positive tests (Figure 1). Furthermore, the average positive test will come no later than the end of the average latent period — that is, when symptoms of illness appear — so the age range for infections will be something like five-to-ten years broader (at the base) than the age distribution of tests. (This is again of the same order of magnitude as the uncertainty in comparing non-overlapping 10-year intervals, and was immaterial when testing the claim that deaths should have been shifted to greater ages by at least a couple of decades.)

The age at which antiretroviral treatment begins cannot precede the age of first positive test, so Figure 1 shows deaths also beginning no earlier; but this is perhaps overly conservative, since only HIV-positives with low CD4 counts, or already manifesting illness, will begin treatment at the time of first positive test. It seems reasonable to assume that the number of “HIV-disease” deaths before a first positive test is negligibly small.

Now, for each age of beginning treatment, there corresponds a range of expected ages of death, centered on the average life expectancy. A few people are even likely to die soon after treatment begins, but about half will survive longer than the average expectancy. Therefore the age distribution of deaths will be very much broader than the distribution of infections, which is itself broader than the distribution of first positive tests. Here’s a purely schematic illustration (in actuality, these curves are almost certainly not bell-shaped):

Figure 1
Schematic representation of expected relationships
between infection, positive HIV-test, and death
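The relationships Figure 1 sketches can be mimicked with a small simulation. Every distribution and parameter below is invented purely for illustration (gaussians and a uniform lag are used only for convenience; as noted above, the real curves are almost certainly not bell-shaped):

```python
import random

random.seed(1)

# Purely schematic simulation of Figure 1's relationships.
# All distributions and parameters are invented for illustration only.
N = 100_000
test_ages = [random.gauss(38, 5) for _ in range(N)]   # age at first positive test

# To each first positive test there corresponds an infection at some EARLIER age:
infection_ages = [t - random.uniform(0, 10) for t in test_ages]

# Death follows the first positive test after a widely spread survival time:
death_ages = [t + max(0.0, random.gauss(10, 8)) for t in test_ages]

def width(xs):
    """Standard deviation, as a crude measure of a distribution's width."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"tests:      width ~{width(test_ages):.1f} yr")
print(f"infections: width ~{width(infection_ages):.1f} yr")  # broader than tests
print(f"deaths:     width ~{width(death_ages):.1f} yr")      # broader still
```

Under these assumptions, the reconstructed infection ages span a broader range than the test ages, and the death ages a broader range still, just as the schematic suggests.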

Even this is a simplified view, however, because it considers only deaths of people who tested positive in a particular year. Actually, deaths in any given year will stem from HIV-positive individuals who had been infected during a wide range of years. Could this complication nullify the expectations that the age distribution of deaths must be broader than the distribution of infections, with the peak age for deaths shifted by a couple of decades from the peak age of infections?


For the age distribution of deaths stemming from a given cohort “H” (people infected in a given year) to be significantly distorted by deaths from earlier cohorts “A, B, C, . . . ” , the numbers of deaths from those earlier cohorts would have to be comparable to or greater than the deaths among cohort H and would have to be distributed in age in some radically different manner. Yet year after year, the age distribution of deaths has changed very little in shape; see Figures 2a and 2b, which are based on the data in Table 2 of an earlier post.

Figure 2a
Age Distributions of “AIDS” Deaths, 1982-2002

Numbers are correct for all years except 1982; for the latter, divide by 100.

The very small differences in shape of the whole distribution are more obvious in Figure 2b, where the curves are of more nearly equal size.

Figure 2b
Age Distributions of “AIDS” Deaths
normalized to comparable scales

(Not all years are shown in Figure 2b because the curves overlap so much. The stated widths were all measured at half the height of each curve, not at the places where they are shown here. At half peak-height, the widths of all the curves are between 17 and 23 years.)
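The width-at-half-height measure used in that parenthetical note can be made explicit. This sketch computes the full width at half maximum of a tabulated single-peaked curve by linear interpolation; the curve tabulated here is hypothetical, not taken from Figure 2b:

```python
def fwhm(ages, counts):
    """Full width of a single-peaked tabulated curve at half its peak
    height, using linear interpolation between tabulated points."""
    half = max(counts) / 2

    def crossing(pairs):
        # first pair of adjacent points that straddles the half-height level
        for i, j in pairs:
            if (counts[i] - half) * (counts[j] - half) <= 0 and counts[i] != counts[j]:
                frac = (half - counts[i]) / (counts[j] - counts[i])
                return ages[i] + frac * (ages[j] - ages[i])
        return None

    n = len(ages)
    left = crossing([(i, i + 1) for i in range(n - 1)])          # scan from the left
    right = crossing([(i, i - 1) for i in range(n - 1, 0, -1)])  # scan from the right
    return right - left

# Hypothetical single-peaked age distribution of deaths:
ages = [15, 25, 35, 45, 55, 65, 75]
deaths = [200, 2_000, 6_000, 5_000, 1_500, 500, 100]
print(f"width at half peak-height: {fwhm(ages, deaths):.1f} years")
```

Applied to curves like those in Figure 2b, this is the measurement that yields widths in the stated 17-23-year range.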

Not only are the death distributions little changed over the years; so also are the data from public testing sites. A summary for the years 1999-2004 (10) reports an overall HIV-positive rate of 1.43%, from >11 million tests, the same as for 1995-98; and the shape of the distribution of positive tests by age is very similar; see Figure 3. The ages for which the 1999-2004 data are given cover a greater range than in the 1995-98 reports, however, making an even better comparison with deaths, so both are shown in Figure 3, which compares HIV-positive reports with deaths for 2004. (The highest age for which HIV tests were reported in 1995-98 is ≥50; in 1999-2004 it is ≥75; the lines to those points are broken to indicate that the angle of this segment is only approximate. Omitted are HIV-test data for ages below the teens, where reported death rates were too small to be estimated accurately.)

Figure 3

The curves have been “normalized” to about equal peak-heights for readier comparison. The “deaths” distribution is significantly narrower than the “tests” distribution, which is impossible under HIV/AIDS theory. And, once again, the age distribution of deaths peaks at about the same age as the distributions of positive tests: there’s no sign of a latent period or of a drug benefit.

Official statistics on deaths from “HIV disease” are clearly incompatible with the view that “HIV” causes AIDS. Rather, as also argued elsewhere (11, 12), testing “HIV-positive” is merely a non-specific indication of some sort of physiological disturbance, which may not be infection by a pathogen and which is specifically not infection by a retrovirus that destroys the human immune system.

1    How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY, 15 September 2008;
2    “Disproof of HIV/AIDS theory”, Annual Meeting of the Society for Scientific Exploration, Boulder (CO), June 2008
3    “Incongruous age distributions of HIV infections and deaths from HIV disease: Where is the latent period between HIV infection and AIDS?”, Journal of American Physicians & Surgeons 13 [#3, Fall 2008] 77-81
4    “Health, United States, 1995”
5    Centers for Disease Control and Prevention, HIV counseling and testing in publicly funded sites; for 1995, published September 1997; for 1996, published May 1998; for 1997 and 1998, published 2001.
6    Centers for Disease Control and Prevention, HIV/AIDS surveillance report; 7 #2 (1995); 8 #2 (1996); 9 #2 (1997); 10 #2 (1998).
7    Centers for Disease Control and Prevention, Current trends estimates of HIV prevalence and projected AIDS cases: Summary of a workshop, October 31-November 1, 1989. Morbidity and Mortality Weekly Report 39 (#7, 1990) 110-2, 117-9.
8    M. H. Merson, Slowing the spread of HIV: agenda for the 1990s, Science 260 (1993) 1266-8.
9    M. K. Glynn & P. Rhodes, 2005. Estimated HIV prevalence in the United States at the end of 2003 (2005 National HIV Prevention Conference, Atlanta, GA, June 14) JAMA, 294 (2005) 3076-80.
10    Centers for Disease Control and Prevention. HIV counseling and testing at CDC-supported sites — United States, 1999-2004. 2006: 1-33.
11    The Origin, Persistence and Failings of HIV/AIDS Theory (McFarland 2007)
12    HAART saves lives — but doesn’t prolong them!?, 17 September 2008


HAART saves lives — but doesn’t prolong them!?

Posted by Henry Bauer on 2008/09/17

Death rates are down, yet AIDS patients are not living longer! Why not?

(This is a long post, and includes at least one Table that is too large to be viewed conveniently in the same window as the text. If you prefer to read it as a pdf, here it is: haartdoesnt-prolong-lives)

In the early 1980s, a diagnosis of “AIDS” typically had been followed by death within a year or two. At that time, diagnosis was on the basis of Kaposi’s sarcoma or of manifest opportunistic fungal infections — Pneumocystis carinii pneumonia or candidiasis.

Following the adoption of “HIV-positive” as a necessary criterion for an AIDS diagnosis, an increasing range of non-opportunistic infections and other illnesses came to be included as “AIDS-defining” (for instance, tuberculosis, wasting, cervical cancer, etc.) — see Table 1; the most consequential changes were in 1987 and in 1993. The only basis for them was that people with some illnesses were quite often “HIV-positive”, in other words, there were correlations with “HIV-positive” status, not any proof that “HIV encephalopathy”, “HIV wasting disease”, or other additions to the list of “AIDS-defining” conditions were caused by “HIV”. Indeed, there could not be such proof since mechanisms by which “HIV” could cause illness have not been demonstrated, and they remain to this day a matter for speculation — even over the central issue of how HIV (supposedly) kills immune-system cells. An absurd consequence of these re-definitions, often cited by HIV/AIDS skeptics, is that a person suffering indisputably from tuberculosis (say) might or might not be classed as an HIV/AIDS patient, depending solely on “HIV” tests.

Table 1

(from Nakashima & Fleming, JAIDS 32 [2003] 68-85; numbers in parentheses after the dates refer to sources cited in that article)

As “AIDS” was being diagnosed increasingly among people less desperately ill than the original AIDS victims, survival time after diagnosis became longer.

The 1993 change extended the umbrella of “AIDS patient” to cover people with no manifest symptoms of ill health; in ordinary parlance, they weren’t ill, and consequently the interval between an AIDS diagnosis and death was bound to increase dramatically. This re-definition also expanded enormously the number of “AIDS cases”: about 70% of them are not ill (Walensky et al., Journal of Infectious Diseases 194 [2006] 11-19, at p. 16).

In 1996, earlier treatment for AIDS with high-dose reverse transcriptase inhibitors like AZT (ZDV, Retrovir) was increasingly superseded by “highly active antiretroviral treatment” (HAART), which has been generally credited with the prolonging of lives by a considerable number of years. According to the Antiretroviral Therapy Collaboration (Lancet 372 [2008] 293-99), life expectancy for 20-year-old HIV-positives had increased by 13 years between 1996 and 2005 to an additional 49 years; for 35-year-olds, the life expectancy in 1996-99 was said to be another 25 years. According to Walensky et al. (op. cit.), survival after an AIDS diagnosis now averages more than 14 years. Yet another encomium to antiretroviral drugs claims that “by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population” (Bhaskaran et al., JAMA 300 [2008] 51-59).

There is general agreement, then, that antiretroviral treatment has yielded substantial extension of life to people already diagnosed with AIDS. The interval between an AIDS diagnosis and death should now be measured in decades rather than a year or two.

As with so many other contentions of orthodox HIV/AIDS belief, however, this expectation is contrary to actual fact. The greatest risk of death from “HIV disease” comes at ages in the range of 35-45, just as at the beginning of the AIDS era. There was no dramatic increase in median age of death after 1996 following the adoption of HAART, see Table 2:

Table 2
Age Distributions of AIDS Diagnoses and AIDS Deaths, 1982-2004
from annual “Health, United States” reports

The slow, steady increase in median ages of AIDS diagnosis and of death shown in Table 2 is pictured in Figure 1, below. The slope of the curve for median age of death shows no pronounced turn upwards following 1996 — even though the annual numbers of deaths decreased by more than half between 1994 and 1998. The somewhat steeper increase in median age of death from 1997 to 1999 and the parallel sharper increase in median age of AIDS diagnosis are both artefacts stemming from re-calculation of numbers under a revised International Diagnostic Code, see asterisked footnote to Table 2. The other slight discontinuity in the curve, around 1993, reflects the CDC’s revised definition of AIDS to include asymptomatic HIV-positive people with low CD4 counts.

Figure 1

The uppermost curve, the interval between median age of diagnosis and median age of death, underscores that over the whole course of the AIDS era, no episode brought a significant increase in median age of death, other than the drastic expansion of definition in 1992-93. (Of course, the difference between the median ages for diagnosis and death in any given year cannot be equated with the interval between diagnosis and death for any given individual; the significant point in Figure 1 is just that median ages have changed at a gradual and almost constant rate from the very beginning of the AIDS era. HAART changed the death rate dramatically, but not the ages at which people died.)

This constitutes a major conundrum, a paradox: If HAART has extended life-spans by the claimed amounts, then why has not the median age of death increased dramatically? Why were so many AIDS patients still dying around age 45 in 2004?

The resolution of this conundrum is that the median ages of death are based on actually recorded deaths, whereas the claimed benefits of HAART were calculated on the basis of models incorporating many assumptions about the course of “HIV disease” and relying on contemporaneous death-rates [Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t, 4 September 2008; CDC’s “model” assumptions (Science Studies 103a), 6 September 2008].

The numbers for total AIDS cases and for deaths, shown graphically in Figure 1, are listed in Table 3. There, column III shows the numbers of survivors in any given year, calculated from the difference between cases and deaths in earlier years plus new cases in the given year. Column IV has the percentage of survivors who died each year.

Table 3
Total AIDS cases, deaths, and
survivors “living with HIV/AIDS”,
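The bookkeeping behind columns III and IV can be sketched as follows. Since Table 3's actual numbers are not reproduced here, the yearly counts and the starting pool of survivors are hypothetical stand-ins, chosen only to show the mechanics of the calculation:

```python
# Sketch of the bookkeeping behind Table 3's columns III and IV.
# All yearly counts below are HYPOTHETICAL stand-ins, not Table 3's data.
years     = [1996, 1997, 1998, 1999]
new_cases = [68_000, 60_000, 48_000, 45_000]
deaths    = [38_000, 18_000, 17_000, 16_000]

survivors_prev = 200_000   # hypothetical survivors entering the first year
for y, c, d in zip(years, new_cases, deaths):
    survivors = survivors_prev + c     # column III: earlier survivors + new cases
    pct_dying = 100 * d / survivors    # column IV: % of survivors dying this year
    print(f"{y}: survivors {survivors:,}, {pct_dying:.1f}% died")
    survivors_prev = survivors - d     # carry forward after this year's deaths
```

With deaths roughly halving from the first year to the second, the percentage dying among survivors halves as well, which is the pattern the text describes for 1996-97.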

From 1996 to 1997, the annual numbers of deaths halved, and of course the percentage of deaths among survivors also halved. Since 1997, only between 2.8 and 5.7% of living “HIV/AIDS” patients have been dying annually, which is in keeping with the claims of life-saving benefits made for HAART on the basis of death rates and computer models. But that conflicts with the age distribution of deaths, which has remained without major change during those same years.

If AIDS patients are now enjoying a virtually normal life-span, who are the people still dying at median age 45? If HAART is saving lives, why aren’t those lives longer?

The reason is that testing “HIV-positive” is actually irrelevant to the cause of death. It does not indicate infection by a cause of illness; it is an indicator analogous to fever. Many conditions may stimulate a positive “HIV” test: vaccination against flu or tetanus, for example; or tuberculosis; or drug abuse; or pregnancy; and many more (Christine Johnson, “Whose antibodies are they anyway? Factors known to cause false positive HIV antibody test results”, Continuum 4 (#3, Sept./Oct. 1996)).

The likelihood that any given individual exposed to one of those conditions will actually test positive seems to correlate with the seriousness of the challenge to health; and it varies in a predictable manner with age, sex, and race (The Origin, Persistence and Failings of HIV/AIDS Theory). In any group of people, those who test “HIV-positive” are more likely to be or to become ill, so they are also more likely to die than those who do not test positive: just as in any group of people, those who have a fever are more likely to be ill and to die than those who do not have a fever. Also, of course, a fever does not necessarily presage death, nor does “HIV-positive” necessarily presage death; and in any group of people, some will die who never tested positive or who never had a fever. There’s a strong correlation between illness, death, and fever, but it’s not an inevitable one and fever is not the causative agent; there’s a strong correlation between illness, death, and “HIV-positive”, but it’s not an inevitable one and “HIV” is not the causative agent.

So: Among people “living with HIV/AIDS”, those who happen to die in any given year are simply ones whose “HIV-positive” status was associated with some actually life-threatening illness; and their ages were distributed just as ages are distributed in any group of “HIV-positive” people, with a median age at around 40, with minor variations depending on race and sex. For example, in 2000, there were more than 350,000 people “living with HIV/AIDS” (Table 3) whose median age was somewhere around 39.9 (Table 2: 39.9 was the median age of new diagnoses in that year. Survivors from the previous year, when the median age had been 39.4, would have had a median age — one year later — somewhere between 39.4 and 40.4; not as much as 40.4, because those dying in 1999 had a higher median age than those who didn’t die.) Of the 350,000 in 2000 with median age 39.9, 3.9% (14,457, Table 3) died; and the median age of those dying was 42.7. It’s only to be expected, of course, that — among any group of people at all — those who die have a somewhat higher average age than those who don’t die in that year.
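That bracketing argument for the survivors' median age can be checked with a toy cohort; the starting ages and the age-skewed chance of dying during the year are both invented:

```python
import random

random.seed(0)

# Toy check of the bracketing argument: survivors of a cohort with
# median age ~39.4, aged by one year, end up with a median between
# the old median and the old median plus one, because deaths skew older.
# All numbers here are invented.
cohort = [random.gauss(39.4, 8) for _ in range(100_001)]

def median(xs):
    return sorted(xs)[len(xs) // 2]

# a small chance of dying during the year that rises with age:
survivors = [a for a in cohort if random.random() > 0.02 * max(0.0, a - 30) / 10]
aged = [a + 1 for a in survivors]   # the survivors, one year later

m0, m1 = median(cohort), median(aged)
print(f"cohort median {m0:.1f} -> survivors' median a year later {m1:.1f}")
```

Because the deaths remove proportionally more older members, the survivors' median a year later lands above the old median but below the old median plus one, exactly the interval claimed in the text.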

The rate of death among “HIV/AIDS” patients declined markedly from 1987 to 1992 simply because “HIV/AIDS” was being increasingly defined to include illnesses less life-threatening than the original AIDS diseases of Kaposi’s sarcoma and established opportunistic fungal infections. Another sharp drop in death rates came after 1992 when people who were not even ill came to be classed as “HIV/AIDS” patients and comprised about 70% of such patients. The last sudden drop in death rates, with the introduction of HAART in 1996, resulted not from any lifesaving benefit of HAART but because the latter superseded the earlier, much more toxic, high-dose regimens of AZT. The supposed benefits of HAART are to decrease viral load and allow CD4 counts to rise; but these effects come slowly and cannot explain a sudden improvement in clinical condition sufficient to bring a halving of deaths from one year to the next; on the other hand, stopping the administration of a highly toxic substance can certainly bring numbers of deaths down immediately. These data indicate, therefore, that something like half (at least) of “HIV/AIDS” deaths from 1987 through 1996 — some 150,000 — are attributable to the toxicity of AZT.

Through all those changes in death rates, drastic as well as slower, the median age of the “HIV-positive” people who died, for any one of a large variety of reasons, remained about the same as it had always been. “HIV/AIDS” patients are not living longer, despite the change in death rate from an annual 60% or more to 3% or less.

As I said in a previous post [How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY, 15 September 2008], this paradox follows “from the manner in which HIV tests were designed and from the fact that AIDS was defined in terms of ‘HIV’”. The genesis of the tests has been described lucidly by Neville Hodgkinson (“HIV diagnosis: a ludicrous case of circular reasoning”, The Business, 16/17 May 2004, pp 1 and 4; similar in “The circular reasoning scandal of HIV testing”, thebusinessonline, 21 May 2006):

“It never proved possible to validate the [HIV] tests by culturing, purifying and analysing particles of the purported virus from patients who test positive, then demonstrating that these are not present in patients who test negative. This was despite heroic efforts to make the virus reveal itself in patients with Aids [sic, British usage] or at risk of Aids, in which their immune cells were stimulated for weeks in laboratory cultures using a variety of agents.
After the cells had been activated in this way, HIV pioneers found some 30 proteins in filtered material that gathered at a density characteristic of retroviruses. They attributed some of these to various parts of the virus. But they never demonstrated that these so-called ‘HIV antigens’ belonged to a new retrovirus.
So, out of the 30 proteins, how did they select the ones to be defined as being from HIV? The answer is shocking, and goes to the root of what is probably the biggest scandal in medical history. They selected those that were most reactive with antibodies in blood samples from Aids patients and those at risk of Aids.
This means that ‘HIV’ antigens are defined as such not on the basis of being shown to belong to HIV, but on the basis that they react with antibodies in Aids patients. Aids patients are then diagnosed as being infected with HIV on the basis that they have antibodies which react with those same antigens. The reasoning is circular.”

“HIV” tests were created to react most strongly to substances present in the sera of very ill gay men whose average age was in the late 30s (Michelle Cochrane, When AIDS began: San Francisco and the making of an epidemic, Routledge, 2004; cited at pp. 188-92 in The Origin, Persistence and Failings of HIV/AIDS Theory). That’s why people who are in some manner health-challenged are more likely than others to test “HIV-positive”, especially if they are aged around 40. Evidently the particular molecular species picked up by “HIV” tests are generated most prolifically around age 40, especially under the stimulation of various forms and degrees of physiological stress. That’s why the median ages for testing “HIV-positive” and for being diagnosed with AIDS (criterion: positive HIV test) and for dying from HIV/AIDS  (criterion: positive HIV test) are all the same, in the range 35-45.

Perhaps some of what “HIV” tests detect are so-called “stress” or “heat-shock” proteins. That gay men so often test “HIV-positive” might have to do with molecular species associated with “leaky gut syndrome” or other consequences of intestinal dysbiosis [What really caused AIDS: slicing through the Gordian knot, 20 February 2008].

Those are speculations, of course. What is not speculative, however, is that HAART does not prolong life* even as it lowers death rates. It is also clear that testing “HIV-positive” is no more than an indicator of some form of physiological challenge, not necessarily infection by a pathogen and specifically not infection by a retrovirus that destroys the human immune system.

Even as it is obvious that HAART does not prolong life on the average, there are reliable testimonies that individuals have experienced clinical improvement on HAART, often dramatic and immediate. But, again, such immediate benefit cannot be the result of antiretroviral action, and likely reflects an antibiotic or anti-inflammatory effect, as suggested by Dr. Juliane Sacher [Alternative treatments for AIDS, 25 February 2008].


How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY

Posted by Henry Bauer on 2008/09/15

According to orthodox HIV/AIDS belief, antiretroviral treatment has greatly extended the lifespan of “HIV-positive” individuals. It follows that the ages at which people typically die of “HIV disease” should have increased since AIDS was first noted in the early 1980s. Yet the greatest risk of dying “from HIV disease” remains just about the same as it was two decades ago: around age 40 ± 5 years. There is no sign that antiretroviral drugs have extended life.

Another shibboleth of HIV/AIDS theory is that infection by HIV is followed by a latent period averaging 10 years before symptoms of illness present themselves; and this pre-symptomatic period is supposed to have been lengthened by contemporary antiretroviral treatment. It follows that the ages at which people die from “HIV disease” should be much greater than the ages at which they become “infected”. Yet the ages at which people most often test “HIV-positive” are the same as the ages at which people are most likely to die of “HIV disease”, in the range of 40 ± 5 years.  There is no indication of a latent period, nor that antiretroviral drugs have extended it.

There are dramatic differences in frequency of testing HIV-positive between members of different racial groups: black >> Native American > white > Asian. There are similarly dramatic differences, of similar magnitude, in the respective death rates. Yet the variations with age are incongruous: blacks survive HIV/AIDS disease to significantly greater ages than do whites, Native Americans, or Asians. An infectious disease that targets some racial groups more than others also allows members of the most affected groups to survive longer?!

I first noted these points in a blog [“HIV disease” is not an illness, 19 March 2008], and later in a talk to the Society for Scientific Exploration (“Disproof of HIV/AIDS theory”). A journal article making these points has now been published (“Incongruous age distributions of HIV infections and deaths from HIV disease: Where is the latent period between HIV infection and AIDS?”, Journal of American Physicians & Surgeons 13 [#3, Fall 2008] 77-81).  The data showing that “HIV infections” peak in the same age range as deaths from “HIV disease” are in Table 5 of the journal article and Table E of the blog post:

and a graphical representation is in the PowerPoint presentation:

It is an additional curiosity that the greatest risk of dying from an infectious disease should be for people who are in prime years of adulthood — infections are typically most dangerous for the very young and for the elderly, those for whom childhood vaccinations and flu and pneumonia vaccinations are most recommended.

In the limited space of the journal article and the limited time of an oral presentation, I had to leave unaddressed a number of complications and corollary points, which I’ll address in subsequent posts, for instance, that the age of first recorded positive HIV-test is not the same as the age of first becoming HIV-positive.

Perhaps the most interesting and consequential fact is that the age distributions for positive HIV-tests, for AIDS diagnoses, and for HIV/AIDS deaths all peak in this age range of about 35-45 and have roughly the same shape. That makes no sense under HIV/AIDS theory, but makes perfect sense if “HIV-positive” is merely a response to some non-specific physiological challenge. The concordance of these three age distributions, over the course of more than two decades, follows from the manner in which HIV tests were designed and from the fact that AIDS was defined in terms of “HIV”.

Posted in antiretroviral drugs, HIV and race, HIV as stress, HIV does not cause AIDS, HIV tests, HIV varies with age, HIV/AIDS numbers | Tagged: , , , | Leave a Comment »

The Research Trough — where lack of progress brings more grants

Posted by Henry Bauer on 2008/09/10

Erwin Chargaff wrote wonderfully acerbic essays about the gap between the traditional high ideals of science and the actual practices of scientists, for example, “in our time a successful cancer researcher is not one who ‘solves the riddle,’ but rather one who gets a lot of money to do so. It is all quite similar to the history of alchemy, another truly goal directed, though much less costly, enterprise” (Chargaff, Voices in the Labyrinth, 1977, p. 89).

What might Chargaff have said about the goal-directed missions of trying to invent vaccines and microbicides to prevent infection by HIV?

He would surely have expressed it much more memorably, but the gist would have been, “I told you so”:

“NORFOLK, Va. – Eastern Virginia Medical School is receiving a $100 million grant to develop a product to prevent the transmission of the virus that causes AIDS. . . . Officials say the grant will further two decades of studying microbicides that would block HIV and other sexually transmitted diseases. Microbicides can come in forms such as topical gels, creams, tablets, films or oral pills. The grant will fund the school’s program known as CONRAD. . . . CONRAD researchers have been working on microbicides for 20 years and are focusing on several promising candidates that interfere with the process that allows HIV to replicate” [AP, 8 September 2008; emphases added].

That’s progress for you: after two decades, “several promising candidates”. Note how misleading is the stuff about “topical gels, creams, tablets, films or oral pills”, implying that it’s the vehicle that needs work when the very feasibility is at issue, since an effective agent remains to be discovered.

Luddites like myself might suggest that this $100 million throws good money after bad. Naïve observers might ask whether there’s something basically wrong with our view of “HIV”, if twenty years has brought nothing better than some “promise”.

Here’s a short and random recent history of HIV-microbicide research:

“Microbicide Trials On HIV Transmission Prevention Halted — The Chronicle Newspaper (Lilongwe) . . . 7 May 2007 . . . Malawi will continue with phase 3 Trials on the efficacy of a microbicide gel that is being tested for HIV prevention in women despite trials of a similar kind being halted in other participating countries. . . CONRAD, a reproductive health research organization had halted the phase 3 efficacy trials of its Cellulose Sulfate (CS) based microbicides . . . the public [is] asking why the trials . . . being carried out by John Hopkins Foundation in Malawi are still continuing. . . . on the Pro 2000 and Buffer Gel [trials] started in 2005 . . . . CS is a completely different product from Pro 2000 and Buffer Gel . . . preliminary results indicated that Cellulose Sulfate could lead to an increased risk of HIV infection . . . . ‘With these microbicide candidates in large scale efficacy trials and a new generation of microbicides well into safety studies, microbicides could be available in five to seven years’”.
That reminded me that an HIV vaccine, promised in 1984 within a couple of years, has not been delivered after more than a couple of decades.

“FDA Mandates HIV Warning on Contraceptives — Contraceptive gels, foams, films, and inserts sold in the United States will now come with a warning that the products do not protect against HIV and other sexually transmitted diseases. The Food and Drug Administration will require the warning on all over-the-counter products containing nonoxynol-9 . . . . ‘FDA is issuing this final rule to correct the misconceptions that the chemical N-9 in these widely available stand-alone contraceptive products protects against sexually transmitted diseases,’ Janet Woodcock, FDA’s deputy commissioner for scientific and medical programs, said . . . . The warning was proposed in 2003 after a study in Africa and Thailand found women using the nonoxynol-9-based products were at higher risk of HIV than those on a placebo. The new warning states that because the products can irritate the vagina and rectum they may boost the risk of contracting HIV/AIDS” [emphases added].
Four years between proposing a warning and actually issuing it seems a bit long even for a federal bureaucracy, especially one that’s accustomed to approving new antiretroviral drugs virtually overnight.

“Pfizer Seeks to Prevent HIV” — Wall Street Journal 30 January 2008 — “A new Pfizer Inc. HIV drug will soon be reformulated in an effort to prevent the transmission of the virus, offering a faint ray of hope in an arena littered with disappointments. . . . [Pfizer] will license its new medicine, Selzentry, to a nonprofit that investigates ways to turn HIV medicines for infected patients into vaginal substances to prevent transmission to women during sex. The partnership offers a low-risk way for Pfizer to find out if the medicine could become a frequently taken drug, while potentially offering an empowering concept to women in the developing world.  HIV preventives have proven elusive, with researchers and advocates still recovering from last year’s collapse of Merck & Co.’s once-promising vaccine trial. And Pfizer’s new venture with the International Partnership for Microbicides is a long shot that relies on an unproven theory. . . Pfizer’s drug was approved last year for patients who have undergone other HIV treatment. Pfizer is now giving the IPM a license to try to turn the medicine into a vaginal gel, ring or film that might prevent transmission. The Pfizer drug already has a safety portfolio approved by the Food and Drug Administration, potentially making it easier to get through testing in a new form.”

Re the “safety portfolio” approved by the FDA, note for Selzentry (generic name maraviroc, MVC) the following “Adverse Events” from the HIV/AIDS Treatment Guidelines, 29 January 2008: “Abdominal pain, cough, dizziness, musculoskeletal symptoms, pyrexia, rash, upper respiratory tract infections, hepatotoxicity, orthostatic hypotension” (Table 14, p. 74).
There’s also a “Pertinent Black Box Warning” (Table 20, p. 86):
Hepatotoxicity has been reported with maraviroc and may be preceded by evidence of a systemic allergic reaction (e.g., pruritic rash, eosinophilia, or elevated IgE). . . . Immediately evaluate patients with signs or symptoms of hepatitis or allergic reaction.”
The “GOOD” news about MVC (Table 26, p. 101) is that it doesn’t seem to cause cancer in animals.

“The first anti-AIDS vaginal gel to make it through late-stage testing failed to stop HIV infection in a study of 6,000 South African women” — AP, 18 February 2008 — “Scientists . . . plan more tests on a revamped gel containing an AIDS drug that they hope will work better. The gel used in the current study did prove safe, however, and researchers called that a watershed event.”
How Chargaff would have been delighted at this grist for his mill: it’s a watershed event when, finally, an intended medicine at least does no harm.
But the researchers were quite rightly delighted, because “A year ago, scientists stopped two late-stage tests of a different gel after early results suggested it might raise the risk of HIV infection instead of lowering it. . . . The study was paid for by the Bill & Melinda Gates Foundation and the U.S. Agency for International Development . . . . Jeff Spieler, an official at USAID, called the trial ‘groundbreaking work’ in a statement. ‘We have always known that the path to developing a successful microbicide would be a long one.’ The Population Council hopes to start tests this year of a revamped Carraguard containing an experimental AIDS drug, MIV-150. The group also has studies under way of a contraceptive version of the gel, Carraguard plus hormones.”
Sounds very good. Plenty of research needed, so grants will keep coming in for the “long” foreseeable future.

26 February 2008: “CHICAGO (AFP) — The quest to develop a vaginal gel to prevent HIV infection took a step forward Monday when researchers announced that one such gel is safe [cheers!] for women to use on a daily basis. . . . The researchers found no disruption of liver, blood or kidney function . . . . ‘Based on what we have learned we can proceed with greater confidence on a path that will answer whether tenofovir gel and other gels with HIV-specific compounds will be able to prevent sexual transmission of HIV in women when other approaches have failed to do so,’ said lead investigator Sharon Hillier, director of reproductive infectious diseases at the University of Pittsburgh School of Medicine.”
“The announcement comes a week after researchers announced that the first prototype to complete advanced clinical trials was ineffective in preventing infection. Microbicides are one of the most eagerly-sought avenues in the war on AIDS, where at present there is neither a cure nor a vaccine . . . . A number of different gels are currently being tested around the world but none have been proven to be effective and some have even increased the risk of contracting HIV.”
As to tenofovir (Viread; also in Atripla and Truvada), the Treatment Guidelines say:
Renal impairment, manifested by increases in serum creatinine, glycosuria, hypophosphatemia, and acute tubular necrosis, has been reported with tenofovir use . . . . The extent of this toxicity is not completely defined. . . . Renal function, urinalysis, and electrolytes should be monitored in patients while on tenofovir” (p. 23);
Adverse Events (Table 10, p. 69): “Asthenia, headache, diarrhea, nausea, vomiting, and flatulence; renal insufficiency; Lactic acidosis with hepatic steatosis (rare but potentially life-threatening toxicity with use of NRTIs).”
Pertinent Black Box Warning (Table 20, p. 86): “Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of nucleoside analogs alone or in combination with other antiretrovirals. Tenofovir is not indicated for the treatment of chronic hepatitis B (HBV) infection; safety and efficacy in patients with HIV/HBV coinfection have not been established. Severe acute exacerbations of hepatitis B have been reported in patients who discontinued tenofovir — hepatic function should be monitored closely with both clinical and laboratory follow-up for at least several months after discontinuation of tenofovir in HIV/HBV-coinfected patients. If appropriate, initiation of anti-HBV therapy may be warranted after discontinuation of tenofovir.”
Tenofovir has also caused liver cancers in mice.
Since microbicides are intended for use by women, yet another comment in the Treatment Guidelines is pertinent:
“Because of lack of data on use in human pregnancy and concern regarding potential fetal bone effects, tenofovir should be used as a component of a maternal combination regimen only after careful consideration of alternatives” (Table 27, p. 102).

Though the drugs had been approved as safe and effective by the FDA, the label for Selzentry and information about tenofovir make rather frightening reading.

Posted in clinical trials, experts, Funds for HIV/AIDS, HIV skepticism, HIV transmission, sexual transmission, uncritical media, vaccines | Tagged: , , , , , , , , , , , , , | 3 Comments »

CDC’s “model” assumptions (Science Studies 103a)

Posted by Henry Bauer on 2008/09/06

An earlier post showed that the CDC’s model of HIV/AIDS is simply wrong; it yields estimates of HIV infections that are contrary to repeatedly published data for at least the first decade of the AIDS era. Wading through the article describing the CDC’s latest estimates (Hall et al., “Estimation of HIV incidence in the United States”, JAMA 300 [2008] 520-529) is less than reassuring about what the CDC does and how it does it, albeit enlightening in a head-spinning journey-through-wonderland sort of way.

To take a relevant non-technical matter first:
“the CDC determined in 2005 and again in 2007 that HIV incidence surveillance is not a research activity and therefore does not require review by an institutional review board” — at least in part, presumably, because this is “routine HIV and AIDS surveillance.”
That determination was based on specified rules and regulations and evidently satisfied bureaucratic requirements, but it’s nevertheless nonsense, in an important way. When something is described as “research”, most everyone understands that there’s a degree of uncertainty attached to the output. When, on the other hand, something is NOT research, just routine surveillance, then there’s a clear implication that the outputs are factual and trustworthy. Slogging through the details of how the calculations are made shows quite convincingly, however, that one would be foolish to place much reliance on any resulting claims — even leaving aside that, as shown earlier, those outputs are at variance with published data from official and peer-reviewed sources stretching over more than a decade.
Why not have an institutional review board look at this activity? Well, perhaps such a review would consider the associated ethical issues, since human subjects are involved. Have they given informed consent? What are the consequences for a person who is told that an HIV infection is not only present but happens to be recent? How would it affect that person’s intimate relations? And so on. A bag of worms, best left unopened. You never know, no matter how carefully you choose members for such review boards, a troublemaker might slip through the vetting process.


The article’s conclusions imply a degree of certainty that’s entirely unwarranted:
“This study provides the first direct estimates of HIV incidence in the United States using laboratory technologies previously implemented only in clinic-based settings.”
What is “direct” meant to convey, if not trustworthiness? Yet these estimates are anything but direct, given the avalanche of assumptions that goes into them.

The rationale for this research-that-isn’t-research is that “the incidence of HIV infection in the United States has never been directly measured”. True, because it couldn’t be, since acquiring “HIV infection” brings no symptoms with it. However, there have been multitudes of direct measurements of HIV prevalence; and that, together with deaths from AIDS whose reporting is legally mandated, permits calculation of incidence. As shown in the earlier post, those actual calculations demonstrate that these new “direct estimates of incidence” are dead wrong.
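The bookkeeping behind that calculation can be illustrated in a few lines of code; all numbers here are invented for illustration, not actual surveillance figures:

```python
# Illustrative sketch (invented numbers): new infections in year t can be
# inferred from the change in measured prevalence plus reported deaths,
# since prevalence(t) = prevalence(t-1) + incidence(t) - deaths(t).

prevalence = {1999: 900_000, 2000: 910_000, 2001: 925_000}  # assumed counts
deaths     = {2000: 15_000, 2001: 16_000}                   # assumed counts

def incidence(year):
    """New infections = change in prevalence + deaths in the same year."""
    return prevalence[year] - prevalence[year - 1] + deaths[year]

for y in (2000, 2001):
    print(y, incidence(y))
```

Note that no latent-period assumptions enter at all: only measured prevalence and legally mandated death reports.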

The crucial mistake in CDC’s models is, of course, the assumption that HIV causes AIDS. That leads to the further assumption that the incidence of HIV can be “back-calculated” from the incidence of AIDS diagnoses. Even were the first assumption correct, back-calculation would require everything to be known about the course of “HIV disease” following infection. Given that there must be individual differences, and that any one of some 30 diseases or conditions might be the manifestation of “HIV disease”, that’s impossible; therefore, another avalanche of interlocking assumptions blankets the model.
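To see why back-calculation is so assumption-laden, consider the forward model it has to invert, sketched here with invented infection counts and an invented incubation-period curve (not anyone's actual parameters):

```python
# Sketch of the forward model that back-calculation inverts (all numbers
# invented): AIDS diagnoses in year t are past infections weighted by an
# assumed incubation-period distribution.

infections = [100, 200, 300, 400, 500]   # hypothetical infections, years 0-4

# Hypothetical probability that AIDS appears k years after infection
incubation = [0.0, 0.1, 0.2, 0.3, 0.4]

def aids_diagnoses(year):
    """Expected AIDS diagnoses in `year` under the assumed incubation curve."""
    return sum(infections[year - k] * incubation[k]
               for k in range(year + 1))

print([round(aids_diagnoses(t), 1) for t in range(5)])
```

Every entry in the incubation list is an assumption; change it, and the infection curve "recovered" from the very same AIDS diagnoses changes with it.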

These considerations in themselves ought to be enough to vitiate the whole approach, but yet more assumptions are piled on. Possibly the most critical is the “new method” for determining whether infections are recent or not. The basic concept was described (for example) ten years ago in Janssen et al., “New testing strategy to detect early HIV-1 infection for use in incidence estimates and for clinical and prevention purposes”, JAMA 280 (1998) 42-8: it’s assumed that recent infections will be detectable only by a sensitive antibody test, while longer-standing ones will be detectable also by a less sensitive test. It’s long been accepted that it takes a matter of weeks or months after infection before tests can pick up HIV antibodies; so, the idea is, the levels of antibodies increase at not too rapid a rate, and using simultaneous sensitive and less sensitive assays can distinguish relatively new from relatively old infections. (Analogous earlier suggestions include Brookmeyer et al., American Journal of Epidemiology 141 [1995] 166-72 and Parekh et al., AIDS Research and Human Retroviruses 18 [2002] 295-307.)
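The dual-assay idea can be caricatured in a few lines; the antibody levels and cutoffs below are invented for illustration, not the actual 3A11/3A11-LS parameters:

```python
# Minimal sketch of the dual-assay classification idea (thresholds are
# invented): antibody level rises after infection, so a sample reactive
# on a sensitive assay but not yet on a less-sensitive one is classified
# as a recent infection.

SENSITIVE_CUTOFF = 1.0        # assumed reactivity threshold, sensitive assay
LESS_SENSITIVE_CUTOFF = 5.0   # assumed threshold, less-sensitive assay

def classify(antibody_level):
    """Classify a sample by which of the two assays it reacts on."""
    if antibody_level < SENSITIVE_CUTOFF:
        return "uninfected / too early to detect"
    if antibody_level < LESS_SENSITIVE_CUTOFF:
        return "recent infection"       # sensitive reactive, less-sensitive not
    return "long-standing infection"    # reactive on both assays

print(classify(0.4), "|", classify(2.0), "|", classify(9.0))
```

Each cutoff embodies an assumption about how fast antibody levels rise in every individual, which is precisely where the misclassification later reported from Africa and Thailand enters.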

I invite — no, I urge interested parties to read the Janssen et al. paper. I can’t post the whole thing since it’s copyrighted by the American Medical Association, but here’s a “fair use” extract to give a taste of the proliferation of approximations and assumptions:

“We estimated distribution and mean time between seroconversion on the 3A11 assay and the 3A11-LS assay using a mathematical model . . . with a variety of cutoffs. To estimate time between seroconversion on the 2 assays, we assumed a progressive increase in antibody during early infection, producing for each subject a well-defined time on each assay before which results would be nonreactive and after which results would be reactive [but bear in mind that HIV antibody tests don’t give a definitive “yes/no” — a “well-defined time” was CHOSEN]; seroconversion time on the 3A11 assay was uniformly distributed [that is, the assumption of uniform distribution was made part of the model] between time of the last 3A11 nonreactive specimen and the time of the first 3A11 reactive specimen; 3A11-LS assay seroconversion occurred no earlier than 3A11 assay seroconversion [assumption: the less sensitive test could not be positive unless the more sensitive one was]; and time difference between seroconversion on the 3A11 and 3A11-LS assays was [assumed to be] independent of seroconversion time on the 3A11 assay. We modeled time between seroconversions using a discrete distribution that assigned a probability to each day from 0 to 3000 days, estimated by maximum likelihood based on observed data on times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays, using an EM algorithm approach. A smoothing step was added to the algorithm to speed convergence and produce smooth curves; a kernel smoother with a triangular kernel was used with bandwidth (h) of 20 days. Mean times between 3A11 and 3A11-LS seroconversion were largely invariant for the range of smoothing bandwidths we considered (0 ≤ h ≤ 100 days).
Confidence intervals (CIs) for mean time between seroconversions were obtained using the bootstrap percentile method. Day of 3A11 assay seroconversion was estimated from the model conditional on observed times of last nonreactive and first reactive results for 3A11 and 3A11-LS assays and using estimated distribution of times between seroconversions. To assess ability of the testing strategy to accurately classify specimens obtained within 129 days of estimated day of 3A11 seroconversion and to correct for multiple specimens provided by subjects, we calculated the average proportion of each person’s specimens with 3A11 reactive/3A11-LS nonreactive results obtained in that period” [emphases added].

Now, I’m not suggesting that there’s anything untoward about RESEARCH along these lines; quite the contrary, it’s commendable that researchers lay out all the assumptions they make so that other researchers can mull over them and decide which ones were not good and should be modified, as work continues in the attempt to develop an adequate model. What’s inappropriate is that the outputs of such highly tentative guesswork morph over time into accepted shibboleths. The CDC’s recent revision of estimates accepts as valid this approach even while admitting that it had been found to give obviously wrong results in Africa and Thailand, namely, “the misclassification of specimens as recent among persons with long-term HIV infection or AIDS, which overestimates the proportion of specimens classified as recent”. Outsiders might draw the conclusion that there’s something basically wrong and that the approach needs refining; certainly before it gets applied in ways that lead to public announcements that spur politicians into misguided action, say, that medical insurance be required to cover the costs of routine HIV tests.   (Researchers, on the other hand, merely note such failures and press on with modifications that might decrease the likelihood of misleading results.)

So: Hall et al. begin with the assumption that HIV causes AIDS. They add the corollary that HIV incidence can be back-calculated from AIDS diagnoses, which requires additional assumptions about the time between HIV infection and AIDS — not just the average “latent period”, but how the latent period is distributed: is it a normal bell-curve distribution around a mean of 10 years? Or is it perhaps a Poisson distribution skewed toward longer times? Or something else again? The fact that the precise time of infection cannot be determined, only estimated on the basis of yet further assumptions, makes this part of the procedure inherently doubtful.

Heaped on top of these basic uncertainties are more specific ones pertaining to the recently revised estimate of HIV infections for the whole United States. The data actually used came from only 22 States. Of an estimated 39,400 newly diagnosed HIV-positives in 2006, 6864 were tested with the assay that had proved unreliable in Africa and Thailand, and 2133 of these were classified as recent infections, which led by extrapolation to an estimated 56,300 new infections in 2006 in the United States as a whole.
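A back-of-the-envelope look at those quoted figures (this is not the CDC's actual stratified estimator, which uses weights not reproduced here, just the raw magnitudes) shows how much weight each classified specimen has to carry:

```python
# Arithmetic on the figures quoted above: each specimen classified as a
# "recent" infection ends up standing for roughly 26 inferred infections
# nationwide. (Naive ratios only, not the CDC's stratified estimator.)

tested_with_bed   = 6_864    # new diagnoses tested with the BED assay
classified_recent = 2_133    # of those, classified as recent infections
national_estimate = 56_300   # CDC's published estimate for 2006

fraction_recent = classified_recent / tested_with_bed
weight_per_classification = national_estimate / classified_recent

print(f"fraction classified recent: {fraction_recent:.1%}")
print(f"inferred infections per classified specimen: {weight_per_classification:.1f}")
```

Roughly 26 inferred infections rest on each "recent" classification, made by an assay already known to misclassify long-term infections as recent.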

This 2008 publication asserts that the Janssen et al. approach “now makes it possible to directly measure HIV incidence”, citing articles published in 1995, 1998, and 2002. It refers to “new technology” and a “new system”, citing the 2002 article in conjunction with “R. H. Byers, PhD, unpublished data, July 2005”. A further assumption is the criterion that “a normalized optical density of less than 0.8 on the BED assay . . . [means that] the source patient is considered recently infected”. This hodge-podge is made to appear scientifically reliable by christening it “the serologic testing algorithm for recent HIV seroconversion (STARHS)”, citing Janssen et al. (published in 1998, remember).

The public call-to-arms about 56,300 new infections was based on this STARHS approach, fortified by an “extended back-calculation” yielding 55,400 infections per year during 2003-6, the back-calculation being based on “1.230 million HIV/AIDS cases reported by the end of 2006”.

Once again: researchers can be properly pleased when two approaches yield nearly the same result, 56,300 and 55,400. It means that what they’re doing is self-consistent.

But self-consistent doesn’t mean correct, true to reality. Outsiders might note, however, and policy makers badly need to note, that both approaches are based on the same basic assumptions, namely, that HIV entered the USA in the late 1970s and that HIV causes AIDS. But those assumptions are at glaring odds with a number of facts.

For one, the report that first led me to look at HIV-test data: that in the mid-1980s, teenaged females from all around the country were testing HIV-positive at the same rate as their male peers. In other words, a sexual infection that got its foothold around 1980 among gay men and shortly thereafter in injecting drug users had, within a few years, become distributed throughout the whole United States to the stage that teenagers planning to go into military service, and therefore rather unlikely to have been heavily into drug abuse or unsafe sex with gay men in large cities, would have already caught this lethal bug. Not only that: although this infectious disease-causing agent was already so pervasively distributed around the country, the disease itself was not.

That early publication (Burke et al., JAMA 263 [1990] 2074-7) also reported that the greatest prevalence of HIV-positive was NOT in the places where AIDS was most to be found; the male-to-female rates of HIV-positive were nothing like those for AIDS; and testing HIV-positive was more likely for black youngsters from regions with little AIDS than for white youngsters from regions with much AIDS.

No more should have been needed, one might well suggest, to scotch once and for all the mistaken connection between AIDS and HIV-positive. Instead, we are now inundated in houses of cards held together by a proliferation of assumptions modified ad hoc, all preventing research on the really pressing matters:

What does testing HIV-positive mean in the case of each individual? What should people do, who are told they are HIV-positive? What is the best treatment for people presenting with opportunistic infections?

Posted in experts, HIV absurdities, HIV and race, HIV does not cause AIDS, HIV risk groups, HIV skepticism, HIV tests, HIV transmission, HIV/AIDS numbers, M/F ratios, sexual transmission | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | 2 Comments »