HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

No HIV “latent period”: dotting i’s and crossing t’s

Posted by Henry Bauer on 2008/09/21

“How ‘AIDS Deaths’ and ‘HIV Infections’ Vary with Age — and WHY” [15 September 2008] cited a blog post (1), an oral presentation (2), and a journal article (3) offering evidence that disproves central aspects of HIV/AIDS theory: namely, that there’s a “latent period” of roughly 10 years between “infection by HIV” and symptoms of illness, and that the length of this latent period and the time from illness to death have been greatly extended by antiretroviral drugs, particularly since the introduction of HAART (“highly active antiretroviral treatment”) in 1996.

According to HIV/AIDS belief, there should now be at least a couple of decades, on average, between infection and death. The data presented earlier showed that the age at which testing positive is most common is indistinguishable from the age at which death is most common, within the resolution or precision of the data, which were available in the published sources only for non-overlapping 10-year ranges (25-34, 35-44, etc., and 20-29, 30-39, etc.). The associated uncertainty of 5-10 years is immaterial when it’s a matter of detecting a claimed interval that exceeds a couple of decades, so I had left unaddressed a number of complications and corollary points. Discussing those can serve to underscore the strength of this line of argument.
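To make the point about resolution concrete, here is a small Python sketch (synthetic ages only, not the actual CDC data) showing that even with non-overlapping 10-year bins, a shift of two decades in the peak age would move the modal bin by two full bins and could not be missed:

    # Illustrative only: synthetic ages, not the actual CDC data.
    # Shows that 10-year age bins easily resolve a ~20-year shift in the peak.
    import random
    from collections import Counter

    random.seed(0)

    def modal_bin(ages, width=10):
        """Return the 10-year bin (e.g. '30-39') containing the most ages."""
        bins = Counter(int(a) // width * width for a in ages)
        lo, _ = bins.most_common(1)[0]
        return f"{lo}-{lo + width - 1}"

    # Hypothetical peak ages: ~35 for positive tests, ~55 for deaths if the
    # claimed two-decade interval between infection and death were real.
    tests  = [random.gauss(35, 8) for _ in range(100_000)]
    deaths = [random.gauss(55, 8) for _ in range(100_000)]

    print("modal bin, positive tests:", modal_bin(tests))   # 30-39
    print("modal bin, deaths:        ", modal_bin(deaths))  # 50-59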

Perhaps the most obvious points to address:
1. Do the data from HIV tests and about deaths come from comparable population samples? Are the respective rates calculated on the same basis?
2. The age of first recorded positive HIV-test is not the same as the age of first becoming “infected”, i.e. HIV-positive.

1. Are the results from HIV tests comparable directly to the death statistics?
Though test results expressed as rates of testing positive may seem comparable to rates of death, there’s an important difference. If 1% of those in a given age-range of some group test positive in, say, 1985, that rate is proportional to the number of individuals who test positive and can therefore be compared directly to the rate in the same age-range in a different year — 1990, 2000, whatever — or to the rates in other age-ranges. However, one can’t compare death rates in the same manner, because those are reported (typically per 100,000) for the specific age-range in the whole population in a given year, and they therefore depend on the distribution of ages within the population and on how that distribution changes over time; for example, in 1985 the resident population (in thousands) included 40,000 15-24-year-olds, 41,700 25-34-year-olds, and 31,700 35-44-year-olds, whereas in 1993 the numbers were respectively 36,000, 41,900, and 40,800 (4). When the only concern is the age range in which the rate of testing positive or of death reaches a maximum, this complication does not matter much, because the numbers in the relevant age ranges — between 15 and 45 — are not vastly different, and those differences are small compared to the much greater variations in the death rates. But if one wants to compare the whole age distributions, and not just the ages at which a maximum occurs, then one ought to use numbers of deaths and not rates.
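The distinction can be spelled out in a few lines of Python. The death rates below are invented for illustration; the age-group populations (in thousands) are the 1985 and 1993 figures quoted above. The same set of rates yields different numbers of deaths, and a differently shaped counts-by-age curve, depending on the population's age structure, whereas a percent-positive-among-those-tested figure carries no such dependence:

    # Illustrative only: the death rates are invented; the populations (in
    # thousands) are the 1985 and 1993 figures cited in the text.
    deaths_per_100k = {"15-24": 1.5, "25-34": 25.0, "35-44": 30.0}  # hypothetical

    population_1985 = {"15-24": 40_000, "25-34": 41_700, "35-44": 31_700}
    population_1993 = {"15-24": 36_000, "25-34": 41_900, "35-44": 40_800}

    def deaths_by_age(rates, pop_thousands):
        # (rate per 100,000) * (population in thousands) / 100 = number of deaths
        return {age: rates[age] * pop_thousands[age] / 100 for age in rates}

    print(deaths_by_age(deaths_per_100k, population_1985))
    print(deaths_by_age(deaths_per_100k, population_1993))
    # Same rates, different age structure: different counts and a different shape.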

A second question about these comparisons stems from the fact that the death statistics refer to the United States as a whole, whereas the HIV tests were carried out at a variety of public testing sites rather than in an all-encompassing national survey: about 10 million tests in the United States were recorded for 1995-98 from clinics for drug treatment, family planning, venereal diseases, and tuberculosis, and from pre-natal or obstetric clinics, HIV counseling and testing sites, prisons, colleges, miscellaneous health departments, and private doctors (5). About 143,000 of those tests were positive during these 4 years, in other words about 36,000 annually — though some may have been repeated positives already reported in an earlier year. Now, independent CDC Surveillance Reports for these same 4 years (6) recorded 56,000 newly diagnosed infections from the 32 States with confidential HIV reporting, or 14,000 annually; extrapolated to all 50 States, there would have been very approximately 85,000 to 90,000 such reports, say 22,000 annually. Thus the numbers of positive tests from the public sites exceeded the national reports of new diagnoses by about 65% (36/22 = 1.64); so even if as many as 40% (14/36) of the 143,000 positive tests at public sites had been repeat tests, they would still represent a very good sample of all newly reported positive tests in the United States during those years and would therefore make an appropriate comparison with deaths for the whole United States.
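Here is the back-of-the-envelope arithmetic from the preceding paragraph laid out step by step; a simple proportional extrapolation from 32 to 50 States reproduces the “85,000 to 90,000” figure:

    # The back-of-the-envelope arithmetic from the paragraph above.
    positives_public_1995_98 = 143_000                 # positive tests at public sites, 4 years
    per_year_public = positives_public_1995_98 / 4
    print(round(per_year_public))                      # ~36,000 per year

    new_diagnoses_32_states = 56_000                   # CDC surveillance, 32 States, 4 years
    extrapolated_50_states = new_diagnoses_32_states * 50 / 32
    print(round(extrapolated_50_states))               # 87,500, i.e. "85,000 to 90,000"
    per_year_national = extrapolated_50_states / 4
    print(round(per_year_national))                    # ~22,000 per year

    print(round(per_year_public / per_year_national, 2))   # ~1.63 (36/22 = 1.64 with rounded figures)
    repeat_fraction = (per_year_public - per_year_national) / per_year_public
    print(round(repeat_fraction, 2))                   # ~0.39, the "as many as 40%" repeat tests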

Furthermore, a significant proportion of those tested at the public sites are likely to represent people most at risk, who would also be the people among whom most of the “HIV disease” deaths could be expected to occur. The 143,000 positive tests from 10 million tests at the public sites correspond to a rate of about 1.4%. For the United States as a whole, between 1989 and 2005 the estimated prevalence of HIV-positives was about 1 million, a rate of about 0.4% (7-9); for 2003, the CIA World Factbook gives 0.6%. Thus the people tested at the public sites were at higher risk of HIV than the general population, by a factor of between 2 and 4.
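Again in code, the factor-of-2-to-4 comparison (the 0.4% and 0.6% prevalence figures are those cited above; nothing else is assumed):

    # Positivity at public testing sites versus estimated national prevalence.
    rate_public_sites = 143_000 / 10_000_000            # ~1.4% positive among those tested
    for label, prevalence in [("~0.4% (refs 7-9)", 0.004),
                              ("0.6% (CIA World Factbook, 2003)", 0.006)]:
        print(label, "-> factor of", round(rate_public_sites / prevalence, 1))
    # Roughly 3.6 and 2.4: "a factor of between 2 and 4".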

Altogether, then, the positive tests recorded in 1995-98 at public sites make an appropriate comparison with deaths from HIV disease in subsequent years.

2. The age of first recorded positive HIV-test is not the same as the age of first becoming HIV-positive.

In (1-3), ages of death were compared with ages of testing HIV-positive. But those are not the ages of initial infection. Numbers for “first positive test” at each age represent some combination of infections incurred during some preceding period: to each age of “first positive test” there corresponds a distribution of actual infections at younger ages. The age distribution of first infection therefore begins at earlier ages than the first-positive-test distribution; it ends at the same age, however, since a first positive test cannot precede infection; the age distribution of infections is therefore broader, covering a wider range of ages, than the age distribution of positive tests (Figure 1). Furthermore, the average positive test will come no later than the end of the average latent period — that is, when symptoms of illness appear — so the age range for infections will be something like five to ten years broader (at the base) than the age distribution of tests. (This is again of the same order of magnitude as the uncertainty in comparing non-overlapping 10-year intervals, and was immaterial when testing the claim that deaths should have been shifted to greater ages by at least a couple of decades.)

The age at which antiretroviral treatment begins cannot precede the age of first positive test, so Figure 1 shows deaths also beginning no earlier; but this is perhaps overly conservative, since only HIV-positives with low CD4 counts, or those already manifesting illness, will begin treatment at the time of first positive test. It seems reasonable to assume that the number of “HIV-disease” deaths before a first positive test is negligibly small.

Now, for each age of beginning treatment, there corresponds a range of expected ages of death, centered on the average life expectancy. A few people are even likely to die soon after treatment begins, but about half will survive longer than the average expectancy. Therefore the age distribution of deaths will be very much broader than the distribution of infections, which is itself broader than the distribution of first positive tests. Here’s a purely schematic illustration (in actuality, these curves are almost certainly not bell-shaped):

Figure 1
Schematic representation of expected relationships
between infection, positive HIV-test, and death
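The relationships sketched in Figure 1 can be mimicked with a toy simulation. Every parameter below is hypothetical (the ages of infection, the delay to a first positive test, a 10-year latent period, survival on treatment); the only point is the qualitative one made above: under HIV/AIDS theory each successive distribution should be broader than the last, with deaths peaking roughly two decades above infections.

    # Toy simulation of the relationships sketched in Figure 1.
    # All parameters are hypothetical; only the qualitative shapes matter.
    import random
    import statistics as st

    random.seed(1)
    N = 100_000

    infection_age = [max(15.0, random.gauss(30, 7)) for _ in range(N)]
    # First positive test: some years after infection, never before it.
    test_age = [a + random.uniform(0, 10) for a in infection_age]
    # Per the theory: ~10-year latent period, then extended survival on HAART.
    death_age = [a + 10 + random.expovariate(1 / 12) for a in infection_age]

    for name, ages in [("infection", infection_age),
                       ("first positive test", test_age),
                       ("death (per the theory)", death_age)]:
        print(f"{name:23s} median {st.median(ages):5.1f}  spread (stdev) {st.stdev(ages):4.1f}")
    # Deaths should peak well over a decade later than tests and be much more
    # spread out; the actual statistics show neither.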

Even this is a simplified view, however, because it considers only deaths of people who tested positive in a particular year. Actually, deaths in any given year will stem from HIV-positive individuals who had been infected during a wide range of years. Could this complication nullify the expectation that the age distribution of deaths must be broader than the distribution of infections, with the peak age for deaths shifted by a couple of decades from the peak age of infections?

No.

For the age distribution of deaths stemming from a given cohort “H” (people infected in a given year) to be significantly distorted by deaths from earlier cohorts “A, B, C, . . . ” , the numbers of deaths from those earlier cohorts would have to be comparable to or greater than the deaths among cohort H and would have to be distributed in age in some radically different manner. Yet year after year, the age distribution of deaths has changed very little in shape; see Figures 2a and 2b, which are based on the data in Table 2 of an earlier post.
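Before turning to the figures, this cohort argument can also be checked with a toy model (again with purely hypothetical parameters): even when the deaths observed in one calendar year are drawn from many infection cohorts of similar size, their ages still sit well above the ages at infection, by roughly the latent period plus a typical survival time.

    # Toy check of the cohort-mixing argument; all parameters are hypothetical.
    # Infections in every cohort year occur at roughly the same ages; on the
    # theory, death follows ~10 years of latency plus a variable survival time.
    import random
    import statistics as st

    random.seed(2)
    deaths_observed_2000 = []
    for cohort_year in range(1960, 2000):                     # cohorts A, B, C, ...
        for _ in range(2_000):                                # similar size each year
            age_at_infection = max(15.0, random.gauss(30, 7))
            years_to_death = 10 + random.expovariate(1 / 12)  # latency + survival
            if cohort_year + int(years_to_death) == 2000:     # dies during 2000
                deaths_observed_2000.append(age_at_infection + years_to_death)

    print("median age at infection: ~30 (by construction)")
    print("median age at death in 2000:", round(st.median(deaths_observed_2000), 1))
    # Mixing many cohorts does not drag the death ages back down toward the
    # infection ages; the expected offset is still there.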

Figure 2a
Age Distributions of “AIDS” Deaths, 1982-2002

Numbers are correct for all years except 1982; for the latter, divide by 100.

That the differences in shape of the whole distributions are very small is more obvious in Figure 2b, where the curves are of more nearly equal size.

Figure 2b
Age Distributions of “AIDS” Deaths
normalized to comparable scales

(Not all years are shown in Figure 2b because the curves overlap so much. The stated widths were all measured at half the height of each curve, not at the places where they are shown here. At half peak-height, the widths of all the curves are between 17 and 23 years.)
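For anyone who wants to reproduce such width measurements, the width at half peak-height of a binned age distribution can be computed as below; the counts here are made up for illustration, and only the method matters.

    # Full width at half maximum of a binned age distribution.
    # The counts are made up for illustration; the method is the point.
    ages   = [17, 27, 37, 47, 57, 67, 77]        # mid-points of 10-year bins
    counts = [150, 2700, 5200, 3000, 900, 250, 50]

    def width_at_half_height(xs, ys):
        """Linearly interpolated width of the curve at half its peak height."""
        half = max(ys) / 2
        crossings = []
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if (y0 - half) * (y1 - half) < 0:    # curve crosses the half-height line
                crossings.append(x0 + (half - y0) * (x1 - x0) / (y1 - y0))
        return crossings[-1] - crossings[0]

    print(round(width_at_half_height(ages, counts), 1), "years")  # ~22 years here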

Not only have the death distributions changed little over the years; the same is true of the data from public testing sites. A summary for the years 1999-2004 (10) reports an overall HIV-positive rate of 1.43% from >11 million tests, the same as for 1995-98, and the shape of the distribution of positive tests by age is very similar; see Figure 3. The ages for which the 1999-2004 data are given cover a greater range than in the 1995-98 reports, however, making an even better comparison with deaths, so both are shown in Figure 3, which compares HIV-positive reports with deaths for 2004. (The highest age for which HIV tests were reported in 1995-98 is ≥50; in 1999-2004 it is ≥75; the lines to those points are broken to indicate that the angle of this segment is only approximate. Omitted are HIV-test data for ages below the teens, where reported death rates were too small to be estimated accurately.)

Figure 3
Age distributions of HIV-positive test reports (1995-98 and 1999-2004)
compared with deaths from “HIV disease” in 2004

The curves have been “normalized” to about equal peak-heights for readier comparison. The “deaths” distribution is significantly narrower than the “tests” distribution, which is impossible under HIV/AIDS theory. And, once again, the age distribution of deaths peaks at about the same age as the distributions of positive tests: there’s no sign of a latent period or of a drug benefit.

Official statistics on deaths from “HIV disease” are clearly incompatible with the view that “HIV” causes AIDS. Rather, as also argued elsewhere (11, 12), testing “HIV-positive” is merely a non-specific indication of some sort of physiological disturbance, which may not be infection by a pathogen and which is specifically not infection by a retrovirus that destroys the human immune system.

CITATIONS:
1    How “AIDS Deaths” and “HIV Infections” Vary with Age — and WHY, 15 September 2008;
2    “Disproof of HIV/AIDS theory”, Annual Meeting of the Society for Scientific Exploration, Boulder (CO), June 2008
3    “Incongruous age distributions of HIV infections and deaths from HIV disease: Where is the latent period between HIV infection and AIDS?”, Journal of American Physicians and Surgeons 13 [#3, Fall 2008] 77-81
4    “Health, United States, 1995”
5    Centers for Disease Control and Prevention, HIV counseling and testing in publicly funded sites; for 1995, published September 1997; for 1996, published May 1998; for 1997 and 1998, published 2001.
6    Centers for Disease Control and Prevention, HIV/AIDS surveillance report; 7 #2 (1995); 8 #2 (1996); 9 #2 (1997); 10 #2 (1998).
7    Centers for Disease Control and Prevention, Current trends estimates of HIV prevalence and projected AIDS cases: Summary of a workshop, October 31-November 1, 1989. Morbidity and Mortality Weekly Report 39 (#7, 1990) 110-2, 117-9.
8    M. H. Merson, Slowing the spread of HIV: agenda for the 1990s, Science 260 (1993) 1266-8.
9    M. K. Glynn & P. Rhodes, Estimated HIV prevalence in the United States at the end of 2003, 2005 National HIV Prevention Conference, Atlanta, GA, June 14, 2005; JAMA 294 (2005) 3076-80.
10    Centers for Disease Control and Prevention. HIV counseling and testing at CDC-supported sites —United States, 1999-2004. 2006: 1-33; http://www.cdc.gov/hiv/topics/testing/reports.htm
11    The Origin, Persistence and Failings of HIV/AIDS Theory (McFarland 2007)
12    HAART saves lives — but doesn’t prolong them!?, 17 September 2008
