More HIV/AIDS GIGO (garbage in, garbage out): “HIV” and risk of death
Posted by Henry Bauer on 2008/07/12
HAART had supposedly saved at least 3 million years of life by 2003, thereby supposedly justifying the expenditure of $21 billion in 2006 from federal US government funds alone—how much more was disbursed or used by charities and other NGOs is not known. On examination, that claimed 3 million turned out to be 1.2 million; and since these are not lives but life-years, they represent the lives of perhaps 6% of AIDS victims [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008;
HIV/AIDS SCAM: Have antiretroviral drugs saved 3 million life-years?, 6 July 2008]. Not so impressive after a quarter century of research costing >$100 billion.
Another more recently trumpeted claim of benefits from antiretroviral therapy is that the “excess mortality” ascribed to “HIV” has decreased substantially in the era of HAART (Bhaskaran et al. for the CASCADE collaboration, “Changes in the risk of death after HIV seroconversion compared with mortality in the general population”, JAMA 300 [2008] 51-59). This article resembles the older one in its reliance on computer modeling to produce desired results; in addition, it displays astonishing ignorance of such HIV/AIDS basics as the latent period of 10 years between “infection” and illness; and it deserves a Proxmire Golden Fleece Award for discovering what was already known.
The methodology is described in laudable detail, which reminded me of the V-P who always got his requested budget because he submitted it as a computer print-out [Antiretroviral therapy has SAVED 3 MILLION life-years, 1 July 2008]; how many unqualified fools like me would rush in when Bhaskaran et al. talk of “the familiar Cox hazard ratio”, “Kaplan-Meier methods”, “Poisson-based model”, and use of Stata version 10 for the statistical analysis? Yet the weakness of the whole approach is separate from any possible technical flaws: assertions and assumptions are made that are demonstrably wrong. [Which is not to deny that specialists might well also question the applicability of any one or all of those mentioned techniques to this particular task. Specialists might also want more information than the statement that “The median duration of follow-up was 6.3 years (range, 1 day to 23.8 years), with 16 344 individuals (99%) having more than 1 month of follow-up” — what exactly does “follow-up” mean here? Were not all of these patients monitored throughout the study?]
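[For readers curious what that jargon looks like in practice, here is a minimal sketch in Python, using the lifelines library rather than the Stata version 10 that the authors used; everything below (data, column names, parameters) is invented for illustration and is not the CASCADE analysis. Among other things, it shows what “follow-up” and censoring mean in such studies.]

```python
# Minimal sketch of the named methods, on synthetic data (nothing here is
# from the CASCADE analysis; the paper's authors used Stata version 10).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(15, 65, n)                  # hypothetical age at seroconversion
time_to_death = rng.exponential(12.0, n)      # invented survival times (years)
end_of_observation = rng.uniform(0.1, 20, n)  # invented study-exit times (years)

# "Follow-up" is the time each person is actually observed; a death counts
# as observed (E = 1) only if it falls within that window, otherwise the
# person is censored (E = 0: still alive at study end, or lost to follow-up).
duration = np.minimum(time_to_death, end_of_observation)
event = (time_to_death <= end_of_observation).astype(int)
df = pd.DataFrame({"T": duration, "E": event, "age": age})

# Kaplan-Meier: a non-parametric survival curve, no shape assumed.
kmf = KaplanMeierFitter()
kmf.fit(df["T"], event_observed=df["E"])
print("KM median survival:", kmf.median_survival_time_)

# Cox proportional-hazards model: hazard ratios for covariates (here, age).
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()
```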
Bhaskaran et al. ascribe to antiretroviral drugs the lower mortality in the HAART era compared to the pre-HAART era. It is at least equally plausible that this reduction in “excess mortality” was owing to the abandonment of high-dose AZT monotherapy. After all, deaths from AIDS in the United States about doubled from 1987 to 1990, and increased by more than another 50% from 1990 to 1995, before dropping back to 1987 levels (National Center for Health Statistics, Table 42, p. 236, in “Health, United States, 2007”; “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008; http://aras.ab.ca/news.html, June 30, “Disproof of HIV/AIDS Theory”).
Bhaskaran et al. themselves admit—albeit only in by-the-way fashion in concluding comments—that their analysis is rotten at the core: “it is likely that HIV-infected individuals in our study differ from the general population in other ways”. Yes indeed! Or rather, it’s not that the studied group (HIV-positives) is “likely” to differ in multiple ways from the “control” group (HIV-negative general population), it’s a certainty that they do. On the mainstream view of HIV/AIDS, HIV-positive people have been exposed to health risks that others have not, bespeaking significant behavioral differences. On my view and that of many others, “HIV-positive” is—like a fever—an indication that the immune system has reacted against something or other, that HIV-positive people have been exposed to health challenges that HIV-negative people have not. So differences in mortality between these two groups may have nothing at all to do with “HIV”.
The gross ignorance of HIV/AIDS matters displayed in this article is illustrated by the statement, also by-the-way in the concluding comments, that “race/ethnicity are also likely to differ among HIV-infected persons”. How could these authors not know that “HIV” is found disproportionately among people of African ancestry?
Here is a further illustration of incredible ignorance of HIV/AIDS matters: “Interestingly, we found that by 2004-2006, the risk of death in the first 5 years following seroconversion was similar to that of the general population . . . further research will be needed before our finding of no excess mortality in the first 5 years of infection in 2004-2006 can be generalized beyond those diagnosed early in infection”.
Almost from the very beginning, one of the salient mysteries about the lentivirus (slow virus) HIV has been the “latent period” between presumed infection by HIV and the appearance of any symptoms of illness. That latent period is nowadays agreed to be about 10 years. Therefore there should be no excess mortality at all for an average of 10 years after infection among people not being treated with HAART, and of course for much longer if HAART staves off AIDS. Unless, of course, “HIV” is causing death in symptom-less people, so that deaths from “HIV disease” during the latent period are deaths without apparent cause. It seems unlikely that such a phenomenon would long have gone unnoticed. Here is a typical representation of the supposed progression from infection to illness and death:
[Graph: the standard representation of the course of “HIV disease”, showing curves from initial infection through the latent period to AIDS and death.]
The death rate shown during the putative latent period is flat and runs along the baseline.
All this makes the authors’ modest admission that “Our study has some limitations” more than a little inadequate. The many obvious deficiencies in this article, notably the ignorance of latent period, reflect unkindly not only on the authors but also on the journal, its editorial procedures, and the lack of competence or diligence of the “peer reviewers” who presumably were engaged to comment expertly on whether this deserved to be published. What on earth has happened to medical “science”? Or was it always so defective in such obvious ways?
As to Golden Fleece Awards, there is the finding that “those exposed through IDU at significantly higher risk than those exposed through sex between males”. Yes indeed, drugs are not good for you! But then it has been routine among HIV/AIDS experts to discount the risks of illegal drugs by comparison to those of “HIV”, to the extent that there are continuing campaigns to provide drug addicts with fresh, clean needles; and occasional surprise is expressed that injecting drug users typically have health problems [COCAINE AND HEROIN AREN’T GOOD FOR YOU! — a Golden Fleece Award, 13 June 2008]. In the end, the authors do seem to be aware of this: “It is unlikely that HIV infection is the only factor leading to increased mortality rates among those exposed through IDU” because of, among other things, “the direct risks of substance abuse”.
No less surprising (to Bhaskaran et al., that is) than the poorer health of drug addicts is the finding that older people are less able than younger people to stave off health challenges: “Older age at seroconversion was associated with a higher risk of excess mortality . . . there was a clear gradient of increasing risk of excess mortality with increasing age at seroconversion”.
In other words, the older you are when you “seroconvert”—become infected, according to mainstream views, or encounter some sort of health challenge, according to Perth-Group-type views—the more likely you are to succumb, compared to people of the same age who have not encountered the same challenge. Who would have thought it?
Yet another finding worthy of attention was that “Females were at consistently lower risk [of dying] than males”. On the one hand, even most lay people are aware that women have a greater life expectancy than men (in most countries and in all developed ones). On the other hand, might not this finding specifically about “HIV-positive” people have stimulated some thought among the authors as to whether it means anything about “HIV-positive” as signifying infection by a virus?
—————————–
Here, as so often, some of what I’ve written might appear to accept that HIV is infectious and causes illness. That is not so; I am merely pointing out that even on its own terms, the HIV/AIDS view would still be wrong about the claimed benefits of antiretroviral drugs: there is no evidence that they prolong life. At best, as Dr. Juliane Sacher has pointed out, they might bring a temporary benefit by acting as antibiotics, for they certainly are inimical to life.
—————————–
ACKNOWLEDGMENT: I am grateful to Fulano de Tal (a commonly used pseudonym, compare “John Doe”) who pointed out that an earlier version of this post included speculations based on US data that are irrelevant here since the CASCADE study includes only European cohorts. I also added the graph in response to one of “Tal”’s comments, because I was not able to put the graph into my response.
Fulano de Tal said
Thanks for the quick response! Sorry, but if I understand you correctly, I think I can help you shorten the post even further. You say “That latent period is nowadays agreed to be about 10 years. Therefore there should be no excess mortality at all for an average of 10 years after infection among people not being treated with HAART…” The 10-year estimate is the median period of progression to AIDS, meaning that 50% of those infected, according to the orthodoxy, have AIDS within 10 years. Pre-1990s estimates are that somewhere around 10% of those infected will die within 5 years. So, while eliminating excess deaths at 5 years is a relatively low hurdle, it is not correct to say that the latent period theory predicts no excess mortality in 10 or even 5 years among the untreated.
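(To see why a 10-year median does not preclude early excess deaths, here is a toy calculation under a purely invented distributional assumption; nothing in the paper commits to this shape:)

```python
# If time from seroconversion to AIDS were exponential with a 10-year
# median (an invented assumption, purely for illustration), a sizable
# fraction would still progress within the first 5 years.
median_years = 10.0
frac_within_5 = 1 - 0.5 ** (5 / median_years)   # survival S(t) = 0.5**(t/median)
print(f"{frac_within_5:.0%} progress within 5 years")  # ~29%
```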
Also, the “constant hazard” assumed by the study is not for the entire post-infection period. The hazard is estimated separately for each year. It is assumed to be constant within years. I don’t know whether or not the authors were aware of the latent period, but it was not written into their model.
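(Concretely, a piecewise-constant hazard is just deaths divided by person-years at risk, computed year by year; a sketch with invented numbers:)

```python
# Sketch of a piecewise-constant hazard: estimated separately for each
# year since seroconversion, assumed flat only within each year.
# All numbers are invented for illustration.
deaths_per_year = [5, 7, 12, 20, 30]              # deaths in years 1..5
person_years_at_risk = [990, 975, 950, 910, 860]  # exposure in each year

for year, (d, py) in enumerate(zip(deaths_per_year, person_years_at_risk), 1):
    print(f"year {year}: hazard = {d / py:.4f} deaths per person-year")
```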
Henry Bauer said
Yes, “FdeT”:
The chief point is this: The latent period was not included in the model, so any output from the model is not in keeping with the present version of HIV/AIDS theory. Therefore the conclusion that excess mortality has decreased is not valid.
The latent period is an integral and inescapable part of HIV/AIDS theory, because it has long been so obvious that so many HIV-positive people are not ill. Indeed, the Centers for Disease Control and Prevention keeps asserting that about ¼ of all HIV-positives don’t even know their status.
My statement, “there should be no excess mortality at all for an average of 10 years after infection”, is of course a simplification. More accurately, one might say there should be little if any excess mortality, or negligible excess mortality. But that would require a considerable lengthening, not shortening, of the discussion, to explain the basis for “little if any” or “negligible”. To do so, I would use the graph that is now in my post, because I couldn’t find a way to place it in this response. This standard interpretation of the course of “HIV disease” shows quite sharp “breaks” in the curves for viremia (or viral load) and for deaths toward the beginning and end of the latent period, and a slow decline in CD4 counts during that period. In other words, during the latent period the proportion of HIV-positives displaying any symptoms of illness is very low, and so is the proportion of people actually dying; recall that the latent period is from infection to symptoms, not from infection to death.
The sharpness of these breaks indicates that the distribution of signs of illness around the putative average end of the latent period is unlikely to be a normal distribution; so your suggestion that 50% of deaths would come within 10 years is incorrect. In any case, the average time of death is supposed to be several years later than the average end of the latent period. The estimate, pre-1990s, that “somewhere around 10% of those infected will die within 5 years”, is one I would appreciate a reference for; but in any case, it raises another complication, namely, the difference in time between getting “infected” and the time when “infection” is diagnosed. In the early AIDS era, of course, many people were HIV-tested when they were already ill, and testing only slowly began to include more and more others, in high-risk groups or, outside those, chiefly among blood donors and military personnel. So dying within 5 years or so of DIAGNOSIS OF INFECTION corresponds to dying (5 + some unknown number of years) after infection.
One of the other Achilles’ heels of HIV/AIDS theory and practice is the estimation of time of infection. That alone deserves several posts, for which I’ve been trying to summarize the copious data for quite some time. The salient difficulty is that becoming infected is often said to occur without symptoms, and about equally often is said to be marked by mild flu-like symptoms or possibly a rash. Very few people have actually been HIV-tested from the time they were HIV-negative with sufficient frequency that the actual time of seroconversion could be estimated with some guarantee of reliability. In most cases, HIV-positives are asked whether, during the prior 3 months or so, or even further back, they have ever experienced mild flu-like symptoms, a rash, slight fever. How many of us could NOT recall such an episode within a few months? Thus all the inferences based on “length of infection” are little if anything more than guesswork.
I hope my responses within the post and in this comment illustrate my gratitude for your interest. As I’ve written on many occasions in my discussions of unorthodoxies in science, those who hold heterodox views are greatly hindered by the fact that they cannot benefit from the give-and-take that peer reviewing provides (at least ideally). As a result, contrarian discourse tends to be somewhat raw, not improved by constructive criticism. In the case of HIV/AIDS, mainstream publications have for a long time simply rejected out-of-hand, without substantive critique, submissions from rethinkers like myself. Some of that is documented in my book. My experience since then includes half-a-dozen data-filled letters to Nature and Lancet, written jointly by the eminent epidemiologist Gordon Stewart and me, which were all rejected by return mail without any substantive reason being offered, even in response to later requests and protests.
So I truly appreciate your help in tightening up the present argument. The claim of decreased excess mortality in the HAART era in the CASCADE study is baseless because the model employed is not valid. Little if any excess mortality would in any case be expected within 5 to 10 years of “infection” in the absence of treatment.
But then, too, HIV/AIDS theory is demonstrably wrong because in actual fact there is no latent period, as the comparison of HIV diagnoses and HIV deaths shows—see Table E in “HIV DISEASE” IS NOT AN ILLNESS, 19 March 2008,
https://hivskeptic.wordpress.com/2008/03/19/“hiv-disease”-is-not-an-illness/, and “Straightforward and obvious disproof of HIV/AIDS theory”, my presentation at the 27th Annual Meeting, Society for Scientific Exploration, Powerpoint posted at http://aras.ab.ca/index.php under “News” at June 30.
Fulano de Tal said
I am very impressed by your awareness of the importance of review and cross-checking for the scientific process. In that spirit, let me try to clarify my last post. I’m afraid I may have given the wrong impression in the last sentence. I meant to say that ignorance of latency was not written into the model used by Bhaskaran et al. Let me explain.
The Cox model used in the paper is non-parametric (actually it is more accurately called semi-parametric). This means that the model does not constrain the survival curve to any particular shape. It does not assume that the hazard of dying is a mathematical function of time from seroconversion, but instead estimates the hazard separately for each year after seroconversion. This means that there is no assumption of latency or absence of latency. Whether or not the authors knew about this theory, they would still set up their model the same way. If there is indeed a latency period in the data, it will show up in their results. If not, it won’t.
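(A small simulation, mine and not the paper’s, illustrates this: build a latent period into synthetic survival times, tell the estimator nothing about it, and the latency emerges in the estimate anyway.)

```python
# A latent period that is in the data shows up in a non-parametric
# estimate without ever being assumed. Entirely synthetic data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
latent_years = 10.0
# Invented model: no deaths during the latent period, exponential afterwards.
times = latent_years + rng.exponential(2.0, 2000)

kmf = KaplanMeierFitter()
kmf.fit(times)  # all deaths observed; no latency assumption anywhere

surv = kmf.survival_function_
print(surv.loc[:latent_years].iloc[-1])  # ~1.0: the curve is flat through year 10
print(kmf.median_survival_time_)         # ~11.4 years = 10 + 2*ln(2)
```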
If the authors were unaware of the latency period before doing this study (which is doubtful, at least for those authors who are clinicians and would be very familiar with the concept), they are certainly aware of it after their analysis. It is very evident in the figure on page 56. The theory is that (untreated) 50% of those with HIV develop AIDS within 10 years of seroconversion, and die 1-2 years thereafter. The figure shows something very close to this, with this group progressing perhaps slightly quicker than in the standard model. For the younger group, pre-1996, about 50% are dead in 11-12 years. For the older group the half-life is somewhere between 9 and 10 years.
Even with the latency period manifesting so clearly in this analysis, the 5-year mortality pre-1996 is by no means negligible. If you look at table 4, on p. 57, where the results are broken down into 4 age groups, the percent dead at 5 years, before 1996, ranges from 8.3% to 23%. Almost all of this is in excess of the expected mortality in the non-HIV population (adjusted for age, sex, and country). So if 5-year mortality after 1996 is brought closer to the level in the non-HIV population, this is indeed an improvement over the pre-1996 baseline. It fully takes the latency period into account, not because the authors have “assumed” latency, but because it is in the data.
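(For anyone unfamiliar with the term, the “excess” arithmetic itself is simple; a hypothetical worked example with invented numbers, not the paper’s:)

```python
# Hypothetical excess-mortality calculation. All numbers are invented;
# in the paper, expected deaths come from general-population death rates
# matched on age, sex, and country.
person_years = 5000          # e.g. 1000 people each followed for 5 years
observed_deaths = 120        # deaths seen in the HIV-positive cohort

general_rate = 0.002         # matched general-population deaths per person-year
expected_deaths = general_rate * person_years   # = 10

excess = observed_deaths - expected_deaths      # = 110 deaths beyond expectation
smr = observed_deaths / expected_deaths         # standardized mortality ratio = 12
print(excess, smr)
```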
So your “chief point” at the top of your last post is based on a misunderstanding of the methods used in this study. There may be reasons to question the results of this study, but failure to include the latency period cannot be one of them. It is unfortunate that those who, rightfully, refuse to take the study’s conclusions on faith are not always equipped to deconstruct the methods used.
By the way, the figure that you added to your post does not really have a bearing on this discussion. It presents the typical progression of HIV/AIDS, according to the dominant model, but does not show the variation in this progression. There is no death rate on the figure. The label ‘Death’ applies to only one point, representing the stage of the disease when death typically takes place.
I am very hurt that you would think I am using a pseudonym. That has always been a sensitive issue with us de Tals.
Regards.
Darin Brown said
Fulano,
I haven’t read the Bhaskaran paper, but I believe your comments on the nature of the statistical analysis are correct. An assumption of latency (or ignorance of latency) would not seem to be required for this sort of analysis. If I understand correctly, they are simply measuring changes in excess mortality over time. I believe you are correct in stating that a median “latency period” of 10 years does not preclude excess mortality in periods of less than 10 years. I believe you are also correct in your comments about the figure Bauer chose to display: the line plots represent the alleged progression of disease in a single hypothetical patient and do not represent collective rates of progression among a cohort.
Don’t think I’m letting you off the hook, though.
Even if the methodology and results of the paper are completely sound (and I wouldn’t be surprised if they are), the conclusions do not follow, for reasons with which you should be familiar if you have been paying attention to this blog for any length of time.
Just because you can measure a “mortality gap” between those who test positive on HIV antibody tests and those who test negative, it does not follow that there exists a so-called “latency period”, any more than noting that people who have high fevers have “morbidity and mortality gaps” compared to those with normal temperatures would allow one to conclude that one has observed a “latency period” due to exposure to a “fever pathogen”.
To be quite blunt, you are committing the post hoc fallacy: We observed A, then later we observed B; therefore A is the cause of B. Moreover, all your conclusions are based on deeply embedded assumptions, namely:
1. The HIV tests are detecting infection with a well-defined exogenous virus.
2. There is such a thing as “progression to AIDS”, whatever that means.
3. When someone tests positive on an HIV test, and becomes symptomatic years later, this is proof they were infected with a well-defined exogenous virus which exhibited a “clinical latency period”.
Such assumptions could equally be applied to high fevers:
1. High fever readings indicate infection with an exogenous “fever virus”.
2. There is such a thing as “progression to acquired fever syndrome”.
3. When someone gets a high fever, and becomes symptomatic years later, this is proof they were infected with an exogenous “fever virus” which exhibited a “clinical latency period”.
Sounds silly, doesn’t it? But that’s the kind of thought pattern you’re stuck in with regard to HIV.
Henry Bauer said
Darin:
Thanks for saving me (or trying to!) from myself, and especially for bringing the discussion back to the main point of the original post, GIGO: start with wrong presumptions, and the outcome will also be wrong, no matter how technically correct the protocols, procedures, calculations, or models employed to get from input to output might be. As I said, “the weakness of the whole approach is separate from any possible technical flaws: assertions and assumptions are made that are demonstrably wrong”.
Excess mortality cannot be gauged by comparison with the general population. That assumes that everyone is at equal risk of becoming HIV-positive in the first place, and as the authors admit only at the end, HIV-positive people differ from the general population in other ways than being HIV-positive.
The estimate of the length of the latent period for untreated HIV-positives grew from 6 months or a year at the beginning of the AIDS era to the present 10 years. HAART was supposed to extend this latent period significantly. That there appeared to be no excess mortality within 5 years should hardly be surprising, then.
As earlier remarked, decreased mortality after 1996 in people being treated with antiretrovirals can equally be explained by the lesser toxicity of HAART compared to monotherapy. That seems particularly plausible since that lesser but still significant toxicity of HAART was noted within a few years of its introduction—albeit some attempts have been made to mask this toxicity by inventing the new condition of “immune restoration syndrome”, where improvement in the lab criteria for immunological recovery and decreased viral load is accompanied by worsening of the patient’s condition. The January 2008 treatment guidelines acknowledge that non-AIDS “events” are now more common than “AIDS” events among HAART-treated individuals.
The proper way to test any treatment is with matched groups of the treated and the untreated. Such a trial with HIV-positive individuals would have the additional benefit of indicating how many HIV-positives are potential “elite controllers” or “long-term non-progressors”, about which we currently have only anecdotal information. Nor would it be unethical to leave some HIV-positive people untreated: one could seek volunteers from among those who had already decided for themselves to avoid antiretroviral drugs, hundreds of whom could be found through a variety of existing support groups.
8)
“FdeT”: I would sympathize more with the de Tals for being taken as pseudonymous if the available remedies were not so obvious and readily to hand. Avoid naming scions “Fulano”, for example. And any such who is distressed thereby should not find it difficult to tender proof of identity, after all. 8)
Steve said
(I’m tempted to write as John Doe, just to confuse the naming issue further, but I won’t!)
Prof. Bauer,
I think that the answer to one of your (possibly rhetorical) questions in your article, namely what has happened to medical science, or was it always this bad, is this: it was always this bad!
For instance, vaccine “science” is hundreds of years older than AIDS “science”, but it isn’t any higher quality.
Dental fluoride “science” is merely decades older, but it isn’t of any higher quality than AIDS “science” either.
General nutrition science, such as the relationships between dietary fat, cholesterol, carbohydrates, salt, and health, was until very recently also exceedingly poor. It’s starting to get a bit better now, I think, but very slowly.
I’m fairly sure that the bottom line is that when there’s a lot of money to be made, the quality of the science goes right out the window, and it stays there too. This shouldn’t really surprise anyone. I would say that AIDS is unfortunately not exceptionally bad science; it is run-of-the-mill bad science which is exceptionally profitable and apparently exceptionally deadly.
Lucas said
I’m quite confused. I understand Henry had some criticisms of the paper, ONE of which was about not considering latency. (I don’t understand this criticism at this point.) Then “Tal” said this one criticism is unfounded. Henry didn’t agree, and put in a figure. Then Tal upheld his criticism of Henry’s “latency consideration” criticism, and remarked that the figure is not relevant, for a reason I also don’t understand. Then Darin said that he agrees the figure is irrelevant (again, with little explanation), that he agrees Henry’s “latency consideration” criticism is unfounded, and then proceeded to argue about the relevance of discussing latency. Then Henry again added more on latency; unfortunately I’m still at a loss whether it’s a general remark, or about the paper, or the figure.
I would just suck it up, but I’m also a bit worried because of some presumably missing comments about “speculation” regarding some CASCADE data (the speculation removed too?). The discussion of the latency issue in the post and in the following comments is for me quite unclear, a sad 😉 fact with which I wouldn’t normally bother anybody but myself.
Martin said
That was an excellent analogy per Darin Brown (Szaszian I would say) for those who don’t understand the concept of fevers and viruses; the study pointed out by Dr. Brown is akin to building an excellently executed eleven-hour watch.
Henry Bauer said
Steve:
Your comment raises quite large issues. My cri de despairing coeur was stimulated by the poor quality of so much HIV/AIDS science within the bounds of what’s already well established as good practice. I don’t know enough about vaccine, dental fluoride, or nutrition matters to judge—and you didn’t specify—whether the “bad” refers more to the state of knowledge or to inferior approaches to gathering the knowledge.
I agree wholeheartedly, though, that money-related matters exert a corrupting influence, particularly since so much research nowadays requires large investments in human effort as well as infrastructure. That’s why there are now, in basic or fundamental research, knowledge monopolies and research cartels, see http://henryhbauer.homestead.com/21stCenturyScience.pdf
Darin Brown said
Lucas,
My reason for dismissing the relevance of the Fauci representation with respect to early excess mortality is that it alleges to represent individual biological parameters for a single hypothetical patient, not epidemiological numbers like death rates. The “flat” curve running along the “baseline” in the Fauci graph actually represents viral culture titre, not death rate.
As to my comments on “latency”, I was not commenting on the “relevance” of latency, but something more fundamental: the epistemological issues surrounding usage of the term “latency” itself.
My point was that we have to be very careful about language: WHAT language, WHOSE language. Terms carry theoretical baggage. For example, the term “false positive” implies that a gold standard exists. Without a gold standard, there is no such thing as a “false positive test result”. Similarly, without a gold standard, there is no such thing as a “non-specific test”. By simply using these terms, we make the implicit assumption that a gold standard exists.
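(A toy illustration of that baggage: even the routine arithmetic of test interpretation cannot be written down without presupposing a gold standard, because “sensitivity” and “specificity” are defined only relative to one. Numbers below are invented:)

```python
# Toy positive-predictive-value calculation. "Sensitivity" and
# "specificity" are themselves defined against some gold standard;
# without one, none of these quantities means anything. Numbers invented.
sensitivity = 0.99
specificity = 0.98
prevalence = 0.005   # hypothetical: 0.5% of those tested truly infected

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.1%}")   # ~19.9%: most positives would be "false"
```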
The term “latency period” connotes more than simply a duration of time. It is implicit in the use of the term “latency period” that an infection has occurred, and that such infection has subsequently caused signs and symptoms of disease. By simply using the term “latency period”, one makes such conclusions.
Of course, one may use such terminology in a “devil’s advocate” sense, showing that orthodox interpretations of terminology ultimately lead to contradictions or absurdities. One may also use such terminology much as Dr. Bauer uses the term “HIV prevalence” in his book and articles, as a convenient shorthand for a more precise but necessarily clumsier phraseology. But without some kind of prefacing or signalling by the author, these types of usages can frequently be misconstrued by the reader.
Henry Bauer said
Lucas:
I regret and apologize for my part in causing confusion. I trust Darin’s response clarifies re the figure and the matter of latency.
As stated in my acknowledgment, I had speculated about reasons for decreased excess mortality on the basis of US data and AIDS definition and removed that when “FdeT” pointed out their irrelevance. Since they are irrelevant, I leave it at that!
Regarding the figure, I was trying to say that since it shows rather sharp breaks of curvature toward the beginning and end of the clinically asymptomatic period, it seems to me unlikely that the distributions of those time-periods in groups of individuals would be “normal” distributions, unlikely that the onset of the biological mechanisms responsible for those rapid changes would correspond to a “normal” distribution, and certainly not one with as broad a spread as “FdeT” asserted.
I was wrong to talk about the latent period in connection with the initial calculations used, as “FdeT” and Darin both pointed out. It remains relevant to whether the claimed decrease in excess mortality, especially within 5 years, is any evidence at all of benefit from HAART.
I hope this helps.
MacDonald said
Gentlemen: since repetition is no sin in this company, allow me to sum up:
While we surely cannot accept the authority of Fauci’s hypothetical snapshot of the mortality of a supposedly representative HIV patient, there is no reason for the orthodoxy to be impressed with itself for managing to keep people alive a measly five years.
Even on its own terms (which would mean ignoring changing definitions of AIDS, slightly less toxic drugs, etc.), in order to attribute to HAART whatever progress the study authors may have found, one would have to assume a 27-year flat learning curve with regard to all other health and disease management factors, including better diagnoses, better complementary drug treatments, nutrition, general health and exercise, and human-to-human care and counseling.
https://hivskeptic.wordpress.com/2008/07/06/hivaids-scam-have-antiretroviral-drugs-saved-3-million-life-years/#comment-1045
However, I am not sure our unfortunately named interlocutor is guilty of any post hoc fallacy, as charged by Darin, since he does not appear to be arguing that HAART necessarily is the cause of the improvement or “HIV” necessarily the cause of AIDS.
It is REVEALING, however, that, despite the serious weaknesses admitted to by the authors themselves, this tentative five-years-no-increased-mortality finding is being touted (as predicted by yours truly, https://hivskeptic.wordpress.com/2008/06/11/smart-study-begets-more-cognitive-dissonance/#comment-988) as some kind of medical miracle:
“Thanks to improving anti-HIV treatment, people with HIV, in the first five years after diagnosis with HIV, now have mortality rates similar to those seen in the general population”
http://www.aidstruth.org/new/node/80
And THAT hook, gentlemen, no amount of pseudonymity will get you off.
MacDonald said
I have a horrible penchant for dreaming up silly headlines, I know, but please indulge me:
VIRTUAL BREAKTHROUGH: THE SPEED OF HIV MEASURED.
Or in the equally silly words of the mini-article explaining the science to us layfolk:
“Novel Computational Model Describes The Speed At Which HIV Escapes The Immune Response”.
In genuine science-speak (funding-application language), it looks like this:
“Dynamics of Immune Escape during HIV/SIV Infection.”
The mini-article states: “Depending on the diversity of the immune system, the virus will either be controlled effectively or accumulate detrimental mutations.” The escape rate, in other words, varies depending on all kinds of variables, which makes it no mean feat by any standard to “describe the speed” of HIV.
So what does this description of HIV-speed look like translated from the language of mathematics? In what could be the winning bid for Prof. Bauer’s Golden Fleece Award, the following sensational information is offered:
“(The authors) illustrate that the virus often evades the immune response very slowly, on a timescale of years.”
Perhaps this is what was earlier referred to as the “latent period”, who knows? But the real significance of the model, according to the researchers, is that it may help us devise strategies for vaccine development. That would indeed be a boon, since Dr. Fauci himself has just cancelled all human HIV-vaccine trials. But how does the computational model arrive at its suggestions for novel vaccine-strategies? The answer brings us to the core Mystery of HIV-science:
“it remains difficult to fully understand the dynamics of immune escape, as data from infected patients is relatively sparse. Knowing this, Drs. Christian Althaus and Rob De Boer performed computer simulations to help interpret longitudinal data derived from HIV-infected patients”.
The degreed computer nerds took “sparse” data and fed it into a model based on their favourite assumptions about interactions of the immune system and HIV:
“Several studies have shown that cytotoxic T lymphocytes (CTLs) play an important role in controlling HIV/SIV infection. Notably, the observation of escape mutants suggests a selective pressure induced by the CTL response. However, it remains difficult to assess the definite role of the cellular immune response. We devise a computational model of HIV/SIV infection having a broad cellular immune response targeting different viral epitopes.”
According to the authors, this model of HIV/SIV (why not lump together two different viruses, infecting and affecting entirely different species?), based on “a broad cellular immune response targeting different viral epitopes”, will now become a powerful tool in devising strategies for vaccine development.
But what was it exactly that was wrong with HIV-vaccine research in the first place? Here’s Anthony Fauci, via Lawrence K. Altman and the New York Times:
“Dr. Fauci said he reached his decision to cancel the coming trial after meeting with scientists to try to understand why the Merck vaccine had failed. He said he had concluded that scientists must go a step at a time because they did not yet know fundamental facts like which immune reactions are the most important in preventing the infection.”
These “still unknown fundamental facts” are exactly the ones that the researchers have now fed into the new computer model designed to help devise strategies for the fiasco-haunted HIV-vaccine research — whose only accomplishment has been to show us that the fundamental facts we feed into our models are unknown… In short, another perfect scientific tail-chase brought to you courtesy of lavish taxpayer-driven AIDS-funding.
But we cannot deny the creators of this computer model their claim that they have thereby moved from unknown to known and advanced our knowledge; so perhaps the headline should be:
METAPHYSICAL BREAKTHROUGH: THE SPEED OF HIV MEASURED
http://www.sciencedaily.com/releases/2008/07/080717201840.htm
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000103
http://topics.nytimes.com/top/reference/timestopics/people/a/lawrence_k_altman/index.html?inline=nyt-per
CathyVM said
The Inscrutable Mootatable Virus
O thus we toil o’er germ
Unknowing, unseeing
Blind we were, as Keller
But not mute
Glutted, 200,000 worthy scribbles
Most studious germ art thee
And yet still, we strive to know you
Not yet astute
Your essence doth evade us
Forsooth, we valiantly fought but failed
Ne’ertheless enriched endowment
Adrift in a mire of loot
Still, we eschew the cowl of shame
Lucifer remains unpaid
We prevail until the bovines’ homecoming
Beneficence forsook
We know not how the germ doth wound
In the corporeal sense
We default now to cauldron
Toe of toad, eye of newt
Double, double, toyle and trouble
Fire burn and cauldron bubble
Cool it with the baboon’s blood
Our fabrication we’ll compute
Marcel said
Reply to Bauer statement: “Little if any excess mortality would in any case be expected within 5 to 10 years of ‘infection’ in the absence of treatment.”
Excellent stuff, Henry. But aren’t you ignoring excess mortality that is caused, not by a virus, not by the drugs, but by the psychosocial turmoil and trauma caused in a person by the poz diagnosis? This might include suicide, but also the appearance of symptoms simply due to belief in the doctors and the disease. Believing that you will become sick is fully capable of producing symptoms. I recall there are studies that have demonstrated that; see Matt Irwin’s “AIDS and the Voodoo Hex,” which I hope is still on the web, for some examples.
Also, someone who is diagnosed poz might take up unhealthy habits just to escape from their overwhelming despair. Alcoholism, recreational drugs…they might even deliberately live dangerously by driving a motorbike or something, simply because they believe they have nothing to lose. Depression and social isolation alone, produced by the diagnosis, can cause death, can’t they? Not to mention being rejected by your family and cast out of your home, or being fired, or beaten by your husband, as happens in South Asia to diagnosed people. People can actually die of hunger because the diagnosis causes them to lose the financial means of survival, not to mention loss of emotional support from friends and family who reject them, and loss of all hope for the future.
Now, I’m not sure how this fits with the subject at hand. If there is really no excess mortality in the first 10 years after infection, then I guess psychosocial factors aren’t an issue. I suspect, though, that they ARE an issue in places like SE Asia, where the diagnosis is received a hell of a lot differently than it is received by self-destructive, hypochondriac San Francisco gay sophisticates (“Oh darn, I’m poz! Well, where shall we go for dinner tonight?”). I’d be interested to know whether people do die at an increased rate during the latency decade, and if so, what they died from. And to see this info categorized by location and cultural milieu.
AidsTruth: “Thanks to improving anti-HIV treatment, people with HIV, in the first five years after diagnosis with HIV, now have mortality rates similar to those seen in the general population”
Which “people with HIV” is Aidstruth talking about? I would bet that mortality would be much greater than normal in societies where “Hiv” is a stigma that gets you ostracized to the point of being an untouchable, such as SE Asia and India, and that mortality would be closer to normal in societies where most of the diagnosed are gays who often welcome the diagnosis, as it’s such an important part of gay identity. This would probably be true with or without ARVs. (Another interesting question: are the standards for initiation of ARV therapy different in San Francisco and SE Asia? I believe I have seen info that indicates that ARVs are administered in SEA way earlier than they are in the US, where ARVs are supposed to be delayed until T-cell counts reach a certain low level. As with HIV testing, wherein the standards for calling someone positive are much less rigorous in places like SEA than they are in the US, perhaps the standards for initiating the life-saving ARVs differ from place to place as well.)
So, my gut feeling is that something must be wrong here, and that people who are not gay sophisticates must be dying at greater than normal rates during the latency period, depending on the variables I mentioned. But I have neither the means nor the ability to crunch the data to see if I’m wrong or not.
Henry Bauer said
Marcel:
Very good points indeed, thank you.
I didn’t know Irwin’s piece, but have been realizing increasingly how powerfully depressing being pronounced HIV+ can be, with definite consequences for health. I have a little about it (nocebo effect) in my book, but I’ve heard personal testimonies since then that make the point much more forcefully. So, you’re right, one should not be surprised at excess mortality among HIV+ people from the moment of diagnosis, for reasons of nocebo if not physical HIV/AIDS.
I’m not inclined to pay much respect to that particular study, but if there is indeed no measurable excess mortality, perhaps the nocebo effect has been on average not drastic enough to bring about actual death (as distinct from decreased quality of life), even while some individuals may have reacted as you describe and died as a result.
As for crunching the data, part of the trouble (as usual with HIV/AIDS) is that any number of variables one would like to know about are not mentioned.
The AIDStruth comment is just citing that study that I was criticizing, which has plenty wrong with it.
MacDonald said
Marcel,
It is 27 years hence!! The nocebo effect is nowhere near as strong any more as it was in the ’80s, when people were often diagnosed with real clinical symptoms as well.
When the good doc. tells you you’re going to survive another decade or two because of the drugs, those susceptible will believe that as much as they will believe an immediate death sentence.
Marcel said
But, MacDonald, these people have also seen friends and associates waste away and die while on the meds, despite the doc’s promise of a long life. So you can’t say that nocebo has lost all its power.
Also, being told you might live another 10 or 20 years because of the ARVs is scant comfort when a person is 20 or 25 and had hoped, before the poz result, to find a loving spouse, have a family, and all that jazz, and live happily ever after. Surviving longer just prolongs their misery and loneliness as an outcast of society who can’t even have sex without being prosecuted for attempted murder. And it doesn’t do anything for depression, which can cause its own physical decline.
Those buffalo humps don’t do much for the arv-saved person’s self-esteem either. Imagine being a hunchback for 10 to 20 years before you die. Was Quasimodo happy?
Remember, we are comparing two utterly different groups of victims here. San Francisco or New York gays who subconsciously want to die and for whom the diagnosis gets them tons of emotional support, meals on wheels, etc. from fag hags and aids charities; and poor people from SE Asia or Africa who really want to live, want to have a family, and get zero emotional support, only condemnation and rejection from their unsophisticated families and societies. For them, emotionally, the diagnosis is like getting hit by a truck driven by a drunken Bob Gallo.
Henry Bauer said
MacDonald, Marcel:
Talking about depression following an HIV+ diagnosis, see http://news.bbc.co.uk/2/hi/uk_news/england/manchester/7532627.stm
“Michael Ashton told an employment tribunal in Manchester he had to take time off work because of his condition. . . He informed his department boss of his condition on 3 August 2006 — the day after he was diagnosed — and went on sick leave until 11 August. . . ‘I was struggling to come to terms with my diagnosis and I felt I was being treated unfairly. . . . There was a large amount of stress going on . . . ‘”
MacDonald said
Yes, yes, Marcel. But the Cascade report was not based on Cambodian ladies of the night. It’s flawed through and through anyway.
Around here a 20-year-old HIV+’s life expectancy has just increased from 23 to 43 years in ONE YEAR!!
http://www.abc.net.au/news/stories/2008/07/25/2315032.htm
That’s a significantly greater increase than the one from AZT monotherapy to inhibitor combos.
http://www.ncbi.nlm.nih.gov/pubmed/17413689
Those truly are wonder drugs.