HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Posts Tagged ‘I J Good’

Elsevier-Gate

Posted by Henry Bauer on 2010/03/20

The disgrace and fall of President Nixon had begun with an ill-advised, pointless little burglary at the Watergate, a group of office buildings. The attempts to COVER UP the burglars’ connections to Nixon’s staff involved so many lies and eventually embroiled so many people that Nixon could no longer govern, all credibility gone despite his astonishing TV assertion that “I am not a crook!”

The media have since that time appropriated the suffix “Gate” as shorthand for any scandal about an attempted, stupid, fumbled cover-up that threatens to bring down some house of cards that earlier had seemed impregnable. So when e-mails were discovered at the University of East Anglia’s Climatic Research Unit showing that the climate gurus had been conspiring to suppress data contradicting their theories, the event was naturally enough publicized as “Climate-Gate”. The Director of that Climatic Research Unit soon resigned, and an “independent” international panel was formed to consider the soundness or otherwise of what had been promulgated for many years by the IPCC — Intergovernmental Panel on Climate Change. It remains to be seen, of course, whether that “independent” panel will be genuinely independent enough to include such highly qualified dissenters from the mainstream dogma as Patrick Michaels, former climatologist for the Commonwealth of Virginia, or physicist Fred Singer, emeritus professor from the University of Virginia and from George Mason University.

Now comes Elsevier-Gate:

In August of 2009, HIV/AIDS vigilantes persuaded Nobelist Barré-Sinoussi to allege to Elsevier that questioning HIV as the cause of AIDS represents a potential threat to global public health; which terrified Elsevier’s Vice-President Glen Campbell sufficiently that he had two articles already accepted by Medical Hypotheses, and already posted on-line as in press, withdrawn — without bothering to inquire into the plausibility of Barré-Sinoussi’s assertion by, say, consulting the journal’s editor, or its editorial board, let alone the authors of the articles. Perhaps he was terrified less by the medical or scientific substance of the assertion than by the threat to boycott Elsevier journals, and to have the National Library of Medicine cease abstracting Medical Hypotheses in PubMed.

The vigilantes had in fact also petitioned the National Library of Medicine to that effect, a petition that was carefully considered and then rejected — even though Medical Hypotheses had over the years published at least a couple of dozen articles questioning HIV/AIDS theory.

Someone at Elsevier must have realized that Campbell’s precipitate action was a blunder, so another V-P, Chris Lloyd, was given the task of fixing the mess. Lloyd’s actions, however, have been just as discourteous, ill-advised, and inept as those of Campbell — or those that led to the fall of Nixon. Lloyd set up “a panel” to look into, not the withdrawal of the articles, but the fact that Medical Hypotheses did not normally use peer review — which had been the chief reason why the journal had been founded in the first place! The whole point was to provide a forum where ideas that mainstream reviewers would not find publishable could be shared with the medical-scientific community, the nature of the ideas being described plainly enough by the journal’s title of HYPOTHESES. This was as plain when Elsevier took over the journal as when the journal had been founded by the distinguished biochemist David Horrobin, so it is crystal clear that this paneling was intended to cover up something (Campbell’s blunder) rather than to produce pearls of wisdom. To make the conspiracy even plainer, Lloyd kept the membership of the panel and its precise terms of reference secret. Unremarkably, the panel delivered the opinion — or so Lloyd said* — that some group of qualified people should consider whether peer review should be made a regular part of the operations of Medical Hypotheses — which had been founded precisely so as NOT to be constrained by the conservatism of peer review. One wonders whether the panel knew of that history and its rationale. The group of allegedly qualified people that Lloyd then enlisted were drawn from the staff of another Elsevier publication, and their identities were again kept secret. Unremarkably, they recommended — or so Lloyd claimed* — that peer review be instituted — in other words, that Medical Hypotheses no longer be Medical Hypotheses and become just another journal disseminating the current mainstream consensus.**

In the meantime, one of the authors of one of the withdrawn articles had sued Elsevier in a Dutch court, since the publicly posted description of the reasons for the withdrawal of the already accepted articles represents a libelous statement. Suddenly Lloyd was able to produce unsigned “reviews” of the articles in question by five anonymous reviewers, unremarkably enough finding the articles unsuitable for publication — albeit for reasons other than that they constituted a threat to global public health or were potentially libelous, which were the originally stated reasons for withdrawal. In other words, even these “reviews” found that the withdrawal had not been justified on its own terms. Internal evidence in those “reviews” demonstrates how hastily they were composed with the single purpose of justifying withdrawal of the articles: there are not only typos signifying unseemly haste but also ad hominem remarks that should have no place in scientific discourse, and the “reviews” fail to address substantively the actual points made in the articles. Most particularly, the reviews failed to address the fact that the Duesberg article presented evidence, data from mainstream sources, that claims of 300,000 unnecessary AIDS deaths in South Africa were based on computer modeling in which the number of South African AIDS deaths was said to be about 25 times greater than the numbers for AIDS deaths published by the official South African statistics agency.

From the viewpoint of AIDS Rethinkers and HIV Skeptics, it is encouraging that Elsevier-Gate is beginning to attract wider attention:

— In January, at the Times Higher Education website, innumerable comments from people not previously engaged in HIV/AIDS matters spoke to the value of a journal like Medical Hypotheses that circumvents the traditional censorship of genuine novelties that is inevitably imposed by peer review: Zoë Corbyn, “Unclear outlook for radical journal as HIV/Aids deniers evoke outrage”, 14 January 2010; “Publisher attempts to rein in radical medical journal — Editor rejects proposal to have submissions peer reviewed”, 23 January 2010; “Implement peer review or resign, controversial journal’s editor told — Ultimatum spells end for Medical Hypotheses in its current form”, 10 March 2010.

— Now Nature’s website has also described the situation, giving us the opportunity to make public some of the details, like those mentioned above, that Elsevier has failed to disclose to enquiring journalists: Daniel Cressey, “Editor says no to peer review for controversial journal — Move demanded by publisher would ‘utterly destroy’ Medical Hypotheses”, 18 March 2010.

I posted the first comment there after having learned about the piece from Marco Ruggiero, who also promptly posted a comment. Several individuals not previously engaged in HIV/AIDS matters have added their views on peer review, interspersed with some inevitable know-nothing cries from an HIV/AIDS groupie; who will find, as J P Moore and others did at the Times discussions, that when a wider audience participates in these exchanges, the HIV/AIDS vigilantes find themselves clearly out-argued and out-numbered. A wide swath of non-scientists as well as scientists understands that the way to discredit bad or false science is to point out in what way it is bad or false. That’s what the supporters of HIV/AIDS dogma cannot do, because it is their own science that is bad and false. After more than a quarter of a century of intensive, well-funded research, they cannot answer these fundamental questions:

1. When exactly was it proved that HIV causes AIDS?

2. What are the scientific publications that constitute this proof?

3. By what mechanism does HIV destroy the immune system?

__________________________
FOOTNOTES:
* A former Department Head in a certain Chemistry Department — in days when Heads were dictators and not chairpersons — was wont to chair meetings of the various Departmental committees. At subsequent business meetings of the whole faculty, he would then announce, “Committee A has met; and it has been decided that…”. All perfectly true, even though the decision would always be his alone and irrespective of what the committee members might have advised.
** Should anyone doubt the value of publishing hypotheses, they might ask themselves why distinguished people would find it valuable, for instance the contributors to The Scientist Speculates (ed. I. J. Good, Basic Books, 1963) who include not only Good himself, internationally renowned for reviving Bayesian statistics, but also (for further example) J. D. Bernal, David Bohm, Sir Cyril Burt, Arthur C. Clarke, Dennis Gabor, Arthur Koestler, L. S. Penrose, N. W. Pirie, Michael Polanyi, Harlow Shapley, R. H. Thouless, C. H. Waddington, Eugene Wigner, and more. The collection’s epigraph is “The intention of this anthology is to raise more questions than it answers”, in view of what everyone who really understands science knows, that the most important spur to progress is to ask the right questions. That was the value of Medical Hypotheses. It could point out that certain mainstream Emperors have no clothes, something that could never pass “peer review” no matter how obviously true it might be.

Posted in HIV absurdities, HIV does not cause AIDS, HIV skepticism, Legal aspects, prejudice, uncritical media | 22 Comments »

Statistics can lie, but Jack Good never did — a personal essay in tribute to I J Good, 1916-2009

Posted by Henry Bauer on 2009/04/12

A few years ago, as I was looking into HIV/AIDS data, I came across a claim that certain sets of “HIV” and “AIDS” numbers were correlated. The claim was presented as an image with shadings for the respective numbers, and the shadings do look similar. However, using the actual numbers that were also in the image, I calculated each ratio of “HIV” to “AIDS” and found that the ratios looked more like a random set than like a constant. So I stuck the numbers into an EXCEL worksheet and used the inbuilt CORREL function to derive the correlation coefficient. Lo and behold, it came out as 0.88, which represents — or should represent — a very respectable degree of correlation. Despite that respectable coefficient, the set of ratios doesn’t make it look as though “HIV” is correlated with “AIDS”.

For some three decades, I had the privilege and pleasure of regular visits with Jack Good. The next time I saw him after this conundrum about correlation, I told him about it. Drawing on his prodigious memory, he pointed to something he had written, which the index to his publications revealed as #792 (out of more than 2000): “Correlation for power functions”, Biometrics, 28 (Dec. 1972) 1127-1129. This remarks that it’s rather well known that the usual (product-moment) correlation coefficient is an inappropriate measure when the relation between two variables is not linear; and that the presumption is common that “inappropriate” means that if the calculation is nevertheless done, the result will be a small value for the correlation coefficient, indicating lack of correlation. To the contrary, Good showed that the (usual, product-moment) correlation coefficient between x and x² (x squared), or x³ (x cubed), etc., is close to 1 (typically >0.95).
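Anyone can verify this with a few lines of code. Here is a minimal sketch in Python, with invented positive data (numpy’s corrcoef computes the same product-moment coefficient as EXCEL’s CORREL):

```python
import numpy as np

x = np.linspace(1, 100, 100)            # hypothetical positive data
for y, label in [(x**2, "x^2"), (x**3, "x^3")]:
    r = np.corrcoef(x, y)[0, 1]         # the product-moment coefficient
    print(f"corr(x, {label}) = {r:.3f}")
```

Both coefficients come out well above 0.9, even though neither relation is linear and the ratio of the two variables is anything but constant.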

In common parlance, by two things being “correlated” we mean that when one changes, the other changes in the same direction and in the same proportion; in other words, that the correlation is linear and the ratio of the two variables is a constant. But “the” correlation coefficient that is most commonly used, the one built into packages such as EXCEL as the primary correlation function, measures only whether two variables tend to change in the same direction.

Consequently, statements about “correlation” are very likely to be misunderstood, misinterpreted, by the media, by the public — and by an unfortunately large proportion of doctors and scientists who “do their own statistics” using software packages and standard formulas.

This example is merely one isolated case of the abiding, deep, highly important problem of interpreting what “statistics” is supposed to tell us. Many years of informal education by Jack Good have taught me about some basic pitfalls that seem unsuspected by far too many people who quote and use “statistical data” like the ubiquitous “p” values.

The HIV/AIDS literature — like so many others — is full of articles in which particular relationships are said to be so at “p < 0.05”, or “p < 0.01”, or even “p < 0.001” or less. The unwary reader sees “p < 0.001” and accepts that there’s only one chance in 1000 that the claimed relationship is spurious. That’s an incorrect and misleading interpretation.

In the social sciences, the typical cut-off for “statistically significant” is “p < 0.05”, which is commonly given the interpretation that there’s less than 5 chances in 100, less than 1 in 20, that the claimed relationship is spurious, wrong, doesn’t exist. An apparently better interpretation is to emphasize that 1 in every 20 of such claimed relationships doesn’t really exist: of every 20 such claims made, 1 is wrong. But in truth, the chance is far greater than 1/20 that a relationship claimed at “p < 0.05” is wrong, simply doesn’t exist.
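A little arithmetic shows why. Here is a minimal sketch with frankly hypothetical numbers: suppose that only 1 in 10 of the relationships researchers test is real, and that a real one has an 80% chance of reaching significance:

```python
tested = 1000      # hypothetical batch of tested relationships
real = 100         # assumption: only 1 in 10 is actually real
power = 0.8        # assumption: a real effect reaches p < 0.05 80% of the time
alpha = 0.05       # a spurious effect reaches p < 0.05 5% of the time

true_hits = real * power                 # 80 genuine "discoveries"
false_hits = (tested - real) * alpha     # 45 spurious "discoveries"
print(f"{false_hits / (true_hits + false_hits):.0%} of significant findings are spurious")
```

On those assumptions, more than a third of the findings proclaimed significant at “p < 0.05” are spurious, far more than 1 in 20; the true proportion depends on how plausible the tested hypotheses were to begin with, which is exactly the Bayesian point taken up below.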

Those “p” values stem from an approach credited to, created by, the great early statistician R A Fisher, and it’s sometimes called “Fisherian”, though more often “frequentist”. That “p” stands for “probability”, and one of the things I learned from Jack Good is that the seemingly obvious meaning of “probability” is anything but clear, or obvious, or unambiguous. The frequentist meaning: toss a coin umpteen times, and the probability that it comes up “heads” can be estimated by counting the relative numbers of “heads” and “tails”. But that approach doesn’t cover such questions as, “What is the probability that God exists?”, which is a perfectly possible question with a perfectly intuitive meaning of “probability”.

Jack Good was one of the foremost pioneers in bringing into modern statistical applications the approach credited to the 18th-century Reverend Thomas Bayes. At any given moment, available evidence allows a judgment to be made about how probable the thing of interest is; that’s the “prior probability”, and it’s unashamedly subjective: different individuals can differ over what its value is, anywhere between 0 and 1. However, as experiments are done or observations made, evidence accumulates, and the prior probability is modified by a “Bayes Factor” that expresses the “weight of evidence” regarding the thing of interest. When sufficient evidence can be amassed, it doesn’t matter what prior probability one began with: the calculated values will converge to whatever the “true” probability is.
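A toy illustration of that convergence, using a hypothetical biased coin and the standard conjugate (Beta-prior) shortcut rather than Good’s full Bayes-Factor machinery: two observers begin with opposed priors about the chance of heads, and a thousand tosses drive their estimates together:

```python
import random

random.seed(1)
true_p = 0.7                                   # hypothetical true bias of the coin
flips = [random.random() < true_p for _ in range(1000)]
heads, tails = sum(flips), len(flips) - sum(flips)

# Beta(a, b) priors: a sceptic about heads and an enthusiast for heads.
for name, (a, b) in {"sceptic": (1, 9), "enthusiast": (9, 1)}.items():
    a, b = a + heads, b + tails                # conjugate Bayesian updating
    print(f"{name}: posterior mean Pr(heads) = {a / (a + b):.3f}")
```

Both posterior means land near the true value of 0.7 despite the opposed starting points.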

The Bayesian approach is more technically demanding than the Fisherian, and the latter offers those ready-made software packages and formulas. But it’s not sufficiently recognized just how fallible the Fisherian approach really is and how misleading it can be.

The citing of “p” values is typically done to establish a particular hypothesis as “statistically significant”. But that’s not what the calculation means. It actually measures how probable the observed data would be if the “null hypothesis” were true, the null hypothesis being that the claimed relationship does not exist. If the “p” value is small, the data are judged improbable under the null hypothesis, so you claim that your own hypothesis is correspondingly likely to be correct.
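To see concretely what the calculation does and does not deliver, consider a minimal sketch with invented data, 60 heads in 100 tosses, tested against the null hypothesis of a fair coin:

```python
from math import comb

n, observed = 100, 60
# one-sided p-value: Pr(at least 60 heads), computed under the null (fair coin)
p = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(f"p = {p:.4f}")   # roughly 0.028
```

The result, about 0.028, is “significant” by the usual cut-off; but it is a statement about the data assuming the null hypothesis, not a statement about how probable it is that the coin is actually biased.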

There are several problems with this. The most obvious, and trivial, is that the choice of cut-off for “statistical significance” is arbitrary. Social science typically uses “p < 0.05”. The harder sciences demand <0.01 or less, depending on the particular situation. What must NEVER be done is to equate “statistically significant” with “true” — but, of course, that’s exactly what is done in just about every public dissemination of “statistical facts”; and it’s implied as well in all too many research publications. One of Jack Good’s persistent campaigns was against the assignment of “p = 0” or “p = 1” to anything within the ken of human beings.

A worse problem with “p” values is their reliance on the null hypothesis: testing what one isn’t interested in instead of what one is interested in. Why is this done? Because it’s easy. The “normal distribution”, the “bell curve”, the Gaussian distribution, expresses the distribution of some measure around its average value when deviations from the average are due purely to chance, when they occur randomly. For example, toss a penny 100 times, and most often you will NOT get 50 heads and 50 tails; but you can calculate exactly how likely you are “by chance” to get 49 H and 51 T, or 100 H, or whatever (provided, of course, the coin is perfectly balanced and not biased).
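The penny calculation is exact and elementary, as this sketch shows (math.comb counts the ways of choosing which tosses come up heads):

```python
from math import comb

def prob_heads(k: int, n: int = 100) -> float:
    """Probability of exactly k heads in n tosses of a fair coin."""
    return comb(n, k) / 2**n

print(f"P(50 heads)  = {prob_heads(50):.4f}")    # about 0.08
print(f"P(49 heads)  = {prob_heads(49):.4f}")    # about 0.08
print(f"P(100 heads) = {prob_heads(100):.2e}")   # about 7.9e-31
```

Even the single most likely outcome, exactly 50 heads, turns up less than 8% of the time.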

So an unspoken assumption is that the quantitative measure of the thing being investigated has a distribution like the normal curve. Some other distributions have also been studied, in particular the asymmetrical Poisson distribution, but this doesn’t help with the basic problem: if you want to use a frequentist or Fisherian approach, you need to know beforehand how the possible values of the variable you want to measure are distributed; and you can’t know that. Thereby an inherent uncertainty, an unreliability, is built in that is not commonly recognized, let alone openly acknowledged.

Nor is that all. When the null hypothesis is taken to be disproved to some (subjective!) level of significance, the commonly drawn conclusion is that the hypothesis “being tested” is confirmed. But that presumes this hypothesis to be the only alternative to the null hypothesis; and there’s absolutely no warrant to assume that you thought of the only possible hypothesis capable of explaining the phenomenon you’re interested in, or the best one, the one most likely to be true.

So the Fisherian approach is beset by uncertainties, of which the most troubling are occult, hidden, not revealed when “p” values are cited. By contrast, the Bayesian approach places its subjective aspects in the open and up front, in the choice of a prior probability. Moreover, in calculating the Bayes Factor one is gauging probabilities relative to one’s hypothesis, and thereby one may be continually reminded that there might be other hypotheses equally or even better able to accommodate the data.

****************************

There are nowadays many expositions of Bayesian statistics and its superiority over the Fisherian because of the latter’s weaknesses. One exposition I found particularly readable is by R A J Matthews — “Facts versus Factions: The use and abuse of subjectivity in scientific research”, European Science and Environment Forum Working Paper (1998), reprinted (pp. 247–282) in J. Morris (ed.), Rethinking Risk and the Precautionary Principle, Butterworth, 2000. Matthews has also provided a concise overview of how misleading “p” values can be, increasingly so as the inherent (prior) probability of the thing you’re interested in differs from 50:50:

[Image: table from Matthews showing the “p” values needed to establish claims of differing prior probability]
For example, if your initial belief is that there’s only 1 chance in 100 that something is true (moderate skepticism: an observation of it is a fluke 99 times out of 100), to establish it as real you need not a “p” value of 0.01 but of 0.000013 (1.3 × 10⁻⁵); in other words, p-values are a vastly insufficient criterion for estimating “statistical significance”.

***************************

I began this essay because Jack Good had just died, and I’m going to miss him enormously. I learned so much from him, not only about matters of probability and statistics. But I’ll mention one more example of the latter. “The birthday problem” is a commonly used demonstration of how wrong are our untrained estimates of probability: how many people do you need to gather in order to have a 50:50 chance that two of them will have the same birthday (day and month, of course, not year)? The answer, which surprises most people, is 23. A usefully simple explanation is that the number of pairs of people, which is what matters, increases much more rapidly than the number of people. Jack once extended that to the perhaps even more surprising numbers needed to have a 50:50 chance of 3 people with the same birthday (83), or 4 (170), or more — he gave a formula for calculating the result for any given situation (Good’s publication #2323, “Individual and global coincidences and a generalized birthday problem”, J. Statist. Comp. & Simul., 72 [2002] 18-21).
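For the pairwise case the classical calculation is short enough to sketch here (ignoring leap years; the 3-or-more cases require Good’s generalization):

```python
def p_shared(n: int) -> float:
    """Chance that at least two of n people share a birthday (365-day year)."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1.0 - p_distinct

n = 1
while p_shared(n) < 0.5:
    n += 1
print(n, round(p_shared(n), 3))   # 23, about 0.507
```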

A concise but accurate, though understated, bio-obituary of Jack Good is at the Virginia Tech website. What you won’t find there is that this genius, utterly devoted to the life of the mind, was the best possible company, interested in and fascinating about everything under the sun, endlessly witty, able to find humor everywhere and to express everything humorously; a most cultured, erudite person, and also as gentle and civilized and courteous and without malice as one could ever find. It is an extraordinary blessing and gift to have known him.

Posted in experts, HIV/AIDS numbers, uncritical media | 2 Comments »