HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Posts Tagged ‘conflicts of interest’

Science Studies 101: Why is HIV/AIDS “science” so unreliable?

Posted by Henry Bauer on 2008/07/18

Recent comments and e-mails reminded me of my career change, about 3 decades ago, from chemist to science-studies scholar. I had begun to wonder: What is it exactly that has made science so strikingly reliable?

(This is a long post. If you prefer to read it as a pdf—of course without hyperlinks to some of the on-line references—here it is: sciencestudies101).

Over the years, teaching chemistry and publishing research in electrochemistry, I had become increasingly aware that research practices and practitioners differ significantly from the ideal images that had attracted me (1). My education, like that of most scientists, had been strictly technical: chemistry, physics, math, biology, statistics. Recreational reading had added some history of chemistry, which also focused on the technical aspects—progress, discoveries, breakthroughs. We were not exposed to history, philosophy, or sociology of science in any meaningful way; nor are most people who study science even nowadays.

Mid-20th century, that lack of exposure to the context and significance of scientific activity was partly a matter of Zeitgeist, I recognize in hindsight. Philosophy of science was rather in flux. History of science as a whole was not so different in approach from the history of chemistry I had read—and perhaps not so different from how history in general was being taught: as milestones of achievement made by great individuals (largely, of course, men). Sociology of science had been founded only in the late 1930s. It was the 1960s before historians of science and philosophers of science began to engage seriously with one another, an engagement illustrated by Thomas Kuhn’s “The Structure of Scientific Revolutions”. Sociologists of science, too, began to engage with the historians and philosophers of science.

Following World War II, some scientists and engineers were looking for ways to make their knowledge an effective influence in public policy. Emblematic of this quest was the Bulletin of the Atomic Scientists. Starting about 1960, a variety of free-standing academic courses, a few research centers, and some organized academic programs were founded under the rubric of “science and society”. These science-based ventures and the history-philosophy-based ones soon recognized each other as concerned with the same issues, yet even after a half-century, no truly integrated multi-disciplinary approach to understanding scientific activity has matured into an overall consensus (3). There persists a distinct internal division between those whose backgrounds are in the practice of science and technology and those whose backgrounds are in the humanities and social sciences (3, 4, 5). But despite differences over modes of interpretation and what is most worth looking into, there has accumulated a body of agreed facts about scientific activity. Most important for the present purpose is that many of those facts about science are at variance with commonplace conventional wisdom. Misconceptions about scientific activity are pervasive, not least among practicing scientists and medical practitioners.

I was lucky enough to participate in the early days of one of the first programs in the world in what has become known as “science and technology studies” (STS). At Virginia Tech, we began with physicists and chemists, economists and sociologists, mathematicians, statisticians, political scientists, and others as well, telling one another how we thought about science. We scientists learned to be less sure that our research reveals unchanging, objective, universal facts about the real world. The humanists and social scientists learned that the physical and biological sciences uncover facts about the real world that are more trustworthy than the knowledge accessible in such disciplines as sociology. We learned also how different are the viewpoints and intellectual values to which we are schooled in the various disciplines: in a sense, the differences are not so much intellectual as cultural ones (6, 7, 8). I learned even more about such cultural differences between academic fields through having responsibility for the variety of disciplines embraced by a college of Arts & Sciences (10).

A salient fact is that “the scientific method” is more myth than reality (2, 11). What makes science relatively reliable is not any protocol or procedure that an individual scientist can follow; it is the interaction among practitioners as they critique one another’s claims, seek to build on them, and modify them, under constraints imposed by the concrete results of observations and experiments. Because individual biases predispose us to interpret the results of those observations and experiments in congenial ways, the chief safeguard of relative objectivity and reliability is honest, substantive peer review by colleagues and competitors. That’s why I was grateful to “Fulano de Tal” when he pointed to errors in one of my posts: we rethinkers do not have the benefit of the organized peer-reviewing that is available—ideally speaking—in mainstream discourse [see Acknowledgment in More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death, 12 July 2008].

Because proper peer-review is so vital, conflicts of interest can be ruinously damaging (12, 13). Recommendations from the Food and Drug Administration or the Centers for Disease Control and Prevention are too often worthless—worse, they are sometimes positively dangerous (14)—because in latter days the advisory panels are being filled overwhelmingly with consultants for drug companies. That’s not generally enough appreciated, despite a large and authoritative literature on the subject (15-20).

Lacking familiarity with the findings of science studies, scientists are likely to be disastrous as administrators. It was a Nobel-Prize winner who relaxed the rules on conflicts of interest when he headed the National Institutes of Health, with quite predictably deplorable consequences (21). There have been many fine administrators of technical enterprises, but few had been themselves groundbreaking discoverers. To convince the scientific community of something that’s remarkable and novel, a scientist must be single-minded, captivated by the idea and willing to push it to the limit, against all demurrers—very bad qualities in an administrator; the latter ought to be a good listener, an adept engineer of compromises, an adroit manager able to stick to principles with an iron hand well masked by a velvet glove.

Those who have the egotism and dogmatic self-confidence to break new ground also need luck to be on their side, for—as Jack (I. J.) Good likes to point out—geniuses are cranks who happen to be right, and cranks are geniuses who happen to be wrong: in personal characteristics they are identical twins (22, 23). This role of luck has important implications: it’s why Nobel-Prize winners so rarely have comparable repeat successes, and why they should not be automatically regarded as the most insightful spokespeople on all and sundry matters.

HIV/AIDS vigilantes like to denigrate rethinkers for not having had their hands dirtied by direct research on the matters they discuss. Historians and sociologists of science, however, know that some of the most acclaimed breakthroughs were made by disciplinary outsiders, who were not blinkered and blinded by the contemporary paradigm (24, 25).

Self-styled “skeptics” (26) like to denigrate heterodox views as “pseudo-science” just because those views are heterodox, ignorant of the fact that there are no general criteria available by which to judge whether something is “scientific”; and they tend also to be ignorant of the fact that “scientific” cannot be translated as “true” (2, 27, 28).

Most relevant to the question of the “truth” of scientific knowledge is that science and scientists tend to occupy something of a pedestal of high prestige in contemporary society; perhaps because when we think of “science” we also tend to think “Einstein” and other such celebrated innovators. But nowadays there are a great many run-of-the-mill scientists, and even considerably incompetent ones: “Science, like the military, has its hordes of privates and non-coms, as well as its few heroes (from all ranks) and its few field marshals” (29)—which serves to explain, perhaps, some of the examples of sheer incompetence displayed in HIV/AIDS matters (30). Pertinent here is the fact that much medical research is carried out by people trained as doctors; training for physicians’ work is by no means training for research.


Those are some of the ways in which the commonplace conventional wisdom is wrong about science, but there are plenty more (24, 25, 32, 33). Those misconceptions play an important role in the hold that HIV/AIDS theory continues to have on practitioners, commentators, and observers, and they need to be pointed out in answer to the natural question often put to rethinkers: “But how could everyone be so wrong for so long?”

That’s why Part II of my book (31) has the title, “Lessons from History”, with chapters on “Missteps in modern medical science”, “How science progresses”, and “Research cartels and knowledge monopolies”. (About research cartels and knowledge monopolies, see also 34, 35). I’m enormously grateful to Virginia Tobiassen, the fine editor who helped me with the book, not least for the opportunity to augment the technical Part I with this Part II and the Part III that recounts the specific details of how HIV/AIDS theory went so wrong.

I’ve come to understand a great deal more since the book came out, among other things that perhaps the crucial turn on the wrong path came when Peter Duesberg’s rigorously researched and documented argument against HIV/AIDS theory went without comment, even in face of an editorial footnote promising such a response (36). Just as virologists ignored Duesberg’s substantive critiques, so epidemiologists ignored the informed critiques by Gordon Stewart (37) and immunologists ignored the fully documented questions raised by Robert Root-Bernstein (38); and just about everyone in mainstream fields ignored John Lauritsen’s insights into data analysis and his insider’s knowledge of interactions among gay men (39).

Peer review in HIV/AIDS “science” lapsed fatally from the beginning and has not yet recovered. Thus the only real safeguard of reliability was lost, it sometimes seems, irretrievably.

1. “Are chemists not scientists?”—p. 19 ff. in reference 2.
2. Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992.
3. —— , A consumer’s guide to science punditry, Chapter 2 in Science Today: Problem or Crisis?, ed. R. Levinson & J. Thomas, Routledge, 1997.
4. —— , Two kinds of knowledge: maps and stories, Journal of Scientific Exploration 9 (1995) 257-75.
5. —— , The anti-science phenomenon in science studies, Science Studies 9 (1996) 34-49.
6. —— , Disciplines as cultures, Social Epistemology 4 (1990) 215-27.
7. —— , Barriers against interdisciplinarity: Implications for studies of Science, Technology, and Society (STS), Science, Technology, & Human Values 15 (1990) 105-19.
8. Chapters 11, 14, 15 (in particular) in reference 9.
9. Henry H. Bauer, Fatal Attractions: The Troubles with Science, Paraview, 2001.
10. Chapters 15, 16 in Henry H. Bauer (as ‘Josef Martin’), To Rise above Principle: The Memoirs of an Unreconstructed Dean, University of Illinois Press.
11. Chapters 4, 5 in reference 9.
12. Chapter 5 in reference 2.
13. Andrew Stark, Conflict of Interest in American Public Life, Harvard University Press, 2000.
14. Joel Kauffman, Malignant Medical Myths: Why Medical Treatment Causes 200,000 Deaths in the USA each Year, and How to Protect Yourself, Infinity Publishing, 2006.
15. John Abramson, Overdosed America: The Broken Promise of American Medicine, HarperCollins, 2004.
16. Marcia Angell, The Truth about the Drug Companies: How They Deceive Us and What To Do about It, Random House, 2004.
17. Jerry Avorn, Powerful Medicines: The Benefits, Risks, and Costs of Prescription Drugs, Knopf, 2004.
18. Merrill Goozner, The $800 Million Pill: The Truth behind the Cost of New Drugs, University of California Press, 2004.
19. Jerome Kassirer, On the Take: How Medicine’s Complicity with Big Business Can Endanger Your Health, Oxford University Press, 2004.
20. Sheldon Krimsky, Science in the Private Interest, Rowman and Littlefield, 2003.
21. David Willman, Los Angeles Times, 7 December 2003: “Stealth merger: Drug companies and government medical research”, p. A1; “Richard C. Eastman: A federal researcher who defended a client’s lethal drug”, p. A32; “John I. Gallin: A clinic chief’s desire to ‘learn about industry’”, p. A33; “Ronald N. Germain: A federal lab leader who made $1.4 million on the side”, p. A34; “Jeffrey M. Trent: A government accolade from a paid consultant”, p. A35; “Jeffrey Schlom: A cancer expert who aided studies using a drug wanted by a client”, p. A35.
22. Henry H. Bauer, “The fault lies in their stars, and not in them — when distinguished scientists lapse into pseudo-science”, Center for the Study of Science in Society, Virginia Tech, 8 February 1996; “The myth of the scientific method”, 3rd Annual Josephine L. Hopkins Foundation Workshop for Science Journalists, Cornell University, 26 June 1996.
23. Chapters 9, 10 in reference 9.
24. Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002.
25. Henry H. Bauer, The progress of science and implications for science studies and for science policy, Perspectives on Science 11 (#2, 2003) 236-78.
26. The mother of all “skeptical” groups is CSICOP, publisher of Skeptical Inquirer; see George P. Hansen, “CSICOP and the Skeptics: an overview”, Journal of the American Society for Psychical Research, 86 (#1, 1992) 19-63.
27. Chapters 1-3, 6, 7 in reference 9.
28. Henry H. Bauer, Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies, University of Illinois Press, 2001.
29. “Science as an institution”, pp. 303-6 in Henry H. Bauer, Beyond Velikovsky: The History of a Public Controversy, University of Illinois Press, 1984.
30. Pp. 110, 192, 195 in reference 31.
31. Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007.
32. Chapters 1, 4, 6, 7 in reference 2.
33. Chapter 12 in reference 9.
34. Chapter 13 in reference 9.
35. Henry H. Bauer, Science in the 21st century: knowledge monopolies and research cartels, Journal of Scientific Exploration 18 (2004) 643-60.
36. Peter H. Duesberg, Retroviruses as carcinogens and pathogens: expectations and reality, Cancer Research 47 (1987) 1199–220; Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation, Proceedings of the National Academy of Sciences, 86 (1989) 755–64.
37. Gordon T. Stewart, A paradigm under pressure: HIV-AIDS model owes popularity to wide-spread censorship. Index on Censorship (UK) 3 (1999).
38. Robert Root-Bernstein, Rethinking AIDS—The Tragic Cost of Premature Consensus, Free Press, 1993.
39. John Lauritsen, The AIDS War: Propaganda, Profiteering and Genocide from the Medical-Industrial Complex, ASKLEPIOS, 1993. ISBN 0-943742-08-0.

Posted in experts, HIV does not cause AIDS, HIV skepticism | 3 Comments »


Posted by Henry Bauer on 2007/12/15

Healthy people are told to take drugs known to cause severe “side” effects that some find literally intolerable (see WHAT HIV DRUGS DO, 15 December; OFFICIAL GUIDELINES FOR HIV TREATMENT, 14 December; ANTIRETROVIRAL DRUGS: HISTORY AND RHETORIC, 12 December; BEST TREATMENT FOR HIV: THIS YEAR’S ADVICE, LAST YEAR’S, OR NEXT YEAR’S?, 10 December 2007).

These debilitating drugs are recommended by people who have vested financial and career interests in them through connections with drug companies—a clear case of conflict of interest. This in itself should discredit, thoroughly and completely, everything in the Treatment Guidelines featured in those recent posts.

The Panel responsible for the December 2007 Guidelines had two co-chairs, both with financial connections to drug companies. Only 3 of the 24 Panel members disclaimed such a conflict of interest. More details about the connections of some HIV experts to drug companies can be found at

Recall that the most important criterion for each recommendation is “expert opinion” (BEST TREATMENT…, 10 December). Perhaps the most basic fact about conflicts of interest is that they influence opinions.

The significance of conflicts of interest is widely ignored in contemporary affairs in the United States, and not only in science and medicine. Circumstances have become accepted as normal which, if occurring in other countries, would be easily recognized as utterly corrupt. Here’s a synopsis of Conflicts of Interest 101.

If I teach a class that has my daughter in it, no conscious effort on my part can ensure that she will be treated in exactly the same manner as the other students. Subconscious and unconscious emotions can influence my thoughts and actions in ways that I am unaware of and therefore cannot do anything about. No matter how consciously honorable and upright I may be, no matter how unfailingly rule-abiding and law-abiding and ethical in all my other interactions, there can be no guarantee that my daughter will not receive some degree of special treatment.


Once upon a time, this was widely understood. That time was not even so long ago. When President Eisenhower nominated Charlie Wilson, the CEO of General Motors, as Secretary of Defense, Wilson was asked about a possible conflict of interest. His response was,

“What’s good for General Motors is good for the country,
and what’s good for the country is good for General Motors”.

The naive absurdity of that response was so widely appreciated at the time that it was featured in cartoons and comic strips and late-night comedy shows. Collections of quotations and infamous sayings still feature it. Check it out: just Google “What’s good for General Motors”.

Nowadays, the experts who draw up Treatment Guidelines, and the people who choose those experts to make up the Panel, are telling us implicitly that what’s good for the drug companies and for those who consult for them and get grants and presents from them is also good for the rest of us, for the people who will be using the drugs and for those who will be paying for the drugs.

At a conscious level, no doubt these are all ethical, well-intentioned, upright people who would never allow possible financial gain to sway their expert scientific judgment. But they are no more able to control their subconscious drives than I could control mine about my daughter in class.

In fact, to venture some amateur psychological speculation, the most consciously upright and ethical people are also those most likely to succumb to subconscious corruption, because they are least on guard against the possibility. Furthermore, human beings are protected against acknowledging to themselves their own misdeeds, through the phenomenon of compartmentalization: we can and do hold incompatible views simultaneously, and we manage to do things while imagining not only that we are not doing them but even that we never would do them:
Think Jimmy Swaggart and other sinners who preach against sin.
Think ex-Senator-to-be Larry Craig.
Recall the sublimely naïve response of David Baltimore when asked about a potential conflict of interest: “I think people are entitled to ask that of me. But I do think the statements and decisions I make come from the highest sense of integrity” (Chemical & Engineering News, 15 March 1982, 12).

Of course he thinks that. They all do. WE all do.
That’s why we have to be protected against ourselves, against doing for unconscious reasons what we would not wish to do. That’s why the only protection against undue influence is to have no conflicts of interest at all.



Andrew Stark, in his book “Conflict of Interest in American Public Life”, makes this very clear. There are three aspects of a conflict of interest:
1. The connections
2. The associated state of mind
3. Actions that may stem therefrom

For example:
1. My daughter is a student in the class I teach
2. My feelings about her and my attitudes toward grading
3. I assign a grade

Or:
1. X consults for a drug company
2. X’s judgments about the drug company’s products
3. X recommends wider use of the company’s product

Stark points out that there is no way of knowing or finding out what my state of mind was when I awarded the grade, whether my love for my daughter influenced it one way or the other, favoring her or overcompensating against favoring her; and there is no way of knowing whether X’s attitude toward the drug company influenced the decision to recommend its product.

However, any number of studies have amply confirmed that on average, statistically, such situations do influence the resulting actions: those experts with conflicts of interest are more likely than others to judge favorably toward approval of a drug. For instance, when the question was, should Vioxx and the other drugs in this class, Bextra and Celebrex, be allowed to remain on the market, this is how the experts voted:

Vioxx: Full panel: 17 yes, 15 no. Panel without those with conflicts of interest: 8 yes, 14 no.
Bextra: Full panel: 17 yes, 13 no. Panel without those with conflicts of interest: 8 yes, 12 no.
Celebrex: Full panel: 31 yes, 1 no. Panel without those with conflicts of interest: 21 yes, 1 no.
(Goozner, AARP Bulletin, May 2006, p. 10, citing New York Times, 25 February 2005).
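The statistical point in those tallies can be made concrete by computing the approval rates with and without the conflicted members; here is a minimal sketch, using only the figures quoted above:

```python
# Votes on keeping each drug on the market, as quoted above:
# (yes, no) for the full panel and for the panel minus members
# with conflicts of interest.
votes = {
    "Vioxx":    {"full": (17, 15), "no_coi": (8, 14)},
    "Bextra":   {"full": (17, 13), "no_coi": (8, 12)},
    "Celebrex": {"full": (31, 1),  "no_coi": (21, 1)},
}

def approval_rate(yes, no):
    """Fraction of votes cast in favor."""
    return yes / (yes + no)

for drug, panels in votes.items():
    full = approval_rate(*panels["full"])
    clean = approval_rate(*panels["no_coi"])
    # For Vioxx and Bextra the majority flips once conflicted
    # members are excluded; for Celebrex it barely moves.
    print(f"{drug}: {full:.0%} yes with conflicts, {clean:.0%} without")
```

For Vioxx the yes-vote falls from about 53% to 36%, and for Bextra from about 57% to 40%: in both cases a majority in favor becomes a majority against once the conflicted members are removed.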

Industry sponsorship made studies 4 to 8 times more likely to be favorable to beverage companies (Chronicle of Higher Education, 19 January 2007, A27-8). “Psychiatric drugs fare favorably when companies pay for studies” (Elias, USA Today, 25 May 2006, 1A). Those are just a few of the innumerable such social-science studies all confirming what plain common sense already knows. Doctors with financial stakes in analytical labs prescribe more lab tests than those without such interests. And so on and on.



Many individuals, institutions, and organizations simply don’t understand that when they blather about “apparent” or “negligible” or “potential” conflicts of interest. There are no such things. A conflict of interest is Stark’s aspect 1: the connections.
Either there is a connection or there isn’t.
“Apparent”, “negligible”, “potential” are intended to address aspect 3, to express doubt as to whether the connections will actually exert an influence. As Stark points out, that cannot be known. What is known, beyond any doubt, is that conflicts of interest exert a statistical influence. No matter how hard human beings may try, they cannot know or control or counteract their subconscious or unconscious motives.

To recuse people who have conflicts of interest, to exclude them from particular activities, is not to accuse them of being consciously swayed by those conflicts of interest, still less is it to accuse them of being consciously self-serving evil-doers. Recusing people with conflicts of interest is to their own benefit, to protect them from doing what they would not consciously wish to do. Recusing people with conflicts of interest is simply an acknowledgment of the fact that paths to Hell are paved with good intentions.

* * * * * *

Some of the above I’ve taken from my seminar “Ethics in science” posted at and being reprinted in “Against the Tide: A Critical Review by Scientists of how Physics and Astronomy get done”, M. López-Corredoira & C. Castro (Eds.), 2007 (in press).

While the titles of some of the following books may suggest sensationalist muckraking, that is far from the case. All the authors are respectable mainstream figures: senior tenured faculty, a couple of former editors of top medical journals, a former university president, and several respected journalists. All the books are written in matter-of-fact prose with proper citation of sources.

For corruption at the National Institutes of Health through conflicts of interest, see the series of articles by David Willman in the Los Angeles Times, 7 December 2003, pp. A1, A32-35, and 22 December 2004.

For further reading on the corruption of science and medicine in general, start with Daniel Greenberg (2001) “Science, Money and Politics: Political Triumph and Ethical Erosion” and Sheldon Krimsky (2003) “Science in the Private Interest”.

For the corrupting influence of Big Pharma, see John Abramson (2004) “Overdosed America: The Broken Promise of American Medicine”; Marcia Angell (2004) “The Truth about the Drug Companies: How They Deceive Us and What To Do about It”; Jerry Avorn (2004) “Powerful Medicines: The Benefits, Risks, and Costs of Prescription Drugs”; Merrill Goozner (2004) “The $800 Million Pill: The Truth behind the Cost of New Drugs”; Jerome Kassirer (2004) “On The Take: How Medicine’s Complicity with Big Business Can Endanger Your Health”.

For the commercial corruption of contemporary academe, see for example Derek Bok (2003) “Universities in the Marketplace: The Commercialization of Higher Education” and Jennifer Washburn (2005) “University, Inc.: The Corporate Corruption of American Higher Education”.

Posted in antiretroviral drugs, experts, prejudice | 2 Comments »