HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Science Studies 101: Why is HIV/AIDS “science” so unreliable?

Posted by Henry Bauer on 2008/07/18

Recent comments and e-mails reminded me of my career change, about 3 decades ago, from chemist to science-studies scholar. I had begun to wonder: What is it exactly that has made science so strikingly reliable?

(This is a long post. If you prefer to read it as a pdf—of course without hyperlinks to some of the on-line references—here it is: sciencestudies101).

Over the years, teaching chemistry and publishing research in electrochemistry, I had become increasingly aware that research practices and practitioners differ significantly from the ideal images that had attracted me (1). My education, like that of most scientists, had been strictly technical: chemistry, physics, math, biology, statistics. Recreational reading had added some history of chemistry, which also focused on the technical aspects—progress, discoveries, breakthroughs. We were not exposed to history, philosophy, or sociology of science in any meaningful way; nor are most people who study science even nowadays.

Mid-20th century, that lack of exposure to the context and significance of scientific activity was partly a matter of Zeitgeist, I recognize in hindsight. Philosophy of science was rather in flux. History of science as a whole was not so different in approach from the history of chemistry I had read—and perhaps not so different from how history in general was being taught: as milestones of achievement made by great individuals (largely, of course, men). Sociology of science had been founded only in the late 1930s. It was the 1960s before historians of science and philosophers of science began to engage seriously with one another, an engagement illustrated by Thomas Kuhn’s “The Structure of Scientific Revolutions”. Sociologists of science, too, began to engage with the historians and philosophers of science.

Following World War II, some scientists and engineers were looking for ways to make their knowledge an effective influence in public policy. Emblematic of this quest was the Bulletin of the Atomic Scientists. Starting about 1960, a variety of free-standing academic courses, a few research centers, and some organized academic programs were founded under the rubric of “science and society”. These science-based ventures and the history-philosophy-based ones soon recognized each other as concerned with the same issues, yet even after a half-century, no truly integrated multi-disciplinary approach to understanding scientific activity has matured into an overall consensus (3). There persists a distinct internal division between those whose backgrounds are in the practice of science and technology and those whose backgrounds are in the humanities and social sciences (3, 4, 5). But despite differences over modes of interpretation and what is most worth looking into, there has accumulated a body of agreed facts about scientific activity. Most important for the present purpose is that many of those facts about science are at variance with commonplace conventional wisdom. Misconceptions about scientific activity are pervasive, not least among practicing scientists and medical practitioners.

I was lucky enough to participate in the early days of one of the first programs in the world in what has become known as “science and technology studies” (STS). At Virginia Tech, we began with physicists and chemists, economists and sociologists, mathematicians, statisticians, political scientists, and others as well, telling one another how we thought about science. We scientists learned to be less sure that our research reveals unchanging, objective, universal facts about the real world. The humanists and social scientists learned that the physical and biological sciences uncover facts about the real world that are more trustworthy than the knowledge accessible in such disciplines as sociology. We learned also how different are the viewpoints and intellectual values to which we are schooled in the various disciplines: in a sense, the differences are not so much intellectual as cultural ones (6, 7, 8). I learned even more about such cultural differences between academic fields through having responsibility for the variety of disciplines embraced by a college of Arts & Sciences (10).

A salient fact is that “the scientific method” is more myth than reality (2, 11). What makes science relatively reliable is not any protocol or procedure that an individual scientist can follow, it is the interaction among practitioners as they critique one another’s claims, seek to build on them, and modify them, under constraints imposed by the concrete results of observations and experiments. Because individual biases predispose us to interpret the results of those observations and experiments in congenial ways, the chief safeguard of relative objectivity and reliability is honest, substantive peer-review by colleagues and competitors. That’s why I was grateful to “Fulano de Tal” when he pointed to errors in one of my posts: we rethinkers do not have the benefit of the organized peer-reviewing that is available—ideally speaking—in mainstream discourse [see Acknowledgment in More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death, 12 July 2008].

Because proper peer-review is so vital, conflicts of interest can be ruinously damaging (12, 13). Recommendations from the Food and Drug Administration or the Centers for Disease Control and Prevention are too often worthless—worse, they are sometimes positively dangerous (14)—because in latter days the advisory panels are being filled overwhelmingly with consultants for drug companies. That’s not generally enough appreciated, despite a large and authoritative literature on the subject (15-20).

Lacking familiarity with the findings of science studies, scientists are likely to be disastrous as administrators. It was a Nobel-Prize winner who relaxed the rules on conflicts of interest when he headed the National Institutes of Health, with quite predictably deplorable consequences (21). There have been many fine administrators of technical enterprises, but few had been themselves groundbreaking discoverers. To convince the scientific community of something that’s remarkable and novel, a scientist must be single-minded, captivated by the idea and willing to push it to the limit, against all demurrers—very bad qualities in an administrator; the latter ought to be a good listener, an adept engineer of compromises, an adroit manager able to stick to principles with an iron hand well masked by a velvet glove.

Those who have the egotism and dogmatic self-confidence to break new ground also need luck to be on their side, for—as Jack (I. J.) Good likes to point out—geniuses are cranks who happen to be right, and cranks are geniuses who happen to be wrong: in personal characteristics they are identical twins (22, 23). This role of luck has important implications: it’s why Nobel-Prize winners so rarely have comparable repeat successes, and why they should not be automatically regarded as the most insightful spokespeople on all and sundry matters.

HIV/AIDS vigilantes like to denigrate rethinkers for not having had their hands dirtied by direct research on the matters they discuss. Historians and sociologists of science, however, know that some of the most acclaimed breakthroughs were made by disciplinary outsiders, who were not blinkered and blinded by the contemporary paradigm (24, 25).

Self-styled “skeptics” (26) like to denigrate heterodox views as “pseudo-science” just because those views are heterodox, ignorant of the fact that there are no general criteria by which to judge whether something is “scientific”; they also tend to be ignorant of the fact that “scientific” cannot be translated as “true” (2, 27, 28).

Most relevant to the question of the “truth” of scientific knowledge is that science and scientists tend to occupy something of a pedestal of high prestige in contemporary society; perhaps because when we think of “science” we also tend to think “Einstein” and other such celebrated innovators. But nowadays there are a great many run-of-the-mill scientists, and even considerably incompetent ones: “Science, like the military, has its hordes of privates and non-coms, as well as its few heroes (from all ranks) and its few field marshals” (29)—which serves to explain, perhaps, some of the examples of sheer incompetence displayed in HIV/AIDS matters (30). Pertinent here is the fact that much medical research is carried out by people trained as doctors; training for physicians’ work is by no means training for research.


Those are some of the ways in which the commonplace conventional wisdom is wrong about science, but there are plenty more (24, 25, 32, 33). Those misconceptions play an important role in the hold that HIV/AIDS theory continues to have on practitioners, commentators, and observers, and they need to be pointed out in answer to the natural question often put to rethinkers: “But how could everyone be so wrong for so long?”

That’s why Part II of my book (31) has the title, “Lessons from History”, with chapters on “Missteps in modern medical science”, “How science progresses”, and “Research cartels and knowledge monopolies”. (About research cartels and knowledge monopolies, see also 34, 35). I’m enormously grateful to Virginia Tobiassen, the fine editor who helped me with the book, not least for the opportunity to augment the technical Part I with this Part II and the Part III that recounts the specific details of how HIV/AIDS theory went so wrong.

I’ve come to understand a great deal more since the book came out, among other things that perhaps the crucial turn on the wrong path came when Peter Duesberg’s rigorously researched and documented argument against HIV/AIDS theory went without comment, even in face of an editorial footnote promising such a response (36). Just as virologists ignored Duesberg’s substantive critiques, so epidemiologists ignored the informed critiques by Gordon Stewart (37) and immunologists ignored the fully documented questions raised by Robert Root-Bernstein (38); and just about everyone in mainstream fields ignored John Lauritsen’s insights into data analysis and his insider’s knowledge of interactions among gay men (39).

Peer review in HIV/AIDS “science” lapsed fatally from the beginning and has not yet recovered. Thus the only real safeguard of reliability was lost, it sometimes seems irretrievably.

1. “Are chemists not scientists?”—p. 19 ff. in reference 2.
2. Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992.
3. —— , A consumer’s guide to science punditry, Chapter 2 in Science Today: Problem or Crisis?, ed. R. Levinson & J. Thomas, Routledge, 1997.
4. —— , Two kinds of knowledge: maps and stories, Journal of Scientific Exploration 9 (1995) 257-75.
5. —— , The anti-science phenomenon in science studies, Science Studies 9 (1996) 34-49.
6. —— , Disciplines as cultures, Social Epistemology 4 (1990) 215-27.
7. —— , Barriers against interdisciplinarity: Implications for studies of Science, Technology, and Society (STS), Science, Technology, & Human Values 15 (1990) 105-19.
8. Chapters 11, 14, 15 (in particular) in reference 9.
9. Henry H. Bauer, Fatal Attractions: The Troubles with Science, Paraview, 2001.
10. Chapters 15, 16 in Henry H. Bauer (as ‘Josef Martin’), To Rise above Principle: The Memoirs of an Unreconstructed Dean, University of Illinois Press.
11. Chapters 4, 5 in reference 9.
12. Chapter 5 in reference 2.
13. Andrew Stark, Conflict of Interest in American Public Life, Harvard University Press, 2000.
14. Joel Kauffman, Malignant Medical Myths: Why Medical Treatment Causes 200,000 Deaths in the USA each Year, and How to Protect Yourself, Infinity Publishing, 2006.
15. John Abramson, Overdosed America: The Broken Promise of American Medicine, HarperCollins, 2004.
16. Marcia Angell, The Truth about the Drug Companies: How They Deceive Us and What To Do about It, Random House, 2004.
17. Jerry Avorn, Powerful Medicines: The Benefits, Risks, and Costs of Prescription Drugs, Knopf, 2004.
18. Merrill Goozner, The $800 Million Pill: The Truth behind the Cost of New Drugs, University of California Press, 2004.
19. Jerome Kassirer, On the Take: How Medicine’s Complicity with Big Business Can Endanger Your Health, Oxford University Press, 2004.
20. Sheldon Krimsky, Science in the Private Interest, Rowman and Littlefield, 2003.
21. David Willman, Los Angeles Times, 7 December 2003: “Stealth merger: Drug companies and government medical research”, p. A1; “Richard C. Eastman: A federal researcher who defended a client’s lethal drug”, p. A32; “John I. Gallin: A clinic chief’s desire to ‘learn about industry’”, p. A33; “Ronald N. Germain: A federal lab leader who made $1.4 million on the side”, p. A34; “Jeffrey M. Trent: A government accolade from a paid consultant”, p. A35; “Jeffrey Schlom: A cancer expert who aided studies using a drug wanted by a client”, p. A35.
22. Henry H. Bauer, “The fault lies in their stars, and not in them — when distinguished scientists lapse into pseudo-science”, Center for the Study of Science in Society, Virginia Tech, 8 February 1996; “The myth of the scientific method”, 3rd Annual Josephine L. Hopkins Foundation Workshop for Science Journalists, Cornell University, 26 June 1996.
23. Chapters 9, 10 in reference 9.
24. Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002.
25. Henry H. Bauer, The progress of science and implications for science studies and for science policy, Perspectives on Science 11 (#2, 2003) 236-78.
26. The mother of all “skeptical” groups is CSICOP, publisher of Skeptical Inquirer; see George P. Hansen, “CSICOP and the Skeptics: an overview”, Journal of the American Society for Psychical Research, 86 (#1, 1992) 19-63.
27. Chapters 1-3, 6, 7 in reference 9.
28. Henry H. Bauer, Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies, University of Illinois Press, 2001.
29. “Science as an institution”, pp. 303-6 in Henry H. Bauer, Beyond Velikovsky: The History of a Public Controversy, University of Illinois Press, 1984.
30. Pp. 110, 192, 195 in reference 31.
31. Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007.
32. Chapters 1, 4, 6, 7 in reference 2.
33. Chapter 12 in reference 9.
34. Chapter 13 in reference 9.
35. Henry H. Bauer, Science in the 21st century: knowledge monopolies and research cartels, Journal of Scientific Exploration 18 (2004) 643-60.
36. Peter H. Duesberg, Retroviruses as carcinogens and pathogens: expectations and reality, Cancer Research 47 (1987) 1199–220; Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation, Proceedings of the National Academy of Sciences, 86 (1989) 755–64.
37. Gordon T. Stewart, A paradigm under pressure: HIV-AIDS model owes popularity to wide-spread censorship. Index on Censorship (UK) 3 (1999).
38. Robert Root-Bernstein, Rethinking AIDS—The Tragic Cost of Premature Consensus, Free Press, 1993.
39. John Lauritsen, The AIDS War: Propaganda, Profiteering and Genocide from the Medical-Industrial Complex, ASKLEPIOS, 1993. ISBN 0–943742–08–0.

3 Responses to “Science Studies 101: Why is HIV/AIDS “science” so unreliable?”

  1. Martin said

    That was an excellent posting. One of the items you mentioned was that of incompetent scientists. I would guess that would seem odd to the lay person. The lay person is usually in awe of the kind of training scientists require in order to pursue careers as scientists. The curricula, especially in the hard sciences like physics, mathematics, chemistry, etc., are especially rigorous and would be overwhelming to the lay person who gets freaked out by an equation like y = mx + b. Higher math like calculus, differential equations, algebraic topology, and statistics that may be required of scientists just to become scientists is too difficult even to explain to the lay person. I remember how impressed I was when, at 8 years old, I saw the movie “The Day the Earth Stood Still”: Klaatu’s character, played by Michael Rennie, looked into Professor Barnhardt’s (played by Sam Jaffe) office, saw what appeared to me very impressive mathematical symbols that I didn’t understand (it was the 3-body problem in celestial mechanics), and solved the problem like child’s play.
    But anyway, getting back to incompetent scientists: the worker bees are the ones who foul experiments through poorly planned methodology, or just poor methodology, bad analysis, preconceived ideas, dropping data that didn’t agree with how they wanted an experiment to turn out, etc. In AIDS research, they get rid of scientists who ask the wrong questions — one can argue to the ends of the earth about how many angels can dance on the head of a pin, but one can’t question the existence of angels.

  2. adrian said

    Good article.

    Ironically, what makes science science is the social process of peer review:
    “…interaction among practitioners as they critique one another’s claims, seek to build on them,…”

    I do hope you are familiar with Steven Epstein’s, “Impure Science”.

