HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Scientific illiteracy, the media, science pundits, governments, and HIV/AIDS

Posted by Henry Bauer on 2009/01/15

“HIV/AIDS” is one of those “hard cases” that illustrate how disastrous scientific illiteracy can be — illiteracy so widespread among science journalists (and even more so among general journalists), among self-appointed science pundits, among the science advisors to governments, among policy makers, and, last but far from least, within the scientific community itself.

Scientists often like to say that no one can understand science without actually having done some. There’s important truth to that. However, it’s also importantly true that you can’t understand science if all you know about it comes from having done some science. Working scientists learn a great deal about the leaves, roots, warts and microscopic components of the particular tree they happen to get fascinated by, but there’s nothing about doing science that automatically brings insight into the whole tree, let alone the forest of scientific activity, let alone the wider societal context with which that forest interacts.

A growing sense of the need for a comprehensive and contextual understanding of the proper place of science and technology in a modern society stimulated the emergence, during the last half century or so, of what has become the almost-established yet little-known field of “science studies” or “science and technology studies” (STS) — almost unknown outside academe, and within academe about as little known, understood, or appreciated as are, say, departments of religion or theology or religious studies. Two streams of endeavor are at the foundations of STS. One came from technologists, scientists, political scientists, and others concerned that inventions like the atomic bomb, with incalculable potential impact on humanity, could be handled sensibly only by a polity and governance that understands science and technology in all their aspects and implications. The second stream emerged from a recognition among philosophers of science, historians of science, and sociologists of science that their disciplinary insights were individually inadequate to grasp the totality of scientific activity, scientific knowledge, and scientific theories. Thus STS is an inescapably interdisciplinary endeavor, fraught with all the extreme difficulties that attend attempts to bring coherence to a multidisciplinary collection of biases, cultures, and ideologies. Still, despite the lack of a consensual governing paradigm within STS, a few insights are shared across the spectrum of differing approaches, for example:

1. Science and technology are not the same thing. Advances in science will not necessarily lead to important technology.

2. Future knowledge is unforeseeable; future science is unforeseeable. It is paradoxical and counterproductive to aim to support potential breakthroughs by awarding funds to ‘projects’ assessed in the light of the current conventional wisdom.

3. Specific technologies can sometimes be foreseen, but the implications of technology are unforeseeable; and it is virtually certain that any new technology will have unforeseen, unforeseeable, and unwished-for consequences.

4. Because living systems, including human societies, harbor complex interrelationships, even apparently simple individual factors have a multitude of consequences. There is no such feasible thing as ‘only’ wiping out mosquitoes, for example — other living species will be affected; nor can one ‘only’ clean up the environment — the standard of living measured in conventional economic terms will be affected; nor will there be a miracle drug to lower blood cholesterol and leave the rest of a human organism working as before; nor will it make sense to transplant organs until the immune system is understood rather than seen as an enemy to be immobilized.

5. Some of the most worrisome social questions cannot be answered unequivocally. The best available evidence in social matters will always be statistical, and statistical inferences always have a residual uncertainty. Above all: correlations do not signify causation.

6. Science is fallible — individual psychology, social forces, and historical influences affect the direction and performance of science. Nevertheless, science is enormously reliable under normal circumstances.

7. The distinction is vital between frontier science, where much is uncertain, and textbook science, where relatively little is uncertain (within the boundary conditions under which the knowledge was gained). Humanists and social scientists tend to understand the fallibility and contingency of science at the frontier, but tend also to have little if any feel for the enormous reliability of thoroughly tested science; by contrast, engineers and scientists know the enormous reliability of what’s in their texts and reference works without realizing that the same reliability does not pertain to recent discoveries, let alone to extrapolations from them. (For a survey of viewpoints within STS, see A Consumer’s Guide to Science Punditry.)

8. Science is a social activity. As such, it is inherently conservative. Breakthroughs occur despite scientists, not because of them: they occur when reality refuses to have itself molded to current theories. At the same time, the reliability of science depends on the conservatism of science.
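Point 5’s caution that correlations do not signify causation is easy to demonstrate concretely. The following sketch is an illustration of my own devising, not drawn from any real dataset: a hidden confounder drives two variables that have no causal influence on each other whatsoever, yet they come out strongly correlated.

```python
# Hypothetical illustration of point 5: strong correlation without causation.
# A hidden confounder C drives both X and Y; neither causes the other.
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
confounder = [random.gauss(0, 1) for _ in range(5000)]
x = [c + random.gauss(0, 0.3) for c in confounder]  # X depends only on C
y = [c + random.gauss(0, 0.3) for c in confounder]  # Y depends only on C

# X and Y never interact, yet their correlation is close to 1.
print(f"r(X, Y) = {pearson_r(x, y):.2f}")
```

An observer who saw only X and Y would find an association tight enough to tempt a causal story in either direction; the causation resides entirely in the unmeasured third factor.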

It should be evident that at least some of this understanding contradicts directly what “everyone knows” about science — “everyone” including people who imagine themselves competent to hold forth about matters scientific.

Perhaps most pertinent to HIV/AIDS is the little-recognized distinction between frontier science and textbook science. Everything in HIV/AIDS theory is as uncertain and fallible as anything that has been newly observed in a laboratory or in a doctor’s office. “AIDS” was without precedent, and even the now-unquestioned interpretation that it represents a general “immune deficiency” was never established by differential diagnosis, let alone by continued assessment of evidence. Understanding of the immune system at the cellular level was barely beginning in the early 1980s, and the now-unquestioned interpretation that a deficiency of CD4+ cells is crucial has never been established by continued assessment of evidence. Retrovirology was a new specialty. “HIV” is credited with a whole range of unique characteristics for which independent evidence has never been produced. Antiretroviral drugs are introduced with the barest nod to testing their safety and efficacy, and the only valid approach — blinded clinical trials against placebo — is not used.

However tentative the basis for much of HIV/AIDS activity remains, the practice of treating one’s results as definite until proven otherwise is not peculiar to HIV/AIDS; it is in the nature of scientific activity, as is the practice of treating new publications by others as to-be-relied-upon until proven otherwise. In science, the kudos go to those who push ahead, not to the skeptics who clean up behind the ground-breakers — who question and quibble and try to prove others wrong in the endeavor to bring genuine reliability to the whole enterprise. What happened with HIV/AIDS is not, on the whole, particularly atypical in principle; it stands out “only” in magnitude and in the terrible harm done to many people. All the incentives in science point to going with the herd, and for every maverick who is responsible for an eventual scientific revolution there are untold would-be mavericks whose careers get nowhere. Most scientists, as in most other professions, choose to follow a low-risk path that promises a respectably successful career. All budding researchers know that the grants go to those who base their proposals on the prevailing mainstream consensus. Whistleblowers are no more welcome in science than elsewhere. As Sharon Begley noted in a recent article, even when scientists write about having changed their minds, it is rare that they changed them significantly — the typical “changes” are modifications that overturn no apple-carts. That overall approach, the routine functioning of the scientific system, has served science and society well in most cases, and it is whistling in the wind to suggest otherwise. STS understands that the big advances come from the headstrong, ambitious, creative bulls-in-the-china-shops among researchers, not from the scholarly, carefully appraising, skeptical scientists who think before they leap.
Science is not done by “the scientific method”, even if it might seem that way in long, superficial hindsight that overlooks all the trial-and-error missteps along the way — see Scientific Literacy and the Myth of the Scientific Method.

The basic problem with HIV/AIDS is that the scientific system that works so well on routine tasks is wide open to catastrophe when something quite new crops up. It’s somewhat analogous to the trade-offs between freedom and security in a democratic society. To ensure that no terrorist events could ever happen, society would have to be as controlled as in the Soviet Union, Nazi Germany, or the dictatorships envisaged by George Orwell; but to allow complete freedom to all would mean little or no safety for anyone.

So one cannot blame the scientific system as such for the tragic mistake of HIV/AIDS and thereupon conclude that the system needs to be changed in some fundamental way. What went wrong is owing only in part to the virologists and their cohorts and the official institutions. There have certainly been rather spectacular displays of incompetence, sloppiness, apparently willful ignoring of evidence, and the like, on the part of a few identifiable individuals. Such institutions as NIH and CDC have displayed bureaucratic deficiencies much more than accountability, competence, efficiency, or due diligence in exercising oversight. Nevertheless, I think a great part of the blame can justifiably be laid at the feet of hordes of ignorant science pundits and science administrators. If there’s one thing that those who manage science and grants should know, the very same thing that every science journalist and science writer should know, it’s the difference between relatively reliable textbook science and utterly unreliable frontier science. REAL SCIENCE ISN’T NEWS.  A fundamental problem is that reporting science in a responsible way is incompatible with the media concentration on what’s new and remarkable. No “scientific breakthrough” announced by an individual researcher, a laboratory, an official agency, or a corporation should be accepted with more trust than should be granted to the promises made by campaigning politicians. Even when an announcement is made in relatively good faith, with subjective belief in its essential accuracy, it’s at least partly self-serving and, most important, not informed by the understanding that no new “discovery” can be relied on until it’s been re-discovered and re-re-discovered and has served to guide, successfully, a certain amount of further research that depends on the validity of that claimed discovery.

That’s not difficult to understand, and neither are the reasons for it. Indeed, every science pundit is likely to hold forth at length about the necessity of peer review. Yet that is lip service only, not applied in practice. Routinely, press releases from drug companies, directors of federal laboratories, and individual researchers and laboratories are treated as reliable and worthy of dissemination to the general public without further ado. Press releases from politicians and political parties are treated with well-deserved skepticism, but not anything that has to do with “science” or “medicine”; in those connections, our media conscientiously swallow and regurgitate what in better days most people would have recognized immediately as snake oil — say, a vaccine to safeguard against cervical cancer, peddled on the basis that a small number of strains of a particular virus are often associated with cervical cancers. Where is the understanding that association doesn’t prove causation? Where is the skepticism about whether an association with only a small percentage of cases makes causation even a plausible interpretation? Where is the collective memory of the “gene for breast cancer”, which is associated with only a small percentage of breast cancers but whose detection makes women contemplate disfiguring major surgery as a prophylactic?
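The arithmetic behind that skepticism is elementary Bayes’ rule. Here is a hedged sketch — every number below is invented for illustration and describes no actual virus, gene, or cancer — of how little a genuine association can mean once base rates are taken into account:

```python
# Hypothetical base-rate sketch: an association that is real but weak
# evidence of causation. All figures are invented for illustration.
def p_disease_given_marker(prevalence, p_marker_if_disease, p_marker_if_healthy):
    """Bayes' rule: P(disease | marker present)."""
    p_marker = (p_marker_if_disease * prevalence
                + p_marker_if_healthy * (1 - prevalence))
    return p_marker_if_disease * prevalence / p_marker

# Suppose a disease with 1% lifetime prevalence and a marker found in
# 30% of cases but also in 5% of healthy people -- a genuine association,
# six times more common among cases than among the healthy.
risk = p_disease_given_marker(0.01, 0.30, 0.05)
print(f"P(disease | marker) = {risk:.1%}")  # prints "P(disease | marker) = 5.7%"
```

With these made-up numbers, fewer than 6% of marker-carriers ever develop the disease: the overwhelming majority of people frightened by a positive finding would never have fallen ill, which is exactly the gap between “associated with” and “causes”.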

Illiteracy about the nature of scientific activity is a clear and present danger in this self-styled “scientific” and “modern” age, and innumerable “science bloggers” and science pundits illustrate that daily in their uninformed, herd-like comments about HIV/AIDS. Scientific literacy isn’t about knowing what a molecule is, or a retrovirus; it’s about realizing that science isn’t done by a “scientific method”; it’s about knowing that science can’t be guaranteed to deliver what it promises any more than a politician can; it’s about realizing that scientists are super-specialists blinkered to anything outside their immediate interest, and that the best people to consult about science policy and the assessment of a scientific consensus are historians of science, sociologists of science, ethicists, and philosophers of science, especially those who have also done some science themselves at one time or another. Presidential science advisors and congressional advisors about science and technology should be drawn to a major extent from the young community of STS — as was indeed the case with the congressional Office of Technology Assessment, which was disbanded out of nothing short of political spite after partisan disputes over access to it.

Posted in experts, HIV skepticism, uncritical media

Science Studies 101: Why is HIV/AIDS “science” so unreliable?

Posted by Henry Bauer on 2008/07/18

Recent comments and e-mails reminded me of my career change, about 3 decades ago, from chemist to science-studies scholar. I had begun to wonder: What is it exactly that has made science so strikingly reliable?

(This is a long post. If you prefer to read it as a pdf—of course without hyperlinks to some of the on-line references—here it is: sciencestudies101).

Over the years, teaching chemistry and publishing research in electrochemistry, I had become increasingly aware that research practices and practitioners differ significantly from the ideal images that had attracted me (1). My education, like that of most scientists, had been strictly technical: chemistry, physics, math, biology, statistics. Recreational reading had added some history of chemistry, which also focused on the technical aspects—progress, discoveries, breakthroughs. We were not exposed to history, philosophy, or sociology of science in any meaningful way; nor are most people who study science even nowadays.

Mid-20th century, that lack of exposure to the context and significance of scientific activity was partly a matter of Zeitgeist, I recognize in hindsight. Philosophy of science was rather in flux. History of science as a whole was not so different in approach from the history of chemistry I had read—and perhaps not so different from how history in general was being taught: as milestones of achievement made by great individuals (largely, of course, men). Sociology of science had been founded only in the late 1930s. It was the 1960s before historians of science and philosophers of science began to engage seriously with one another, an engagement illustrated by Thomas Kuhn’s “The Structure of Scientific Revolutions”. Sociologists of science, too, began to engage with the historians and philosophers of science.

Following World War II, some scientists and engineers were looking for ways to make their knowledge an effective influence in public policy; emblematic of this quest was the Bulletin of the Atomic Scientists. Starting about 1960, a variety of free-standing academic courses, a few research centers, and some organized academic programs were founded under the rubric of “science and society”. These science-based ventures and the history-philosophy-based ones soon recognized each other as concerned with the same issues, yet even after half a century, no truly integrated multi-disciplinary approach to understanding scientific activity has matured into an overall consensus (3). There persists a distinct internal division between those whose backgrounds are in the practice of science and technology and those whose backgrounds are in the humanities and social sciences (3, 4, 5). But despite differences over modes of interpretation and over what is most worth looking into, a body of agreed facts about scientific activity has accumulated. Most important for the present purpose is that many of those facts about science are at variance with commonplace conventional wisdom. Misconceptions about scientific activity are pervasive, not least among practicing scientists and medical practitioners.

I was lucky enough to participate in the early days of one of the first programs in the world in what has become known as “science and technology studies” (STS). At Virginia Tech, we began with physicists and chemists, economists and sociologists, mathematicians, statisticians, political scientists, and others as well, telling one another how we thought about science. We scientists learned to be less sure that our research reveals unchanging, objective, universal facts about the real world. The humanists and social scientists learned that the physical and biological sciences uncover facts about the real world that are more trustworthy than the knowledge accessible in such disciplines as sociology. We learned also how different are the viewpoints and intellectual values to which we are schooled in the various disciplines: in a sense, the differences are not so much intellectual as cultural ones (6, 7, 8). I learned even more about such cultural differences between academic fields through having responsibility for the variety of disciplines embraced by a college of Arts & Sciences (10).

A salient fact is that “the scientific method” is more myth than reality (2, 11). What makes science relatively reliable is not any protocol or procedure that an individual scientist can follow, it is the interaction among practitioners as they critique one another’s claims, seek to build on them, and modify them, under constraints imposed by the concrete results of observations and experiments. Because individual biases predispose us to interpret the results of those observations and experiments in congenial ways, the chief safeguard of relative objectivity and reliability is honest, substantive peer-review by colleagues and competitors. That’s why I was grateful to “Fulano de Tal” when he pointed to errors in one of my posts: we rethinkers do not have the benefit of the organized peer-reviewing that is available—ideally speaking—in mainstream discourse [see Acknowledgment in More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death, 12 July 2008].

Because proper peer-review is so vital, conflicts of interest can be ruinously damaging (12, 13). Recommendations from the Food and Drug Administration or the Centers for Disease Control and Prevention are too often worthless—worse, they are sometimes positively dangerous (14)—because in latter days the advisory panels are being filled overwhelmingly with consultants for drug companies. That’s not generally enough appreciated, despite a large and authoritative literature on the subject (15-20).

Lacking familiarity with the findings of science studies, scientists are likely to be disastrous as administrators. It was a Nobel-Prize winner who relaxed the rules on conflicts of interest when he headed the National Institutes of Health, with quite predictably deplorable consequences (21). There have been many fine administrators of technical enterprises, but few had been themselves groundbreaking discoverers. To convince the scientific community of something that’s remarkable and novel, a scientist must be single-minded, captivated by the idea and willing to push it to the limit, against all demurrers—very bad qualities in an administrator; the latter ought to be a good listener, an adept engineer of compromises, an adroit manager able to stick to principles with an iron hand well masked by a velvet glove.

Those who have the egotism and dogmatic self-confidence to break new ground also need luck to be on their side, for—as Jack (I. J.) Good likes to point out—geniuses are cranks who happen to be right, and cranks are geniuses who happen to be wrong: in personal characteristics they are identical twins (22, 23). This role of luck has important implications: it’s why Nobel-Prize winners so rarely have comparable repeat successes, and why they should not be automatically regarded as the most insightful spokespeople on all and sundry matters.

HIV/AIDS vigilantes like to denigrate rethinkers for not having had their hands dirtied by direct research on the matters they discuss. Historians and sociologists of science, however, know that some of the most acclaimed breakthroughs were made by disciplinary outsiders, who were not blinkered and blinded by the contemporary paradigm (24, 25).

Self-styled “skeptics” (26) like to denigrate heterodox views as “pseudo-science” just because those views are heterodox, ignorant of the fact that there are no general criteria available by which to judge whether something is “scientific”; and they tend to be also ignorant of the fact that “scientific” cannot be translated as “true” (2, 27, 28).

Most relevant to the question of the “truth” of scientific knowledge is that science and scientists tend to occupy something of a pedestal of high prestige in contemporary society; perhaps because when we think of “science” we also tend to think “Einstein” and other such celebrated innovators. But nowadays there are a great many run-of-the-mill scientists, and even considerably incompetent ones: “Science, like the military, has its hordes of privates and non-coms, as well as its few heroes (from all ranks) and its few field marshals” (29)—which serves to explain, perhaps, some of the examples of sheer incompetence displayed in HIV/AIDS matters (30). Pertinent here is the fact that much medical research is carried out by people trained as doctors; training for physicians’ work is by no means training for research.


Those are some of the ways in which the commonplace conventional wisdom is wrong about science, but there are plenty more (24, 25, 32, 33). Those misconceptions play an important role in the hold that HIV/AIDS theory continues to have on practitioners, commentators, and observers, and they need to be pointed out in answer to the natural question often put to rethinkers: “But how could everyone be so wrong for so long?”

That’s why Part II of my book (31) has the title, “Lessons from History”, with chapters on “Missteps in modern medical science”, “How science progresses”, and “Research cartels and knowledge monopolies”. (About research cartels and knowledge monopolies, see also 34, 35). I’m enormously grateful to Virginia Tobiassen, the fine editor who helped me with the book, not least for the opportunity to augment the technical Part I with this Part II and the Part III that recounts the specific details of how HIV/AIDS theory went so wrong.

I’ve come to understand a great deal more since the book came out, among other things that perhaps the crucial turn on the wrong path came when Peter Duesberg’s rigorously researched and documented argument against HIV/AIDS theory went without comment, even in face of an editorial footnote promising such a response (36). Just as virologists ignored Duesberg’s substantive critiques, so epidemiologists ignored the informed critiques by Gordon Stewart (37) and immunologists ignored the fully documented questions raised by Robert Root-Bernstein (38); and just about everyone in mainstream fields ignored John Lauritsen’s insights into data analysis and his insider’s knowledge of interactions among gay men (39).

Peer review in HIV/AIDS “science” lapsed fatally from the beginning and has not yet recovered. Thus the only real safeguard of reliability was lost — irretrievably, it sometimes seems.

1. “Are chemists not scientists?”—p. 19 ff. in reference 2.
2. Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992.
3. —— , A consumer’s guide to science punditry, Chapter 2 in Science Today: Problem or Crisis?, ed. R. Levinson & J. Thomas, Routledge, 1997.
4. —— , Two kinds of knowledge: maps and stories, Journal of Scientific Exploration 9 (1995) 257-75.
5. —— , The anti-science phenomenon in science studies, Science Studies 9 (1996) 34-49.
6. —— , Disciplines as cultures, Social Epistemology 4 (1990) 215-27.
7. —— , Barriers against interdisciplinarity: Implications for studies of Science, Technology, and Society (STS), Science, Technology, & Human Values 15 (1990) 105-19.
8. Chapters 11, 14, 15 (in particular) in reference 9.
9. Henry H. Bauer, Fatal Attractions: The Troubles with Science, Paraview, 2001.
10. Chapters 15, 16 in Henry H. Bauer (as ‘Josef Martin’), To Rise above Principle: The Memoirs of an Unreconstructed Dean, University of Illinois Press.
11. Chapters 4, 5 in reference 9.
12. Chapter 5 in reference 2.
13. Andrew Stark, Conflict of Interest in American Public Life, Harvard University Press, 2000.
14. Joel Kauffman, Malignant Medical Myths: Why Medical Treatment Causes 200,000 Deaths in the USA each Year, and How to Protect Yourself, Infinity Publishing, 2006.
15. John Abramson, Overdosed America: The Broken Promise of American Medicine, HarperCollins, 2004.
16. Marcia Angell, The Truth about the Drug Companies: How They Deceive Us and What To Do about It, Random House, 2004.
17. Jerry Avorn, Powerful Medicines: The Benefits, Risks, and Costs of Prescription Drugs, Knopf, 2004.
18. Merrill Goozner, The $800 Million Pill: The Truth behind the Cost of New Drugs, University of California Press, 2004.
19. Jerome Kassirer, On the Take: How Medicine’s Complicity with Big Business Can Endanger Your Health, Oxford University Press, 2004.
20. Sheldon Krimsky, Science in the Private Interest, Rowman and Littlefield, 2003.
21. David Willman, Los Angeles Times, 7 December 2003: “Stealth merger: Drug companies and government medical research”, p. A1; “Richard C. Eastman: A federal researcher who defended a client’s lethal drug”, p. A32; “John I. Gallin: A clinic chief’s desire to ‘learn about industry’”, p. A33; “Ronald N. Germain: A federal lab leader who made $1.4 million on the side”, p. A34; “Jeffrey M. Trent: A government accolade from a paid consultant”, p. A35; “Jeffrey Schlom: A cancer expert who aided studies using a drug wanted by a client”, p. A35.
22. Henry H. Bauer, “The fault lies in their stars, and not in them — when distinguished scientists lapse into pseudo-science”, Center for the Study of Science in Society, Virginia Tech, 8 February 1996; “The myth of the scientific method”, 3rd Annual Josephine L. Hopkins Foundation Workshop for Science Journalists, Cornell University, 26 June 1996.
23. Chapters 9, 10 in reference 9.
24. Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002.
25. Henry H. Bauer, The progress of science and implications for science studies and for science policy, Perspectives on Science 11 (#2, 2003) 236-78.
26. The mother of all “skeptical” groups is CSICOP, publisher of Skeptical Inquirer; see George P. Hansen, “CSICOP and the Skeptics: an overview”, Journal of the American Society for Psychical Research, 86 (#1, 1992) 19-63.
27. Chapters 1-3, 6, 7 in reference 9.
28. Henry H. Bauer, Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies, University of Illinois Press, 2001.
29. “Science as an institution”, pp. 303-6 in Henry H. Bauer, Beyond Velikovsky: The History of a Public Controversy, University of Illinois Press, 1984.
30. Pp. 110, 192, 195 in reference 31.
31. Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007.
32. Chapters 1, 4, 6, 7 in reference 2.
33. Chapter 12 in reference 9.
34. Chapter 13 in reference 9.
35. Henry H. Bauer, Science in the 21st century: knowledge monopolies and research cartels, Journal of Scientific Exploration 18 (2004) 643-60.
36. Peter H. Duesberg, Retroviruses as carcinogens and pathogens: expectations and reality, Cancer Research 47 (1987) 1199–220; Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation, Proceedings of the National Academy of Sciences, 86 (1989) 755–64.
37. Gordon T. Stewart, A paradigm under pressure: HIV-AIDS model owes popularity to wide-spread censorship. Index on Censorship (UK) 3 (1999).
38. Robert Root-Bernstein, Rethinking AIDS—The Tragic Cost of Premature Consensus, Free Press, 1993.
39. John Lauritsen, The AIDS War: Propaganda, Profiteering and Genocide from the Medical-Industrial Complex, 1993, ASKLEPIOS. ISBN 0–943742–08–0.

Posted in experts, HIV does not cause AIDS, HIV skepticism
