HIV/AIDS Skepticism

Pointing to evidence that HIV is not the necessary and sufficient cause of AIDS

Posts Tagged ‘Andrew Stark’

Institutionalizing conflicts of interest

Posted by Henry Bauer on 2008/12/02

A fellow scientist of my generation likes to describe us as “dinosaurs”, and periodically accuses me of naivety if I slip into suggesting that facts win out in the end or that scientific ideals and traditional ethics have not been completely abandoned.

I guess it’s true that I’ve written and continue to write as though there are people out there who share my disbelief at, for example, the brushing aside of conflicts of interest as only “apparent” (see “Consequences of misconduct in science”).  And there ARE people who share my attitude, call them naïve and unrealistic if you wish: there’s Sheldon Krimsky, Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? (Rowman & Littlefield, 2003); there’s Andrew Stark, Conflict of Interest in American Public Life (Harvard, 2000); there are Centers for Ethics, and periodicals devoted to ethics in research. Plenty of academics are aware of the sad fact that science and medicine, both research and patient care, have been pervasively infiltrated in a way that might even be called corrupting.

Researchers and administrators of research, however, seem oblivious. Well into the 1970s and even the 1980s, universities were at least trying to apply some brakes. We had to make formal application if we consulted more than half a day per week, and if our remuneration exceeded some modest amount. We were not permitted to run a business that was in any way connected with our academic responsibilities. We took for granted the burden of offering our professional advice as to the publishability of manuscripts or the qualifications of candidates for jobs or promotions. When we traveled to present invited seminars or to advise academic institutions, we didn’t expect honoraria in addition to having our expenses covered — and we felt unusually appreciated when we received honoraria equivalent to a few hours of our annual salary. We regarded it as an exceptional perk — comparing ourselves to so many other people — that our university salaries were paid on a 9-month or 10-month basis, permitting us to teach or do research for an extra 20% or so of annual remuneration. I recall being shocked, in the early 1980s, when professors of English were asking for remuneration for reading the book manuscripts of candidates for tenure.

What a different world it is, just a couple of decades later. A misguided Director of the National Institutes of Health dropped certain restrictions on outside income, with predictably disgusting consequences (David Willman, Los Angeles Times, 7 December 2003: “Stealth merger: Drug companies and government medical research”, p. A1; “Richard C. Eastman: A federal researcher who defended a client’s lethal drug”, p. A32; “John I. Gallin: A clinic chief’s desire to ‘learn about industry’”, p. A33; “Ronald N. Germain: A federal lab leader who made $1.4 million on the side”, p. A34; “Jeffrey M. Trent: A government accolade from a paid consultant”, p. A35; “Jeffrey Schlom: A cancer expert who aided studies using a drug wanted by a client”, p. A35.)

Just as with political lobbying, we Americans seem able to euphemize, ignore, and even defend practices that in other lands we would be quick to recognize as plain corruption. What set off this tirade was a news item in the Chronicle of Higher Education, 20 June 2008, p. 13: “To lure top scientists, NIH raises pay for some peer reviewers”, by Jeffrey Brainard. Here are a few extracts:

“The National Institutes of Health plans a major increase in the money it provides to long-serving peer reviewers . . . . Some will receive $250,000 for six years . . . . Under the current terms of $200 per day, such scientists would net only about $6000 after six years”.
[Peanuts! Coffee money! But, after all, this is in addition to their salaries wherever they happen to be working; their pay isn’t cut just because they’re away from the office or the lab. And that $200 per day is in addition to expenses, of course, for travel, food, and accommodation; expenses that can be, and often are, padded a little.]

“But the largesse . . . . would benefit only a few hundred of the several thousand scientists who help evaluate grants of the institutes. . . . Traditionally, many scientists have willingly reviewed applications, though the fees they have been paid fell well short of the value of the time commitment required: at the NIH, 40 to 80 hours of preparation for each day-and-a-half meeting” — Right. I’ve known quite a few people who have served in this way. (Serving as an academic dean teaches quite a lot about human nature.) Those who spent anything like that amount of preparatory time did it because of their sense of responsibility and don’t need extra money, while those who expect the money and will not otherwise serve will also not spend that amount of time on it.
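As a quick sanity check on those numbers, here is a minimal back-of-envelope sketch in Python; the 60 hours of preparation per meeting (the midpoint of the quoted 40-to-80-hour range) and the 8-hour meeting day are my own assumptions, not figures from the Chronicle article.

```python
# Back-of-envelope check of the figures quoted above.
# Assumed values (not from the Chronicle article) are marked as such.
DAILY_FEE = 200              # current honorarium, dollars per review day
NET_AFTER_SIX_YEARS = 6_000  # figure quoted in the Chronicle article
NEW_SUPPLEMENT = 250_000     # proposed payment over six years

review_days = NET_AFTER_SIX_YEARS / DAILY_FEE   # implied paid review days over six years: 30
meetings = review_days / 1.5                    # day-and-a-half meetings: about 20
prep_hours = meetings * 60                      # assumption: ~60 hours of preparation per meeting
meeting_hours = review_days * 8                 # assumption: an 8-hour working day at the meeting
total_hours = prep_hours + meeting_hours

print(f"Paid review days over six years: {review_days:.0f}")
print(f"Implied effective rate at $200/day: ${NET_AFTER_SIX_YEARS / total_hours:.2f} per hour")
print(f"New supplement relative to old honoraria: about {NEW_SUPPLEMENT / NET_AFTER_SIX_YEARS:.0f} times as much")
```

On those assumptions the old honoraria work out to roughly four dollars per hour of reviewer time, and the proposed supplement to roughly forty times the old six-year total.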

“’In the end, peer review is only as good as the quality of the people doing it,’ said Elias A. Zerhouni, the NIH’s director”.
Yes, indeed. We need honest, conscientious people who do these things because their profession is a vocation, a calling, not just a way to earn a living, and certainly not a way to acquire wealth.
[Zerhouni continued,] “I think you get what you pay for”.
And there you have it.
— Want medical care? The more you pay, the better care you’ll get. But didn’t we use to think that was a dreadful situation, when behind the Iron Curtain one had to give bribes and tips to get proper care?
— Want education for yourself or your children? The more you can pay, the better education they will get. But isn’t there some sort of consensus still that every American child should get every educational opportunity they can benefit from?
— Want honest evaluation of research? You’d better pay for it, especially to people who don’t need the money because they earn so much already.
It reminds me of the philosopher (I don’t recall whether it was Mort Sahl or Bob Newhart or Tom Lehrer, certainly one of their ilk) responding to a question from students: “And, of course, if you raise my pay, I’ll even give them correct answers”.

But it’s not all gravy, we’re told. “The $250,000 compensation [lovely choice of word] will be awarded as an ‘administrative supplement’ to existing research grants”, so the recipients can use it at will: “They will keep only some of the money, as salary — the underlying grants also typically finance research equipment and laboratory assistants”.

And of course this administrative supplement is in addition to the $200 daily honoraria.

“NIH leaders rejected, though, a controversial proposal by a peer-review task force that would have capped at five the number of research grants that any one scientist could hold, in order to spread dollars among more grant applicants, including younger ones”.
[An earlier piece in the Chronicle had mentioned that scientists are on average 42 years of age before they get their first NIH grant. Got to keep those young Turks in their place, kowtowing as “postdoctoral fellows” to us experienced gurus; otherwise, who could we get to actually do the work in our labs?]

Robert Merton, founding sociologist of science, long ago identified the “Matthew Effect”:

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
—Matthew 25:29, King James Version.

It’s not new, in science, it’s just become as egregious as Credit Default Swaps and other scams. I usually resist the notion that there exists a self-interested, self-serving Establishment, be it in government or in education or in research. But facts are stubborn things, as they say, and sometimes my naivety bows to them. Dr. Zerhouni and the other “NIH leaders” have certainly provided us with some very stubborn, unpalatable facts.

Posted in experts, Funds for HIV/AIDS, uncritical media

Science Studies 101: Why is HIV/AIDS “science” so unreliable?

Posted by Henry Bauer on 2008/07/18

Recent comments and e-mails reminded me of my career change, about 3 decades ago, from chemist to science-studies scholar. I had begun to wonder: What is it exactly that has made science so strikingly reliable?

(This is a long post. If you prefer to read it as a pdf—of course without hyperlinks to some of the on-line references—here it is: sciencestudies101).

Over the years, teaching chemistry and publishing research in electrochemistry, I had become increasingly aware that research practices and practitioners differ significantly from the ideal images that had attracted me (1). My education, like that of most scientists, had been strictly technical: chemistry, physics, math, biology, statistics. Recreational reading had added some history of chemistry, which also focused on the technical aspects—progress, discoveries, breakthroughs. We were not exposed to history, philosophy, or sociology of science in any meaningful way; nor are most people who study science even nowadays.

Mid-20th century, that lack of exposure to the context and significance of scientific activity was partly a matter of Zeitgeist, I recognize in hindsight. Philosophy of science was rather in flux. History of science as a whole was not so different in approach from the history of chemistry I had read—and perhaps not so different from how history in general was being taught: as milestones of achievement made by great individuals (largely, of course, men). Sociology of science had been founded only in the late 1930s. It was the 1960s before historians of science and philosophers of science began to engage seriously with one another, an engagement illustrated by Thomas Kuhn’s “The Structure of Scientific Revolutions”. Sociologists of science, too, began to engage with the historians and philosophers of science.

Following World War II, some scientists and engineers were looking for ways to make their knowledge an effective influence in public policy. Emblematic of this quest was the Bulletin of the Atomic Scientists. Starting about 1960, a variety of free-standing academic courses, a few research centers, and some organized academic programs were founded under the rubric of “science and society”. These science-based ventures and the history-philosophy-based ones soon recognized each other as concerned with the same issues, yet even after a half-century, no truly integrated multi-disciplinary approach to understanding scientific activity has matured into an overall consensus (3). There persists a distinct internal division between those whose backgrounds are in the practice of science and technology and those whose backgrounds are in the humanities and social sciences (3, 4, 5). But despite differences over modes of interpretation and what is most worth looking into, there has accumulated a body of agreed facts about scientific activity. Most important for the present purpose is that many of those facts about science are at variance with commonplace conventional wisdom. Misconceptions about scientific activity are pervasive, not least among practicing scientists and medical practitioners.

I was lucky enough to participate in the early days of one of the first programs in the world in what has become known as “science and technology studies” (STS). At Virginia Tech, we began with physicists and chemists, economists and sociologists, mathematicians, statisticians, political scientists, and others as well, telling one another how we thought about science. We scientists learned to be less sure that our research reveals unchanging, objective, universal facts about the real world. The humanists and social scientists learned that the physical and biological sciences uncover facts about the real world that are more trustworthy than the knowledge accessible in such disciplines as sociology. We learned also how different are the viewpoints and intellectual values to which we are schooled in the various disciplines: in a sense, the differences are not so much intellectual as cultural ones (6, 7, 8). I learned even more about such cultural differences between academic fields through having responsibility for the variety of disciplines embraced by a college of Arts & Sciences (10).

A salient fact is that “the scientific method” is more myth than reality (2, 11). What makes science relatively reliable is not any protocol or procedure that an individual scientist can follow, it is the interaction among practitioners as they critique one another’s claims, seek to build on them, and modify them, under constraints imposed by the concrete results of observations and experiments. Because individual biases predispose us to interpret the results of those observations and experiments in congenial ways, the chief safeguard of relative objectivity and reliability is honest, substantive peer-review by colleagues and competitors. That’s why I was grateful to “Fulano de Tal” when he pointed to errors in one of my posts: we rethinkers do not have the benefit of the organized peer-reviewing that is available—ideally speaking—in mainstream discourse [see Acknowledgment in More HIV/AIDS GIGO (garbage in and out): “HIV” and risk of death, 12 July 2008].

Because proper peer-review is so vital, conflicts of interest can be ruinously damaging (12, 13). Recommendations from the Food and Drug Administration or the Centers for Disease Control and Prevention are too often worthless—worse, they are sometimes positively dangerous (14)—because in latter days the advisory panels are being filled overwhelmingly with consultants for drug companies. That’s not generally enough appreciated, despite a large and authoritative literature on the subject (15-20).

Lacking familiarity with the findings of science studies, scientists are likely to be disastrous as administrators. It was a Nobel-Prize winner who relaxed the rules on conflicts of interest when he headed the National Institutes of Health, with quite predictably deplorable consequences (21). There have been many fine administrators of technical enterprises, but few had been themselves groundbreaking discoverers. To convince the scientific community of something that’s remarkable and novel, a scientist must be single-minded, captivated by the idea and willing to push it to the limit, against all demurrers—very bad qualities in an administrator; the latter ought to be a good listener, an adept engineer of compromises, an adroit manager able to stick to principles with an iron hand well masked by a velvet glove.

Those who have the egotism and dogmatic self-confidence to break new ground also need luck to be on their side, for—as Jack (I. J.) Good likes to point out—geniuses are cranks who happen to be right, and cranks are geniuses who happen to be wrong: in personal characteristics they are identical twins (22, 23). This role of luck has important implications: it’s why Nobel-Prize winners so rarely have comparable repeat successes, and why they should not be automatically regarded as the most insightful spokespeople on all and sundry matters.

HIV/AIDS vigilantes like to denigrate rethinkers for not having had their hands dirtied by direct research on the matters they discuss. Historians and sociologists of science, however, know that some of the most acclaimed breakthroughs were made by disciplinary outsiders, who were not blinkered and blinded by the contemporary paradigm (24, 25).

Self-styled “skeptics” (26) like to denigrate heterodox views as “pseudo-science” just because those views are heterodox, ignorant of the fact that there are no general criteria available by which to judge whether something is “scientific”; and they also tend to be ignorant of the fact that “scientific” cannot be translated as “true” (2, 27, 28).

Most relevant to the question of the “truth” of scientific knowledge is that science and scientists tend to occupy something of a pedestal of high prestige in contemporary society; perhaps because when we think of “science” we also tend to think “Einstein” and other such celebrated innovators. But nowadays there are a great many run-of-the-mill scientists, and even considerably incompetent ones: “Science, like the military, has its hordes of privates and non-coms, as well as its few heroes (from all ranks) and its few field marshals” (29)—which serves to explain, perhaps, some of the examples of sheer incompetence displayed in HIV/AIDS matters (30). Pertinent here is the fact that much medical research is carried out by people trained as doctors; training for physicians’ work is by no means training for research.

——————-

Those are some of the ways in which the commonplace conventional wisdom is wrong about science, but there are plenty more (24, 25, 32, 33). Those misconceptions play an important role in the hold that HIV/AIDS theory continues to have on practitioners, commentators, and observers, and they need to be pointed out in answer to the natural question often put to rethinkers: “But how could everyone be so wrong for so long?”

That’s why Part II of my book (31) has the title, “Lessons from History”, with chapters on “Missteps in modern medical science”, “How science progresses”, and “Research cartels and knowledge monopolies”. (About research cartels and knowledge monopolies, see also 34, 35). I’m enormously grateful to Virginia Tobiassen, the fine editor who helped me with the book, not least for the opportunity to augment the technical Part I with this Part II and the Part III that recounts the specific details of how HIV/AIDS theory went so wrong.

I’ve come to understand a great deal more since the book came out, among other things that perhaps the crucial turn onto the wrong path came when Peter Duesberg’s rigorously researched and documented argument against HIV/AIDS theory went without comment, even in the face of an editorial footnote promising such a response (36). Just as virologists ignored Duesberg’s substantive critiques, so epidemiologists ignored the informed critiques by Gordon Stewart (37) and immunologists ignored the fully documented questions raised by Robert Root-Bernstein (38); and just about everyone in mainstream fields ignored John Lauritsen’s insights into data analysis and his insider’s knowledge of interactions among gay men (39).

Peer review in HIV/AIDS “science” lapsed fatally from the beginning and has not yet recovered. Thus the only real safeguard of reliability was lost, it sometimes seems irretrievably.

References:
1. “Are chemists not scientists?”—p. 19 ff. in reference 2.
2. Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992.
3. —— , A consumer’s guide to science punditry, Chapter 2 in Science Today: Problem or Crisis?, ed. R. Levinson & J. Thomas, Routledge, 1997.
4. —— , Two kinds of knowledge: maps and stories, Journal of Scientific Exploration 9 (1995) 257-75.
5. —— , The anti-science phenomenon in science studies, Science Studies 9 (1996) 34-49.
6. —— , Disciplines as cultures, Social Epistemology 4 (1990) 215-27.
7. —— , Barriers against interdisciplinarity: Implications for studies of Science, Technology, and Society (STS), Science, Technology, & Human Values 15 (1990) 105-19.
8. Chapters 11, 14, 15 (in particular) in reference 9.
9. Henry H. Bauer, Fatal Attractions: The Troubles with Science, Paraview, 2001.
10. Chapters 15, 16 in Henry H. Bauer (as ‘Josef Martin’), To Rise above Principle: The Memoirs of an Unreconstructed Dean, University of Illinois Press.
11. Chapters 4, 5 in reference 9.
12. Chapter 5 in reference 2.
13. Andrew Stark, Conflict of Interest in American Public Life, Harvard University Press, 2000.
14. Joel Kauffman, Malignant Medical Myths: Why Medical Treatment Causes 200,000 Deaths in the USA each Year, and How to Protect Yourself, Infinity Publishing, 2006.
15. John Abramson, Overdosed America: The Broken Promise of American Medicine, HarperCollins, 2004.
16. Marcia Angell, The Truth about the Drug Companies: How They Deceive Us and What To Do about It, Random House, 2004.
17. Jerry Avorn, Powerful Medicines: The Benefits, Risks, and Costs of Prescription Drugs, Knopf, 2004.
18. Merrill Goozner, The $800 Million Pill: The Truth behind the Cost of New Drugs, University of California Press, 2004.
19. Jerome Kassirer, On the Take: How Medicine’s Complicity with Big Business Can Endanger Your Health, Oxford University Press, 2004.
20. Sheldon Krimsky, Science in the Private Interest, Rowman and Littlefield, 2003.
21. David Willman, Los Angeles Times, 7 December 2003: “Stealth merger: Drug companies and government medical research”, p. A1; “Richard C. Eastman: A federal researcher who defended a client’s lethal drug”, p. A32; “John I. Gallin: A clinic chief’s desire to ‘learn about industry’”, p. A33; “Ronald N. Germain: A federal lab leader who made $1.4 million on the side”, p. A34; “Jeffrey M. Trent: A government accolade from a paid consultant”, p. A35; “Jeffrey Schlom: A cancer expert who aided studies using a drug wanted by a client”, p. A35.
22. Henry H. Bauer, “The fault lies in their stars, and not in them — when distinguished scientists lapse into pseudo-science”, Center for the Study of Science in Society, Virginia Tech, 8 February 1996; “The myth of the scientific method”, 3rd Annual Josephine L. Hopkins Foundation Workshop for Science Journalists, Cornell University, 26 June 1996.
23. Chapters 9, 10 in reference 9.
24. Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002.
25. Henry H. Bauer, The progress of science and implications for science studies and for science policy, Perspectives on Science 11 (#2, 2003) 236-78.
26. The mother of all “skeptical” groups is CSICOP, publisher of Skeptical Inquirer; see George P. Hansen, “CSICOP and the Skeptics: an overview”, Journal of the American Society for Psychical Research, 86 (#1, 1992) 19-63.
27. Chapters 1-3, 6, 7 in reference 9.
28. Henry H. Bauer, Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies, University of Illinois Press, 2001.
29. “Science as an institution”, pp. 303-6 in Henry H. Bauer, Beyond Velikovsky: The History of a Public Controversy, University of Illinois Press, 1984.
30. Pp. 110, 192, 195 in reference 31.
31. Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007.
32. Chapters 1, 4, 6, 7 in reference 2.
33. Chapter 12 in reference 9.
34. Chapter 13 in reference 9.
35. Henry H. Bauer, Science in the 21st century: knowledge monopolies and research cartels, Journal of Scientific Exploration 18 (2004) 643-60.
36. Peter H. Duesberg, Retroviruses as carcinogens and pathogens: expectations and reality, Cancer Research 47 (1987) 1199–220; Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation, Proceedings of the National Academy of Sciences, 86 (1989) 755–64.
37. Gordon T. Stewart, A paradigm under pressure: HIV-AIDS model owes popularity to wide-spread censorship. Index on Censorship (UK) 3 (1999).
38. Robert Root-Bernstein, Rethinking AIDS—The Tragic Cost of Premature Consensus, Free Press, 1993.
39. John Lauritsen, The AIDS War: Propaganda, Profiteering and Genocide from the Medical-Industrial Complex, Asklepios, 1993. ISBN 0-943742-08-0.

Posted in experts, HIV does not cause AIDS, HIV skepticism