The fallacy of pre-publication peer review
Posted by Henry Bauer on 2011/05/17
An alternative title for this piece might be,
“The mainstream conspiracy of peer review”
Somehow it has become the conventional wisdom, within and without the scientific community, that the reliability and quality of science is safeguarded when grants are awarded only after vetting by established experts and research outcomes are published only after approval from established experts.
To the contrary: The important testing of scientific claims occurs only after publication of those claims, whereas pre-publication peer review serves more effectively to censor truly original advances than to improve the quality of the research literature.
These points have long been known, though chiefly within the history, sociology, and philosophy of science and within science & technology studies (STS). Few working scientists know anything of those fields, and they labor happily under such illusions as the misguided belief that there is a universal “scientific method” guaranteeing objectivity and reliability.
The barrier that peer review under mainstream auspices sets against truly innovative work is discovered typically by the individuals who find their ground-breaking advances scorned, censored, rejected, and then rediscovered only after perhaps a very long time, occasionally posthumously. (Stigler’s Law, an illustration of itself, holds that a discovery is named after the last person to discover it, not the first.) Sociologist Bernard Barber described the “Resistance by scientists to scientific discovery” fifty years ago (Science 134 [1961]: 596-602). Biologist Gunther Stent four decades ago coined the term “prematurity” to describe scientific breakthroughs too far ahead of the mainstream’s conventional views to be accepted (“Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, 84-93). It took further decades before even the STS communities focused in organized fashion on these insights (Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002).
Journal editors are in prime position to recognize the wet blanket of banally routine attitudes that peer review throws over original, counter-mainstream claims. Thus Richard Horton, editor of The Lancet, wrote:
“Peer review . . . is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003: 306).
A full discussion of these matters has been published by Richard Smith, former editor of the British Medical Journal (“Classical peer review: an empty gun”, Breast Cancer Research 2010, 12 [suppl. 4] S13). Smith points out that all the studies of the consequences and effects of peer review as normally practiced have found no evidence for its vaunted benefits:
— “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research”.
— Peer review does not prevent the publication of unimportant banalities that clutter up the literature and lower its quality: “Many studies are never cited once, most disappear within a few years, and very few have real, continuing importance”.
That has long been known, of course, to competent observers in STS (see for example J. R. & S. Cole, Social Stratification in Science, University of Chicago Press, 1973: 228; Henry W. Menard, Science: Growth and Change, Harvard University Press, 1971: 99; Derek J. de Solla Price, Little Science, Big Science . . . and Beyond, Columbia University Press, 1963/1986, chapter 2).
It is not surprising, then, that John Ziman estimated that perhaps 90% of research articles in physics journals turn out to be erroneous in some way and thus not worth citing (Reliable Knowledge, Cambridge University Press, 1978, p. 40).
That much of the scientific community as well as science journalists and public pundits about science have remained ignorant of all this is illustrated by the brouhaha of astonishment that came when John Ioannidis showed that much of the medical literature is simply false [“Why most published research findings are false”, PLoS Med, 2005, 2:e124], often because “the standard of statistics in medical journals is very poor” [D. G. Altman, “Poor-quality medical research: what can journals do?” JAMA 287 (2002) 2765-7; “The scandal of poor medical research”, BMJ 308 (1994) 283-4]; so that “less than 1% of the studies in most journals” is “both scientifically sound and important for clinicians” [Haynes, “Where’s the meat in clinical journals?”, ACP Journal Club 119 (1993) A22-3]. Drummond Rennie, an editor of the Journal of the American Medical Association, remarked that “There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print”. As I pointed out recently, the purpose of publishing “research” articles is to pad vitae and lay the ground for getting more grants.
— Pre-publication peer review thus confers no benefit, since it does not ensure quality, and it causes damage by censoring important work. Even with journeyman studies that add something potentially useful to the literature, the costs incurred by peer review are not commensurate with any value added by the process.
— By contrast, peer review does prevent publication of work vindicated later, perhaps much later, as an important advance. Smith fails to cite Barber, who may have been the first to offer a host of specific illustrations of this. Smith does, however, point to the evidence that bias strongly influences reviewers’ opinions and that abuses occur: not only willful criticism of those whose views differ from the reviewer’s, but even outright dishonesty such as reviewers’ misappropriation of supposedly confidential material.
Smith sums it all up thus: “The problem with filtering before publishing, peer review, is that it is an ineffective, slow, expensive, biased, inefficient, anti-innovatory, and easily abused lottery: the important is just as likely to be filtered out as the unimportant. The sooner we can let the ‘real’ peer review of post-publication peer review get to work the better”.
Richard Smith cites David Horrobin’s critique of peer review, though he fails to mention Horrobin’s founding of Medical Hypotheses, the journal that practiced what Smith and Rennie and Horton preach — until ignorant administrators at Elsevier bowed to pressure from HIV/AIDS vigilantes (€L$€VI€R and the NEW “Medical Hypotheses”). Smith himself was editor of the short-lived Cases Journal (~2008-2010) whose rationale and practices were similar to those of Medical Hypotheses.
* * * * * * * *
The degree to which bias, self-interest and vested interests have corrupted science and medicine is illustrated by the fact that editors of leading journals write about the deficiencies of peer review but do not even try to change the system, despite the fact that they are in prime position to do so. Rather they actively collaborate, and entrench the system’s deficiencies: a group of Lancet editors ratified Elsevier’s censorship of Medical Hypotheses, and Horton’s Lancet has itself censored evidence-based critiques of HIV/AIDS theory by Gordon Stewart.
I have myself been editor of a peer-reviewed journal, and I understand the wide latitude that editors have in their choice of reviewers, in holding reviewers to standards of objectivity, and in bringing even counter-mainstream views to wider notice by publishing them together with reviewers’ demurrals. Editors of leading journals need not simply follow the implicit orders of the mainstream’s conventional wisdom; the more shame to them for doing so even as they recognize that they shouldn’t. It is possible to do better. I’ve found, for instance, that the Journal of American Physicians and Surgeons practices pre-publication peer review in a manner that is useful rather than burdensome: the editor demands that reviewers respond promptly, chooses alternatives when reviewers are tardy or unresponsive, and holds reviewers to evidence-based commentary that helps authors to improve their manuscripts.
However, the almost universal hegemony exerted by current counter-productive practices is illustrated by the fact that Richard Smith’s exposure of the fallacy of pre-publication peer review was published in Breast Cancer Research rather than where it belongs, in Nature or Science or The Lancet or JAMA or the New England Journal of Medicine, since it is of concern to everyone involved in research and practice in science and medicine.
The hold that current corrupt practices have over academe and medicine and science is further illustrated by the avalanche of books by informed insiders denouncing the corruption — to no visible avail or effect. One is reminded of the continual expressions of horror at the corrupt state of intercollegiate athletics, expressions from the very people whose positions — as university presidents or as members of the Knight Commission — would seem to make it possible for them to actually do something about it. Instead, the most prominent critical voices are those of university presidents who are safely retired.
Richard Smith’s article was drawn to my attention by Dave Smith (no relation), who has himself blogged about the problems with peer review and about the piece by Richard Smith.