In my contemplation of the purity of scientific pursuit, I came across an article by Frank Furedi (Science’s peer system needs a review, The Australian, 20 February 2010) that challenges a myth central to the mysticism of science: that the esoteric peer review system is impartial and independent. As researchers commonly know, it is neither, and it is often a major hindrance to research that could genuinely expand disciplines.

Furedi highlights a range of problems, including:

  1. Scientists can use the editorial process to slow down the publication of views that counter their own.
  2. The review process stifles innovative methodologies that fall outside a discipline’s commonly accepted paradigms.
  3. Rivals are often not best placed to critique the work of others.
  4. Peer review is often a ‘mates club’ in which journal editors and their friends accept one another’s publications.
  5. Advocacy science often leads to the publication of articles based on perceived societal impact, not scientific merit.
  6. Peer review creates a standard that stifles free debate through the claim that anything not peer-reviewed is without value.

This short article joins a growing body of work questioning the robustness of the scientific industry:

Charlton, B. G. (2009). Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity. Medical Hypotheses, 72(3), 237-243.

Horrobin, D. F. (2001). Something rotten at the core of science? Trends in Pharmacological Sciences, 22(2), 51-52.

This should not be interpreted to mean that peer review should be discarded; its absence can have damaging consequences. Consider the editor of Medical Hypotheses, an oddity in the world of scientific publishing because it does not practice peer review, who lost his job over the publication of a paper claiming that HIV does not cause AIDS. Bruce Charlton, who succeeded the journal’s founder, David Horrobin (yes, one and the same!), in 2003, decided on his own what got published, although he occasionally consulted another scientist, and manuscripts were only very lightly edited.

The point is that peer review is not a stamp of credibility. Nor is it inherently good for science, and it does not guarantee that the science will be good. The quality of science, or rather of scientific work, depends far more on the quality of the logic and on the simplicity and elegance of the supporting evidence (statistical or not). With this in mind, I draw your attention to a recent short piece by Christopher Peterson and Nansook Park in The Psychologist, May 2010, 23(5):

Abstract

A special issue of Perspectives on Psychological Science, published by the Association for Psychological Science, invited opinions from a variety of psychologists, including us (Diener, 2009). Our advice was to keep it simple (Peterson, 2009). We offered this advice not because simplicity is a virtue, although it is (Comte-Sponville, 2001). Rather, the evidence of history is clear that the research studies with the greatest impact in psychology are breathtakingly simple in terms of the questions posed, the methods and designs used, the statistics brought to bear on the data, and the take-home messages.

Simplicity, logic and empirical data, not peer acceptance, are the foundation of quality science.