The CS Monitor had an interesting article on peer review.
My view is that peer review is a highly effective filter, but one should not expect too much from it. It stops most errors from being published, yet it cannot catch every problem. Reviewers occasionally miss an obvious mistake, and some kinds of error they usually cannot catch at all: they cannot tell whether the author misread an instrument, wrote a number down wrong, or used contaminated chemical samples in an experiment. Nor can peer review usually detect clever fraud, such as the rare cases where the work being reported was never actually done.
But peer review is only the first of many levels of testing and quality control applied to scientific claims. When an important or novel claim is published in a journal, other scientists test the result by trying to replicate it, often using different data sets, experimental designs, or analytic techniques. One scientist might make a mistake, run a sloppy experiment, or misinterpret their results (and peer reviewers might fail to catch the error), but it is unlikely that several independent groups will all make the same mistake. Consequently, as other scientists repeat an observation, or examine a question with different approaches and get the same answer, the community increasingly comes to accept the claim as correct.
Peer review is also important for evaluating proposals to funding agencies, as well as decisions like tenure and promotion, and it is hard to imagine how an alternative system for those tasks would work. I suspect that changes in the peer review system will eventually occur as our ways of communicating change, but those changes will be slow.