This statement is a direct quotation from a 2006 article, “Scientific Journals are ‘faith based’: is there science behind Peer review?”, by a group of public health upstarts suggesting that the lack of scientific method in the peer review process could be remedied by alternative methods of scholarly quality control and new forms of data-driven review. Predictably, an immediate respondent to this article questioned the authors’ research rigor, yet also sought common ground to “agree on the objectives of peer review and develop appropriate validated tools that can measure its effects.”
Oh, there is one other thing I noticed. The respondent admitted her competing interest as an employee of a journal publisher, hired to conduct research on peer review and publishing. Nice that we know it. In most scenarios of journal peer review, there is no way to see or know whether a competing interest exists among a particular article’s peer reviewers. A few experiments with, and discussions of, open peer review have not led to a shift of any large measure.
The most promising game-changer in bringing a scientific method to scholarly publishing today was probably the decision of PLoS ONE to offer two-stage peer review: an initial determination of scientific and methodological soundness, followed by post-publication usage metrics that provide data-driven evidence of an article’s significance to a discipline. In its 2009 debut of the metrics, the PLoS blog explained:
First, we are focusing on articles rather than journals. The dominant paradigm for judging the worth of an article is to rely on the name and the impact factor of the journal in which the work is published. But it’s well known that there is a strong skew in the distribution of citations within a journal – typically, around 80% of the citations accrue to 20% of the articles. So the impact factor is a very poor predictor of how many citations any individual article will obtain, and in any case, journal editors and peer reviewers don’t always make the right decision. Indicators at the article level circumvent these limitations, allowing articles to be judged on their own scientific merits.
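The skew the PLOS post describes can be made concrete with a toy calculation. The numbers below are synthetic and purely illustrative (not drawn from any real journal), but they show why a journal-wide mean, which is essentially what an impact factor is, says little about a typical individual article:

```python
# Illustrative sketch with made-up citation counts, showing how a skewed
# distribution makes a journal-level average a poor predictor of any
# single article's citations.
from statistics import mean, median

def top_share(citations, fraction=0.2):
    """Fraction of all citations earned by the top `fraction` of articles."""
    ranked = sorted(citations, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical counts for 10 articles published in one journal year
citations = [120, 45, 8, 5, 4, 3, 2, 2, 1, 0]

print(top_share(citations))   # top 20% of articles hold roughly 87% of citations
print(mean(citations))        # journal-wide average: 19.0
print(median(citations))      # typical article: 3.5
```

Here the journal's average of 19 citations per article describes almost none of its articles; most sit near the median of 3.5, while two outliers dominate. Article-level indicators sidestep this by measuring each paper directly.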
PLoS ONE’s decision to implement the scholarly “wisdom of crowds” has attracted both authors and readers less concerned with the journal brand and more concerned with immediate access, methodological credibility, and usage. Take for example a 2011 article in PLoS ONE: NSAID Use Selectively Increases the Risk of Non-Fatal Myocardial Infarction: A Systematic Review of Randomised Trials and Observational Studies. Here are the metrics for the article as of today:
The metrics show a positive trend, times cited from multiple sources, and impressive usage statistics. SPARC will host Building New Measures for Impact: Article Level Metrics, a webcast on April 12th featuring Peter Binfield, that will focus on article-level metrics. Yes, I know what you are thinking. His competing interest is that he is merely the publisher of PLoS ONE. At least we know it and can take it into account.