Sunday, May 26, 2013

Peer review follies

Abstract

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

Gated version of the full paper here.

Hat tip: John Bound

2 comments:

  1. Any thoughts on how different the findings would be if we tried replicating this in economics? The fact that editors and referees were generally unable to detect the duplicate nature of the submissions is surprising to me (given how often I see professors connect work presented at seminars to other related papers in the field), but the eventual outcome (editors rejecting eight of the nine resubmissions) is not.

  2. The problem with this study is that there are two things going on. The first is that the journal is taking a new draw from the distribution of referees. As everyone who has been part of the journal process knows, referees do not always agree, though my experience is that they are definitely positively correlated. One could, with some assumptions, try to back out a correlation from the data in this study (a rough sketch of one such calculation appears after these comments). The other thing that is going on is that they are changing the authors to people the referees will not have heard of, at places they will not have heard of. That is quite a different treatment from just taking a new draw, and I suspect that is where most of the treatment effect is coming from.

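The second comment's idea of backing out a referee correlation can be made concrete. Below is a minimal simulation sketch under assumptions of my own, not the paper's or the commenter's: each paper has a latent quality, each referee observes a noisy signal of it with cross-referee correlation rho, the threshold is pinned so the marginal rejection rate matches the journals' 80% figure, "published" means both original referees accepted, and a resubmission is rejected whenever either of two fresh referees recommends rejection.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the "back out a correlation" calculation from comment 2.
# Model (my assumption, not from the paper): paper quality q ~ N(0,1);
# referee i sees s_i = sqrt(rho)*q + sqrt(1-rho)*eps_i, eps_i ~ N(0,1),
# and recommends rejection iff s_i < c. The threshold c is chosen so
# the marginal rejection rate matches the journals' 80% figure.

rng = np.random.default_rng(0)
n = 1_000_000
c = norm.ppf(0.80)  # marginal P(s_i < c) = 0.80

for rho in [0.0, 0.2, 0.4, 0.6, 0.8]:
    q = rng.standard_normal(n)
    # Original review: the paper was published, so assume both
    # original referees recommended acceptance.
    s1 = np.sqrt(rho) * q + np.sqrt(1 - rho) * rng.standard_normal(n)
    s2 = np.sqrt(rho) * q + np.sqrt(1 - rho) * rng.standard_normal(n)
    published = (s1 >= c) & (s2 >= c)
    # Resubmission: a fresh pair of referees, with a crude editorial
    # rule of rejecting whenever either referee recommends rejection.
    t1 = np.sqrt(rho) * q + np.sqrt(1 - rho) * rng.standard_normal(n)
    t2 = np.sqrt(rho) * q + np.sqrt(1 - rho) * rng.standard_normal(n)
    rejected = (t1 < c) | (t2 < c)
    p = rejected[published].mean()
    print(f"rho = {rho:.1f}: P(reject resubmission | published) = {p:.2f}")
```

At rho = 0 the conditional rejection probability is 1 - 0.2^2 = 0.96; it falls as rho rises, and the rho at which it crosses the observed 8/9 ≈ 0.89 is the implied referee correlation. Note the sketch deliberately ignores the name-and-institution change: if that change independently raises rejections, as the commenter suspects, the rho matching 0.89 understates the true correlation among referees.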