Peering at peer review revealed high degree of chance associated with funding of grant applications

J Clin Epidemiol. 2006 Aug;59(8):842-8. doi: 10.1016/j.jclinepi.2005.12.007. Epub 2006 Mar 27.

Abstract

Background and objectives: There is a persistent degree of uncertainty and dissatisfaction with the peer review process, underlining the need to validate current grant-awarding procedures. This study compared the CLassic Structured Scientific In-depth two-reviewer critique (CLASSIC) with a method in which all panel members independently rank the applications (RANKING). Eleven reviewers reviewed 32 applications for a pilot project competition at a major university medical center.

Results: The degree of agreement between the two methods was poor (kappa = 0.36). The top-rated project in each stream would have failed the funding cutoff with a frequency of 9% and 35%, depending on which pair of reviewers had been selected. Four of the top 10 projects identified by RANKING had a greater than 50% chance of not being funded by the CLASSIC ranking. Ten reviewers provided optimal consistency for the RANKING method.
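For context, the kappa statistic cited above is a standard chance-corrected measure of agreement (Cohen's kappa); the definition below is the conventional formula and is not reproduced from the article itself:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where p_o is the observed proportion of applications on which the two methods agree and p_e is the proportion of agreement expected by chance alone. By common benchmarks (e.g., Landis and Koch), a value of 0.36 indicates only fair agreement beyond chance.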

Conclusions: This study found a considerable amount of chance associated with funding decisions under the traditional method of assigning each grant application to two main reviewers. We recommend using the all-reviewer ranking procedure to arrive at decisions about grant applications, as this removes the impact of extreme reviews.

MeSH terms

  • Canada
  • Humans
  • Observer Variation
  • Peer Review, Research / methods
  • Peer Review, Research / standards*
  • Research Support as Topic*