Public Consultation on the NHMRC Draft Principles of Peer Review submission

ID: 4
This submission reflects the views of:
Organisation Name: Faculty of Medicine, University of New South Wales
Please identify the best term to describe the Organisation: Educational Institution – tertiary

General Comments

The document 'Draft Principles of Peer Review' focuses almost solely on issues of due process. Unfortunately, having defined peer review as "the impartial and independent assessment of research by others", the document then entirely ignores the methods by which that assessment is to be carried out.

If NHMRC wishes to have its peer review respected, it must focus on the quality of assessment, not just on process. The quality of an assessment method is reflected in its reliability and validity. Remarkably, neither of these terms is used anywhere in the document.

To clarify: the reliability of an assessment refers to the extent to which independent or repeated assessments of the same work are consistent, while validity refers to the extent to which the assessment measures what it purports to measure. The 'Draft Principles' document does state that assessment needs to be "accurate and honest", but offers no indication of how this is to be achieved.
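
As a purely illustrative sketch (all scores below are invented), reliability can be gauged by the agreement between independent assessors scoring the same applications, for example:

    # Illustrative only: invented scores for ten applications,
    # each assessed independently by two reviewers.
    from statistics import correlation  # Python 3.10+

    reviewer_a = [4, 6, 5, 7, 3, 5, 6, 4, 7, 5]
    reviewer_b = [5, 4, 6, 6, 3, 7, 4, 5, 6, 4]

    # A crude reliability indicator: how consistently do two
    # independent assessments order the same applications?
    # (1.0 = perfect agreement, 0 = no agreement)
    print(f"inter-rater correlation: {correlation(reviewer_a, reviewer_b):.2f}")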

A clear demonstration of the unreliability of the current approach to peer review of Project Grant applications is the numerous documented instances of researchers who enhanced, updated and resubmitted applications initially judged to be close to the cutoff for funding, only to receive scores lower than the original. When applications that previously received near-miss scores are subsequently rejected out of hand as non-competitive and "not for further consideration", something is seriously wrong.
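
Part of this pattern is exactly what noisy measurement predicts. In the hypothetical simulation below (all parameters are invented), applications selected for scoring just below a cutoff tend to receive lower scores on re-review even though their true quality is unchanged, a regression-to-the-mean effect that any reliable system must account for:

    # Hypothetical simulation (all parameters invented): with random
    # assessment noise, applications that scored just below a funding
    # cutoff tend to score lower on re-review even when their true
    # quality has not changed (regression to the mean).
    import random

    random.seed(1)
    true_quality = [random.gauss(5.0, 1.0) for _ in range(10_000)]
    NOISE = 0.8  # assumed standard deviation of assessment error

    first = [q + random.gauss(0, NOISE) for q in true_quality]
    second = [q + random.gauss(0, NOISE) for q in true_quality]

    # "Near-miss" applications: first-round score just below a
    # notional cutoff of 6.5.
    near_miss = [i for i, s in enumerate(first) if 6.0 <= s < 6.5]
    lower = sum(second[i] < first[i] for i in near_miss) / len(near_miss)
    print(f"{lower:.0%} of near-miss applications score lower on re-review")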

In fact there are at least three major flaws in the current organisation of Grant Review Panels. Firstly, because panels are composed of volunteer members who are assigned a huge workload, almost no one reads all the applications allocated to their panel; the opinions of the two spokespersons therefore largely determine the outcome of an application, yet it is not uncommon for a spokesperson to be unfamiliar with the specific area of research. Secondly, because the composition of each panel changes from year to year as NHMRC seeks to allow as many researchers as possible to participate, many spokespersons are inexperienced. Thirdly, not only is a revised and improved application very likely to go to different spokespersons the following year, but those spokespersons are given no reference to any earlier version of the application.

Collectively, these processes virtually guarantee that within each panel, the assessment will not be reliable.

Another serious issue is how scoring data are used. The current system requires all panel members to make qualitative judgements about scientific quality, significance and innovation, and the track record of the investigators, recording these judgements as category scores for each application. These are discrete ordinal data, which by definition indicate relative position but on which one cannot perform meaningful arithmetic. Yet they are then treated as continuous numerical data for the purposes of ranking. The pseudo-precision of reporting averaged scores to three decimal places would be almost comical were it not being (mis)used for comparison between panels in such a high-stakes assessment.
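
A hypothetical illustration (the category scores below are invented) of how averaging ordinal scores manufactures precision:

    # Illustrative only: invented panel category scores for two
    # applications. The scores are ordinal categories, yet the system
    # averages them as if they were continuous measurements.
    scores_x = [4, 5, 4, 6, 5, 4, 5]
    scores_y = [5, 4, 5, 4, 5, 5, 4]

    mean_x = sum(scores_x) / len(scores_x)
    mean_y = sum(scores_y) / len(scores_y)

    # Reported to 3 decimal places, the gap looks precise...
    print(f"X: {mean_x:.3f}  Y: {mean_y:.3f}  gap: {mean_x - mean_y:.3f}")

    # ...but the gap (about 0.14 of one category) is far below the
    # resolution of the underlying judgement, so ranking X above Y on
    # this basis is pseudo-precision, not measurement.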

This process essentially ensures that the overall assessment will not be valid.

Indeed, the methods currently in use for assessment of Project Grants would not meet standards for design of assessments in high school or in undergraduate university degrees.

A set of possible approaches to addressing these problems is outlined below.

  1. Use a limited number of assessment panels composed of senior (ex-)investigators employed (i.e. paid appropriately) by NHMRC for a minimum of 5 years, to provide high-quality assessments and continuity of assessment and feedback to applicants. Have these panels work throughout the year.
  2. Provide at least 3 submission/review cycles per year, to distribute the workload and expedite the process. Consider a 2-stage submission process, with an initial Expression of Interest followed by a full application, as a mechanism to eliminate non-competitive proposals early and reduce the review burden on the panels.
  3. Ensure that submitted applications receive real feedback (not just a meaningless score). Do not allow immediate resubmission (thus, if cycles were 4 months apart, one could not resubmit in the next cycle, only in the one after that). Ensure that a resubmitted application is allocated at least one of its original two spokespersons.
  4. Eliminate external assessments: they are currently given hardly any credence by GRPs, other than as a source of points of criticism, and they are both inconsistent and very inefficient to obtain. This would also eliminate rebuttals, which are similarly inefficient and given even less credence by GRPs.
  5. Going beyond the assessment process itself, consider pre-commitment by NHMRC of a specified percentage of funds to basic science/clinical investigation/large-scale clinical trials/public health/health services research, so that committees rank within each area and there is no attempt to compare things that cannot be compared.

Consistent with the commitment to continuous improvement in the 'Draft Principles' document, when the current approach to assessment is revised, the reliability and validity of the new system should be evaluated on an ongoing basis.
