The Independent Review of the Role of Metrics in Research Assessment and Management concludes that “no metric can currently provide a like-for-like replacement for REF peer review.” The report goes on to say, “Peer review is not perfect, but it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for national assessment exercises like the REF.”
“It is not currently feasible to assess research outputs or impacts in the REF using quantitative indicators alone,” the Independent Review told the four UK funding bodies that manage the Research Excellence Framework (REF).
The review’s analysis covered 149,670 individual outputs and “found only weak correlations between REF scores and individual metrics, significantly lower correlations for more recently published works, and highly variable coverage of metrics across subject areas”.
Over 150 responses to the review’s call for evidence also revealed ‘considerable scepticism’ among researchers, universities and learned societies about the broader use of metrics in research assessment and management. “Concerns include the ‘gaming’ of particular indicators, uneven coverage across individual disciplines, and effects on equality and diversity across the research system.”
The review was chaired by James Wilsdon, professor of science and democracy at the University of Sussex, and supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and research administration. Its findings are based on 15 months of evidence-gathering and consultation undertaken by HEFCE with data provided by Elsevier, “including the most comprehensive analysis to date of the correlation between REF scores at the paper-by-author level and a set of 15 bibliometrics and altmetrics.”
The review’s report, ‘The Metric Tide’, focuses on the potential uses and limitations of research metrics and indicators, “exploring the use of metrics within institutions and across disciplines.”
A correlation analysis of REF2014 results at the output-by-author level showed that individual metrics give significantly different outcomes from the REF peer review process. “Publication year was a significant factor in the calculation of correlation with REF scores, with all but two metrics showing significant decreases in correlation for more recent outputs.”
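The review does not publish its analysis code, but a minimal sketch of this kind of per-year rank-correlation check, assuming an illustrative tabular dataset (the column names, scores and citation counts below are invented for demonstration, not the review’s data), might look like this in Python:

```python
# Hypothetical sketch: rank correlation between peer-review scores and a
# single citation metric, broken down by publication year. All data and
# column names here are illustrative, not the review's actual dataset.
import pandas as pd
from scipy.stats import spearmanr

# One row per assessed output: REF quality score (1-4) plus one metric.
outputs = pd.DataFrame({
    "pub_year":  [2008, 2008, 2008, 2008, 2012, 2012, 2012, 2012],
    "ref_score": [4, 3, 2, 1, 4, 3, 2, 1],
    "citations": [150, 90, 40, 5, 60, 70, 20, 35],
})

# Correlate the metric with REF scores within each publication year; the
# review reports that such correlations weaken for more recent outputs.
for year, group in outputs.groupby("pub_year"):
    rho, p = spearmanr(group["ref_score"], group["citations"])
    print(f"{year}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```

Spearman’s rank correlation is a natural choice here because REF scores are ordinal (unclassified to 4*), though the review itself compared 15 different bibliometric and altmetric indicators rather than the single citation count shown in this sketch.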
Other key findings from the review are as follows:
- Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. However, a significant minority are enthusiastic about greater use of metrics in these contexts, if appropriate care is exercised and data infrastructures improved.
- Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required. Greater clarity is needed about which indicators are most useful for specific disciplines, and why.
- Inappropriate indicators create perverse incentives. There is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples.
- The data infrastructure that underpins the use of metrics and information about research remains fragmented, with insufficient interoperability between systems. Common data standards and transparent processes are needed to increase the robustness and trustworthiness of metrics.
- In assessing impact in the REF, as with outputs, it is not currently feasible to use quantitative indicators in place of narrative case studies, as doing so may narrow the definition of impact to what the available indicators can capture. However, there is scope to enhance the use of data in assessing research environments, provided data are sufficiently contextualised.
The report also sets out 20 recommendations for further work and action by stakeholders across the UK research system. These recommendations propose actions for a range of stakeholders in the following areas: supporting the effective leadership, governance and management of research cultures; improving the data infrastructure that supports research information management; increasing the usefulness of existing data and information sources; using metrics in the next REF; and coordinating activity and building evidence.
The report makes specific recommendations for HEFCE and the other HE funding bodies on using metrics in the next REF, covering the assessment of outputs, impact and the research environment.
As part of the recommendations, the report suggests the establishment of a ‘Forum for Responsible Metrics’, “which would bring together research funders, HEIs and their representative bodies, publishers, data providers and others to work on issues of data standards, interoperability, openness and transparency.”
Professor James Wilsdon said, “The metric tide is rising. But we have the opportunity – and through this report, a serious body of evidence – to influence how it washes through higher education and research. We are setting out a framework for responsible metrics, which I hope research funders, university leaders, publishers and others can now endorse and carry forward.”
David Sweeney, Director of Research, Education and Knowledge Exchange, HEFCE, said, “This review provides a comprehensive and soundly reasoned analysis of the current and future role of metrics in research assessment and management, and should be warmly welcomed. The findings and recommendations of this review are clearly far-reaching, with implications for a wide range of stakeholders, including research funders, governments, higher education institutions, publishers and researchers.”
You can read the report in full here.