Harnessing the Metric Tide: indicators, infrastructure, and priorities for responsible research assessment, or The Metric Tide Revisited, as it will surely be colloquially known, is a superb piece of work.
The rapid review was conducted through a handful of expert panels, coupled with a literature review and supported by advice from international experts.
Ostensibly, this is an exercise to provide a brief review of the role of metrics in research management and assessment. It deals with these issues, as discussed in detail on Wonkhe, but it also lures us into considering whether research incentives, funding, and accountability mechanisms are fit for purpose.
Buried within section two is a cheat sheet on the eight cycles of UK research assessment since 1986. As the authors rightly point out, while research assessment is new in terms of the history of higher education, it has a lineage that precedes the invention of the world wide web.
You affect the world by what you browse
Clearly, the methodologies underpinning research assessment have evolved to suit a post-internet world. Although there are issues with the consistency and interoperability of systems and data, the driving theme of the paper is the appropriate use and limitations of data. The biggest threat to the benefits of the REF is that, whether for expediency, cost, or frankly a belief from the sector that more data equates to fairer results, metrics gain further primacy and qualitative assessment is phased out over time.
The reason for qualitative analysis is that, through measures like environment statements, it is possible to place providers within the wider context of their performance and to reward them for the environment they provide to support research and enable impact. This work is assessed through narrative evidence on the environment, along with data on research income, income in kind, and the completion of doctoral degrees. For the first time, in 2021, HEIs also submitted institutional-level environment statements to inform and contextualise submissions.
Therefore, although methodologies for assessment are consistent, there is an acknowledgement that the operating environments of HEIs differ. This is reflected both in the awarding criteria and in the piloting of contextual environment statements.
The report sets out that the purpose of the REF is the allocation of QR funding. Since REF 2017, an additional purpose has been to provide accountability for public investment by demonstrating the benefits of research, and to provide benchmarks and what the report refers to as “reputational yardsticks”. In addition, the Stern Review proposed three further objectives: to inform strategic decisions about national priorities; to create performance incentives; and to inform universities’ own decisions on resource allocation and research investment. The report notes that it is beyond the scope of this review to consider a reformulation of the purpose of the REF; but it is not beyond the speculative wants of Wonkhe.
Apples and oranges
A key question for a future REF is whether it is more informative to use sector-wide comparisons or to cluster the sector into groups of comparable institutions, as the KEF does. There is a case to be made that while much has changed since 1986, the principle of comparability between providers has been prioritised through uniformity of assessment. In tandem, the growth of the higher education system and the demands placed on research beyond intellectual enquiry for public benefit, as an engine for economic growth, a tool for social progress, an enabler of business, a lodestone for decisions on local and international strategies, and in many ways the core determinant of institutional success, mean that we are not only asking research to fulfil a lot of functions but also asking a single assessment exercise to have something to say about each of these areas.
On the matter of incentives, there is then a question about the benefit of a provider intentionally specialising in only a few research areas. For example, it may be harder for some providers to make the case for strategic specialisation when volume of activity and outputs are linked to funding formulae. It’s important not to translate sub-par research into local impact, and there should be a high bar regardless of where research is used, but I am struck by work from Nesta showing that even modest improvements in the research and innovation ecosystem around the foundational economy could make a fundamental difference to the social economy and to geographical inequalities. Nothing less than the services we see, touch, and benefit from every day.
It is perhaps then time to consider deeply whether we have a single research ecosystem in the UK or multiple overlapping ecosystems that fulfil different but complementary purposes. The KEF’s approach to clustering institutions by type, and by proxy purpose, is an interesting model to explore. If we can acknowledge that institutions differ by dint of geography, history, size, and staff composition, then there is also an inevitable difference in research focus, in the scale and place of impact, and in the need or capacity to generate citation metrics, investment, or international breakthroughs. Foregrounding this dynamic might result in a very different research assessment exercise.
There would be significant issues to work through. In no particular order, these would include: the categories of research institutions; whether these descriptions are allocated or self-described; how to fund international excellence against local impact; whether the KEF and REF could be merged in some way; whether it is right for a research exercise to become a more significant part of policy delivery; and how assessment would work with both units of assessment and types of institution under consideration.
The Metric Tide Revisited calls for consideration of significant changes over the next two assessment cycles. Despite the challenges, if the future is going to be radical, any refresh of the REF should look at comparability between institutions, and at whether there is sufficient flexibility within the current system to reward different kinds of outputs, impacts, and environments.
These are interesting and valuable suggestions about what the REF might learn from the KEF, in terms of clustering and the subtlety of assessment this would enable. It raises the question of whether to take a similar perspective on the TEF, given that the way the TEF is run, and its outcomes in particular, have even less subtlety and sophistication than the REF. A nuanced approach that learns the lessons of the KEF would be welcome across all the Frameworks.