Reading Amanda Solloway on the REF
David Kernohan is Deputy Editor of Wonkhe
If you take Michelle Donelan’s “bureaucracy” kick as an attempt to operationalise the letters page of the old THES circa 1998, action on REF (or RAE, as it was then) was clearly the other shoe that was waiting to drop.
I was expecting something akin to Gordon Brown’s metrics-led shakeup in 2008, without the consciousness that such approaches have been tried and found wanting for anything other than supplementary information in a handful of science units of assessment. What we got today was something less certain, but much more interesting.
Amanda Solloway – who has thus far had the same impact on science policy as Jason Newsted had on Metallica – has been listening to some academic critiques of REF. The framework is a complex and much-mythologised beast, to the extent that many commonly held beliefs about REF are absolute nonsense.
“Researchers tell me they feel pressure to publish in particular venues in order to gain the respect of their peers, which wrongly suggests that where you publish something is more important than what you say. That just can’t be right.”
It is not right. I’ve said this so often that I’m tired of saying it, but here goes.
REF doesn’t care where you publish
REF doesn’t care where you publish. Your choice of journal matters not one iota to the REF. Journal Impact Factors are not a thing. Whatever “citation impact” may be, it does not play a part in REF. REF is a system based on the peer review of academic outputs – it doesn’t matter what the outputs are, or where they are published. For the biggest chunk of the assessment process (glossing over the “impact” and “environment” measures) it only matters what other academics make of it. Nobody cares about your h-index either; put it away, boys.
Amanda Solloway has fallen a long way down the REF rabbit hole:
“People talk to me about “REF-able publications” – a total distortion of the value of research and a constraint on the diversity of research objectives.”
There is no limit to what is “REF-able” – excepting the very reasonable expectation that research funded by the public is, for the most part, readable by the public (and even here, there are numerous get-out clauses). You can return a symphony to REF, or a public event, or a dataset. Whatever works. Academics do tend to like articles and monographs, as these are good ways of conveying complex information accurately and quickly – once you have learned a few basic rules, reading an academic paper is not a complex endeavour.
Men made of straw
Far be it from me to offer wider political critique, but people in positions of power inventing a pretend bad thing (or riffing off a bad thing someone else has invented) for political gain has been a feature of recent political life in the Anglosphere. As delighted as I am that these lines suggest that we’re not going to get Gordon Brown’s fully automated luxury RAE any time soon, I’d rather keep research policy conversations among the “fact-based” community.
The meat of the announcement could have been contained in a single tweet:
“I have today written to Research England to ask them to start working with their counterparts in the devolved administrations on a plan for reforming the REF after the current exercise is complete.”
It is customary to review REF/RAE after each iteration – and new ideas are taken on board as the system is tweaked – note, for example, the impact of Stern on REF2021. I was pleased to note that the Forum for Responsible Metrics had cut through, as had wider international initiatives on research assessment. The way in which some providers use the mythologised REF to promote anxiety and competition is long overdue for reform.
There is a consultation coming. Based on the speech it is difficult to see much more than tweaks around the edges being on the cards. But stranger things have happened. I’d wait for the letter.
Where you publish is not *supposed* to matter for REF, but peer review is not perfect and a lot of reviewers are heavily influenced by the impact factor of the journal the work appears in. And reviewers do take into account the number of citations a paper has received, which is heavily linked to the journal, irrespective of the quality of the work.
Yes. The evidence would show that – on balance – journal impact factor correlates positively with better REF grades. But that doesn’t necessarily mean it’s due to reviewers making assumptions based on place of publication before reading the output. Might it be that outputs which have been through more robust and extensive peer review in order to get published in, say, Nature are just better outputs in the way that REF sees them, i.e. the academic status quo?
I heard something different in her speech, namely that the culture of research excellence in universities, as represented by the power of the REF, was getting in the way of prioritising capacity building and delivering on the government’s industrial strategy GDP target, levelling up and place agendas. What was also telling was that she said nothing about the KEF.
The announcement reads as if she has repeated some feedback received from researchers and conflated the problems around linking academic achievement to publication practices with the process of REF itself. REF could actually be an important influence in changing the culture around reputation, and around research evaluation more generally, but to make that sort of change you’d have to start listening to the people involved in administering it as well as the researchers who are “victims” of the bureaucracy.
I listened to the speech, too, not least because I am just revising my latest, and last, article on research quality assessment. Maybe that is why I heard things that you did not.
Even if researchers believe that excellent content will be judged as such wherever it appears, as REF policy says, senior managers do not, and they press for prestige publication, which may delay the appearance of important work. I try to convince colleagues and other younger researchers of the stated policy, but they are sceptical.
I welcomed her hints that there is a diversity of excellences and, not in the transcript, that globally focused work is not the be-all and end-all, and certainly not a requirement for 4* quality, while local and regional impact is essential.
She was right that the exercise encourages a low-risk culture of conformity and compliance with the established canon, when work on Clark’s ‘development periphery’ challenges received wisdom, as research should, but is, in many cases, assessed by those whose received wisdom is being challenged – look at the balance of membership of the panels and the membership of the Stern Committee, and see Sharp and Coleman (2005) on the links between panel membership and the grades awarded by panels to staff in members’ institutions.
The exercises have pushed towards limiting the diversity of outputs – Jon Adams’ work records the shift to the dominance of journal articles as the way to publish, even in the creative arts, despite the stated policy of not distorting activity.
Amanda Solloway has committed to wide input into the review, as was done in Australia, whereas Stern was limited to one submission per institution. She shows a willingness to learn from elsewhere, too, when hitherto there has been evangelism to copy the UK model – rejected by the Bourke Report in 1997 on the basis of my review of impact for HEFCE, and never pursued since. Perhaps the Netherlands model – assessing teaching and research together (the Humboldtian harmony thesis), visiting institutions rather than relying on self-promoting paperwork, and not linking outcomes to funding – offers a possibility. So do Australia and New Zealand, in promoting the development of research – quality and quantity – and of newer researchers. That would make a difference, and justify 25 years of work speaking truth to power!