There is no objectively right weighting for the people, culture and environment element of REF 2028.
One of the difficulties in alighting on the right weighting is that measuring people, culture and environment is different to measuring impacts and outputs. People, culture and environment is an input. It is possible, albeit statistically unlikely, to be a great place to work and produce less-than-stellar research. Equally, poor workplaces can produce good research. And, to add further complication, everyone has a different view of what constitutes a good work environment or a good work culture.
The metrics underpinning the people, culture and environment strand will therefore have to come down to what can reasonably be measured, and the weighting to what can command the confidence of the sector as a whole. It is both a gut feeling and a statistical test. Or, as Executive Chair of Research England Jessica Corner wrote in a recent blog:
While we are confident that a set of reliable indicators can be developed, we may reflect on the relative weighting of the People, Culture and Environment element, depending on the evidence from the work to develop indicators.
Developing a good research culture is a responsibility universities have not only to their employees but to the research sector as a whole. Every current or future academic who decides to do something else with their life because of the culture and environment of their workplace is a loss of potential, a loss to the university sector, and a loss to the global research ecosystem.
This means that Research England has to tread a careful line between recognising institutional and disciplinary performance and incentivising a more sustainable research system through improving research cultures.
Whose culture is it anyway
The case that poor cultures lead to poor research is well established, if not entirely uncontested. Wellcome’s work What researchers think about the culture they work in recognises that pressure to do work which is easily measurable can lead to lower-quality work and superficial outputs. This is within a context in which a number of researchers reported sacrificing their own wellbeing to maintain research quality.
It is, however, also true that it is difficult to draw a straight line between having a good culture and producing good research outputs. Elizabeth Gadd wrote in Wonkhe that
Research culture is a hygiene factor. We need to set the standard below which we must not fall, rather than making research culture the next big competition in Higher Education. It’s about stemming the loss: the loss of good people (through lack of diversity, poor leadership, toxic behaviours, lack of career paths, recognition, and reward) and the loss of quality (through questionable research practices, closed and irreproducible research), and not a short-cut to gain.
To expand this argument further, it seems unlikely that, if a robust and responsible set of metrics can be developed, putting more emphasis on people, culture and environment will lead to worse research overall. There is, of course, a risk of “gaming”, but this is also true of any number of assessment exercises in higher education.
On metrics and measurement
Jessica Corner is right to highlight that confidence in indicators should go hand in hand with the overall weighting of each element of REF.
Research culture can be measured through a set of proxies. There is a wealth of HESA data on who is working in HE, innumerable staff surveys, and plenty of work on research robustness. The expectation is that providers will also produce their own metrics and insights addressing their own internal shortcomings. All of this is up for consultation, but it is unlikely that universities would want a single measure imposed upon them.
It therefore might not be possible to measure culture in a uniform way, but that is a different question from whether Research England believes it can assess a variety of inputs in a consistent way.
Consistency is key. A shared belief that everyone is being held to the same standard and working under the same rules is the first test, before any question of whether the weighting should be 10, 20, 25 per cent, or even higher. It is my own view that a weighting of between 15 and 25 per cent feels about right: large enough to put new emphasis on a longstanding issue without totally changing an exercise that has commanded the confidence of the sector.
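To make the arithmetic behind those percentages concrete, the sketch below shows how shifting the people, culture and environment weighting would move a weighted overall score. The element scores, the weightings, and the two-to-one split of the remainder between outputs and impact are purely illustrative assumptions, not REF 2028 proposals or methodology.

```python
# Illustrative only: hypothetical element scores and weightings,
# not actual REF 2028 values or methodology.

def overall_score(scores: dict, weights: dict) -> float:
    """Combine per-element scores into a single weighted overall score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[element] * weight for element, weight in weights.items())

# Hypothetical quality scores (0-100) for the three REF elements.
scores = {"outputs": 80, "impact": 75, "people_culture_environment": 60}

# Compare a 10 per cent weighting with a 25 per cent weighting on people,
# culture and environment, rebalancing the other elements two-to-one.
for pce_weight in (0.10, 0.25):
    remaining = 1.0 - pce_weight
    weights = {
        "outputs": remaining * 2 / 3,
        "impact": remaining * 1 / 3,
        "people_culture_environment": pce_weight,
    }
    print(f"PCE weight {pce_weight:.0%}: overall = {overall_score(scores, weights):.1f}")
```

Even in this toy example, moving from 10 to 25 per cent shifts the overall score by only a few points, which is a reminder that the harder question is not the arithmetic but whether the sector accepts the indicators behind the numbers.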
Whether the decision to introduce new metrics or alter weightings is right will only become apparent in retrospect. Depending on what you believe to be the point of REF, one measure of success could be that outputs and impacts remain strong across the sector despite a greater emphasis on culture. This would suggest that the elements of REF are mutually reinforcing, not mutually exclusive. Another measure might be evidence that environments and cultures are improving.
A key measure, though, will be more of a research-by-vibes approach: fundamentally, whether in retrospect the sector has confidence that the exercise was robust, fair, and worthwhile.