After months of debate on the incorporation of research culture into the Research Excellence Framework (REF), last week saw the announcement of key details the sector has been waiting for.
Research England and its counterparts have launched a project to develop indicators for the new “People, Culture and Environment” (PCE) element – led by Technopolis and CRAC-Vitae – alongside a pilot exercise to test out PCE with a sample of institutions.
These plans will revive questions you have likely heard already: how do you define “research culture”? It’s such an abstract concept, so how can you measure it objectively? Is culture even what we should be evaluating? And if it is, is REF the place for it?
Replace the word “culture” in these questions with “impact” and you will have transported yourself back to 2009.
As HEFCE explored the possibility of measuring impact in the REF, resistance was widespread. Assessing research based on its impact represented – to some – an unwelcome shift in values, one that favoured STEM-focused research and devalued “blue skies thinking”. There was no agreed definition of impact and questions abounded as to whether and how it could be fairly measured and evaluated.
Fast forward to today and research impact has become established in REF, while remaining an area full of questions, debates and challenges – many prompted by the attention brought to the field via its inclusion in REF. What can we learn from the evolution of the impact agenda for the establishment of PCE in REF, and for the broader research culture agenda?
The establishment of impact
The challenge facing HEFCE and its counterparts was to clearly define the idea of impact in a way that was owned by the research community. They did this by actively co-creating REF impact measures with the sector over a number of years. This involved wide engagement across all relevant stakeholders via consultations and workshops in 2008-09 and a pilot of the Impact Case Study in 2010. REF guidelines were then published in 2011 ahead of the deadline for submissions in late 2013.
Impact was also introduced over two REF cycles, with its weighting increasing from 20 per cent in REF 2014 to 25 per cent in REF 2021. This iterative, responsive and gradual approach was key to building confidence and credibility, concepts central to the development of all REF requirements.
At its core the REF is a peer review process, with panels drawn from academic communities across the UK. The discipline-specific knowledge and scrutiny provided by REF panels have been key to the status of impact in the REF. Academics must be confident their case studies will be judged fairly, and the sector must see this process as credible in order to validate REF’s aims of benchmarking and funding distribution.
Impact in practice
While peer review brings confidence and credibility, it also involves subjectivity and interpretation. The criteria of “reach” and “significance” require unpacking, while the proposed addition of “rigour” for REF 2029 adds to a list of contested terms that will always confuse and confound. Impact is both an accepted and a perpetually challenged part of the REF.
The history of impact within the REF has not been a straightforward process of theorising, consulting, and implementing, but rather a story of creating imperfect criteria that are imperfectly adapted to disciplinary needs. This is the inevitable result of the ambitious aim of a nationwide exercise to evaluate research consistently and fairly.
Lessons for research culture
The funding bodies’ proposal to introduce a PCE element – based on the findings of the Future Research Assessment Programme – must articulate and refine the notion of “research culture” in the context of REF, reaching a pragmatic solution through a similarly collaborative process.
Concretising what is meant by “culture” into meaningful, specific areas that relate to research – as opposed to broader institutional environments – and account for disciplinary variation will be key.
A recent report commissioned by UKRI to evaluate UK research culture initiatives sets out a research culture framework that, according to the report, has received “considerable interest” as a potential “tool to support strategic planning on research culture … including for the Research Excellence Framework”. Given that Vitae is both one of the report’s authors and is co-leading the PCE indicators project, it seems likely this framework will feature, and perhaps evolve, as part of the indicators project’s consultation process.
The challenge will be to develop reliable measures of PCE components that carry the confidence of the sector and maintain the credibility of the exercise. A core tension is that the process of measurement risks narrowing expectations of what a “good” research environment looks like. There will be a trade-off between consistency and comparability on the one hand, and recognition of diversity across institutions and units of assessment (UOAs) on the other.
The “how” of research
Measuring what “good” looks like can also neglect how those good outcomes were generated.
In their short summaries of complex stories, Impact Case Studies have to date focused on the “what” of impact (the outcome). Proposed changes to the criteria for REF 2029, including a focus on team-based and diverse impacts, move Impact Case Studies closer to the “how” of impact (the process). Learning about the varied mechanisms that underpin impact creation can enable us to support, share and celebrate them.
PCE provides an opportunity to enrich this understanding of how to create positive research outcomes. Its broad scope covers the pipeline from research inputs (e.g. funding) and initiatives to outputs (e.g. open research) and outcomes (e.g. trained staff). This opens up the possibility for disciplines and institutions to grow their own definitions of excellence and share the “how” of that development.
It is not desirable to have a poor culture that leads to excellent outcomes, nor a healthy culture that does not. Much debate has focused on the balance between process and outcomes in the REF. PCE has the potential to improve our understanding of the relationship between process and outcomes, so that we can optimise process in a way that maximises outcomes. Going one (meta) step further, a more overt focus on the underlying conditions that create research environments can inform sectoral approaches to creating environments conducive to excellence. Taking these opportunities will help to ensure that this shift towards “how” is a positive one for the sector.
We can also use the first iteration of PCE as a platform to build towards a future refinement that creates a wealth of useful knowledge benefiting us all. A rich database of the creative and innovative initiatives that disciplines have implemented to create excellent research environments and outcomes could be a boon to the sector.
REF realism
Despite the tensions between institutions’ preferences for REF’s design and the practical limits on its implementation, REF nonetheless advances important research agendas. It has contributed to the infrastructure of support for impact and opened up conversations around partnership and career development. Where it is a hindrance, or too narrow to be helpful, we can use the attention the REF affords to broaden the conversation beyond its criteria.
Similarly, despite the inevitable imperfection of PCE’s implementation in REF 2029, the focus and funding afforded to these aspects of the research system will generate insights and momentum that have the potential to transform the way we do research for the better.