Today Research England has published the first iteration of the Knowledge Exchange Framework (KEF).
It presents data and narrative information about the knowledge exchange activities of English higher education providers (HEPs) in a set of interactive dashboards co-developed with Jisc. This publication marks the end of a process that has involved some rewarding consultation and piloting work with the sector, but also some challenging choices and Covid-related disruption. And in some ways today marks the proper start of the KEF.
So now seems a good time to reflect on that design process, but also on today’s publication and what we’re planning next.
Overarching considerations
Back in the heady days of 2019, when we didn’t have to remember to unmute ourselves before talking to each other, the KEF pilot exercise brought together a group of institutions to explore the proposed design and metrics of the exercise. I found these workshops very rewarding; they yielded some great discussions that led to quite a few tweaks. The findings, along with the decisions for this first iteration, were published in January last year.
Throughout the design process, the main principles we’ve tried to keep in mind have been the need for fair comparison across a very diverse sector, and the need to make the KEF as low-burden an exercise as possible while still providing useful information.
For a fair comparison, it was clear early on that the diversity of the sector, and of the types of knowledge exchange (KE) activity providers undertake, meant that a simple sector-wide league table with a single score would be of limited use. We’ve tried to address this using clusters (which group institutions with similar characteristics together) and normalisation to account for differences in size. We’ve also tried to capture a range of types of knowledge exchange (termed “perspectives” in the KEF), ranging from commercialisation and working with businesses, to engagement with the public and communities, the provision of skills training, and the roles providers play in local growth and regeneration.
These perspectives each have a set of metrics behind them, and each provider’s result is assigned a decile from one to ten and expressed alongside the average decile for its cluster. It is important to emphasise that the cluster averages are not ‘benchmarks’ and we don’t expect providers to meet or exceed them all. Indeed, you’ll see from the results that most providers are above the average in some perspectives and below it in others. This is a normal result and reflects the varied strengths of providers, as well as their missions and strategic focus. Note too that whilst deciles are calculated in relation to all providers, it is the result relative to the provider’s cluster average that matters.
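To make the scoring mechanics concrete, here is a minimal sketch (in Python, using pandas) of how a decile-and-cluster-average comparison of this kind could be computed. The provider names, clusters and metric values are invented for illustration; this is not the KEF dataset or the exact KEF calculation.

```python
import pandas as pd

# Illustrative sketch only: the providers, clusters and metric values below
# are invented, and this is not the actual KEF data or methodology.
df = pd.DataFrame({
    "provider": list("ABCDEFGHIJ"),
    "cluster":  ["X", "X", "X", "Y", "Y", "Y", "Z", "Z", "Z", "Z"],
    "metric":   [0.2, 1.4, 0.9, 3.1, 2.2, 5.0, 0.7, 4.4, 2.8, 1.1],
})

# Deciles are calculated in relation to ALL providers (1 = lowest, 10 = highest).
df["decile"] = pd.qcut(df["metric"], q=10, labels=False) + 1

# Each provider's decile is then shown alongside its cluster's average decile;
# the reading that matters is provider versus cluster average, not raw rank.
df["cluster_avg_decile"] = df.groupby("cluster")["decile"].transform("mean")

print(df.sort_values(["cluster", "decile"], ascending=[True, False]))
```

The key design point this illustrates is that the decile is a sector-wide rank, while the interpretation of it is local to the cluster.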
On burden, the data used for the KEF metrics comes entirely from existing returns (mostly the HESA HE-BCI survey) or other existing sources, so there has been no additional data collection burden. But for some types of KE, there was a feeling that existing data did not give an adequate picture on its own. I think this was particularly true for the Local Growth & Regeneration perspective. The role of universities in their local areas, the civic agenda and the government’s focus on levelling up have gained prominence in recent years, but I’m not sure we have a well-developed idea of the range of activities undertaken, or of what good looks like. This will be a critical part of our work going forward, taking account of new Government priorities and insights.
Challenging choices
We therefore asked providers to submit two narrative statements describing their role in local growth & regeneration, and in public & community engagement. The narrative templates provided were designed to encourage structured, evidence-based accounts of these areas. Whilst submission was optional, we saw a great response, with most eligible institutions opting to submit narratives to contextualise their results.
Of all the changes made as a result of the consultation and pilot exercises, possibly the most substantial was the move to a provisional self-assessment exercise for public and community engagement, the results of which were used to derive the score for this perspective. We worked with the National Coordinating Centre for Public Engagement (NCCPE) on this and are grateful for their support.
But we appreciate that this is not an easy thing to do. We tried to make the guidance as clear as possible, but there will always be a degree of interpretation and subjectivity in such an assessment. We also appreciate the difficulty of deciding on an appropriate score without the opportunity to “calibrate” across the sector. I was reassured to see a range of self-assessment scores but, together with the NCCPE and as part of the wider KEF review, we will be looking very closely at how providers have evidenced their scores, and at whether there are options for evolving and improving this aspect.
While we do not underestimate the effort required to craft a really good narrative, or to collate relevant information from across the organisation (especially during a global pandemic!), we do feel the additional burden of doing so is justified by the result: there are many rich descriptions of strategies, activities and results, as well as thousands of links to further information.
The narratives are an integral part of the KEF and I’d strongly encourage you to read them alongside the data. I was impressed by the range and volume of activity that all types of institutions are undertaking in their local areas, and we will certainly be carrying out further analysis of these narratives. We hope others will do the same.
What now?
We are keen to emphasise that this is the first iteration of the KEF, and we have always been clear that we are doing our best to create something useful with the data and information available to us. We’ve tried to make decisions about the data, the narratives and how they are all presented as interactive dashboards in a transparent and fair way. And we do not claim that the KEF is a definitive, complete representation of KE performance: there will clearly be opportunities to change, evolve and improve aspects of it.
To this end we have already started a review of this first iteration of the KEF, which will continue over the summer and be published in the autumn. As well as providing the opportunity for immediate feedback via a short survey embedded in the KEF website (which we very much encourage you to complete), we will be engaging with the sector in a structured way at various points in the review. It will cover all aspects of the KEF, from the detail of the metric calculations, to how the results are presented and what changes are happening as a result, and ultimately how well it is fulfilling its stated aims. We’re also particularly keen to look at how we might develop new ways of capturing types of knowledge exchange not well represented in the data at present, such as public policy engagement. And as always, we will do this in a collaborative, open way.
But for now, we’ve been encouraged by the positive engagement from the sector during the development process, and by anecdotal evidence of the KEF driving a renewed focus on knowledge exchange. We very much hope this continues as the KEF develops.