As Research England prepares to consult on the development of the forthcoming Knowledge Exchange Framework (KEF) later this month, I’ve found that one of the more challenging aspects to explain well is the clusters of “KE peer groups” we are proposing to create.
This is because the English university sector is so diverse that comparing everyone to everyone just doesn’t make sense for the KEF.
So, how have we actually gone about this?
Clusters: what, why, how?
Creating clusters of universities is nothing new, with similar work dating back to the 1970s.
One of the best-known examples of clustering is the Carnegie classification of US universities, first created in 1973. As Tomas Coates Ulrichsen notes in his new technical report published today, this classification was “created in response to a realisation by the Carnegie Foundation … that there was no classification system … that differentiated institutions along the key dimensions that were important to its work and that this limited their ability to make appropriate recommendations on the major issues facing the sector.”
HEFCE used clusters to inform evaluations of its Higher Education Innovation Funding (HEIF), where research intensity was used to understand how HEIF money was spent in different types of institution.
One of the aims of the KEF is to provide universities with new ways of understanding and improving their performance in knowledge exchange. So our motivation is basically the same as Carnegie’s – doing something to increase our ability to understand and draw useful conclusions, in this case, from knowledge exchange data. Clusters are therefore just another lens through which we can look at something, providing the opportunity to draw out similarities and differences, and make more meaningful comparisons.
Careful comparisons
But how do we decide which characteristics might create a useful lens for our purposes? One approach might be to gather as much data as we can about universities, put it all in a big virtual bucket, deploy some statistical techniques that measure similarity, and see what emerges. But this is time-consuming and the results might not allow us to say anything useful about differences in knowledge exchange performance which, after all, is what the KEF is trying to achieve.
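For the curious, here is a minimal sketch of what that “big virtual bucket” approach might look like in practice. It is purely illustrative and reflects nothing from the technical report: the data is random, and the choice of k-means with five clusters is my own assumption.

```python
# Illustrative only: a naive "big bucket" clustering of institutions.
# Random stand-in data; these variables are NOT the KEF characteristics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Stand-in data: 130 institutions, each described by six arbitrary
# characteristics (think income, staff numbers, research volume...).
institutions = rng.random((130, 6))

# Put everything on a common scale so no single variable dominates.
scaled = StandardScaler().fit_transform(institutions)

# Let a similarity-based algorithm propose groupings and see what emerges.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)
for label in range(kmeans.n_clusters):
    print(f"Cluster {label}: {(kmeans.labels_ == label).sum()} institutions")
```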
Alternatively, we could pick characteristics we think represent the differences we are trying to uncover, or that will help us to make a particular point about something. But then we run the risk of just reinforcing existing ways of thinking.
Or we could do a bit of both. It should be no surprise that as a funder distributing in excess of £2bn a year to English universities (including £210m for knowledge exchange) and administering the Research Excellence Framework, Research England likes to spend a good deal of time thinking about the effectiveness of our funding. Our approach to the KEF clusters has therefore been to build an evidence-based conceptual understanding of what we’re trying to achieve, which in turn drives the selection of characteristics we are using to create the clusters.
This is covered in more detail in chapter two of the new technical report, but essentially we are proposing clustering universities based on their assets and capabilities to do KE.
Institutional mission diversity
A simple example of what we mean is that if a university doesn’t do any research, evidence shows that its ability to create intellectual property that can be commercialised is much more limited than that of an institution doing large volumes of research in STEM disciplines.
So, if we are clustering on what one could call the inputs to KE, the KEF then becomes about measuring outputs: how effective is a university at translating its assets and capabilities into making a difference to society and the economy via knowledge exchange activities?
A whole set of other things has also been considered, covered in detail in the technical report, which I’d urge you to read. These include the role of a university’s size and ensuring that the characteristics we choose aren’t correlated with each other (a rough sketch of that kind of check follows below).
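To make that last point concrete, here is a toy check for correlated characteristics. The variables below are hypothetical stand-ins, and the 0.8 threshold is an arbitrary choice of mine, not anything from the report.

```python
# Illustrative check that candidate clustering variables aren't strongly
# correlated with each other. All variable names here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 130  # stand-in for the number of English HEIs

research_income = rng.gamma(2.0, 10.0, n)
# 'total_income' is deliberately built to track research income,
# so the check below should flag the pair.
total_income = 3 * research_income + rng.normal(0, 5, n)
stem_share = rng.uniform(0, 1, n)

df = pd.DataFrame({
    "research_income": research_income,
    "total_income": total_income,
    "stem_share": stem_share,
})

corr = df.corr()
print(corr.round(2))

# Flag any pair of variables correlated beyond an arbitrary threshold.
threshold = 0.8
flagged = [(a, b) for i, a in enumerate(corr.columns)
           for b in corr.columns[i + 1:]
           if abs(corr.loc[a, b]) > threshold]
print("Highly correlated pairs:", flagged)
```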
Importantly, Tomas Coates Ulrichsen of Cambridge University notes:
“By focusing on structural characteristics of HEIs rather than KE performance, the approach deliberately avoids making any value judgement that one group is somehow ‘better’ than another; rather it identifies groups that are structurally different from each other.”
But to be clear, the proposed KEF clusters, and the proposal to compare universities to the average of their cluster, are not intended as something that will let certain clusters off the hook or allow universities to explain away poor performance.
In fact, they do the opposite. Comparing everyone to everyone in one large group makes it harder to distinguish between each member, especially when there are large variations across the group. Small but important differences get lost. Therefore, as the analogy of a lens implies, clusters also allow us to magnify differences and pick out finer details, allowing better differentiation of performance within the cluster group.
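As a numerical illustration of that lens effect (with invented figures, not real KEF data): a university can sit close to the sector average on a metric yet be well below the average of its structural peers.

```python
# Invented numbers: one KE metric for nine universities in three clusters.
import numpy as np

metric = np.array([1.0, 1.2, 0.9,    # cluster A
                   5.0, 5.5, 4.8,    # cluster B
                   9.0, 9.5, 5.2])   # cluster C: the 5.2 is the case to watch
clusters = np.array(list("AAABBBCCC"))

focus = 8  # the cluster-C university scoring 5.2
vs_sector = metric[focus] - metric.mean()
vs_peers = metric[focus] - metric[clusters == clusters[focus]].mean()

print(f"vs sector average: {vs_sector:+.2f}")  # about +0.5: looks unremarkable
print(f"vs cluster peers:  {vs_peers:+.2f}")   # -2.70: clearly below its peers
```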
How you can help
As part of the KEF consultation due to be launched soon, we’ll be asking for feedback not just on the output of the clustering process but also on our methods and choice of variables for the KEF. Is this approach a good idea? Do you recognise the other universities in your cluster as knowledge exchange peers or not? Do you think we’ve missed something important, or do you simply not think clusters work for the KEF?
It is also important to note that this report is the output of the modelling and does not necessarily represent the final design of the KEF. For example, in the conclusions, Coates Ulrichsen notes some challenges:
The STEM and social science & business clusters of HEIs have very few members (with nine and four members respectively) … Research England will need to reflect on how to fairly treat these specialist institutions alongside the much larger number of broad-discipline HEIs.
We accept this observation and will use the consultation period to work on this with the individual institutions in these clusters.
Today, we have also published a summary of the responses to the call for evidence we ran earlier this year, and a note on our approach to selecting metrics and why we’re not proposing to use some UKRI data.
A top tip for influencing policy here though: if you hate it, that’s fine, you can tell us and we won’t be offended, but it’s always tremendously useful to hear your ideas on what we should do instead.