The best bit to read in any league table publication is the methodology section.
The Good University Guide (GUG), out this morning in The Sunday Times (and available online since Friday), sets out in some detail how the rankings and scores are derived from public data. This allows the data-curious higher education observer to reconstruct tables (or even, if you are so inclined, plot them in advance).
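For anyone inclined to attempt that reconstruction, the mechanics are straightforward: pull the published provider-level metrics, standardise each one, apply the guide's stated weights, and sort. Here is a minimal sketch in Python – the file name, metric columns, and weights are illustrative stand-ins for whatever the methodology section actually specifies, not the GUG's real numbers.

```python
import pandas as pd

# Hypothetical provider-level metrics drawn from public sources (HESA, NSS, REF).
# Column names and weights are illustrative stand-ins, not the GUG's actual methodology.
metrics = pd.read_csv("metrics.csv", index_col="provider")

weights = {
    "nss_satisfaction": 0.25,
    "entry_tariff": 0.20,
    "graduate_prospects": 0.20,
    "completion_rate": 0.20,
    "research_quality": 0.15,
}

# Standardise each metric (z-scores) so differently scaled measures can be combined.
z = (metrics - metrics.mean()) / metrics.std()

# Weighted sum of standardised metrics, then rank (1 = top of the table).
composite = sum(w * z[col] for col, w in weights.items())
print(composite.rank(ascending=False).sort_values().astype(int).head(10))
```

Swap in the weights and metric definitions from the methodology section, take a bit of care over missing data and ties, and you get something very close to the published order – which is what makes plotting the table in advance possible.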
Here’s how that GUG ranking compares to last year – note in particular that there are comparatively few large shifts in placings for individual providers, especially towards the top end of the table.
Getting to the numbers
As you’d expect, the majority of the data underpinning the GUG ranking comes from HESA – with a leavening from the 2021 REF results and the National Student Survey. This pattern is similar for the other two well-known UK tables – the Complete University Guide (CUG) and the Guardian University Guide (Guardian) – but there are important differences in emphasis and weighting.
For example, the Guardian table does not cover research performance at all, but is the only one of the three to include any aspect of widening participation performance (the value added score). CUG, in contrast, puts a greater emphasis on provider spending in key areas and draws a higher proportion of its overall score from the National Student Survey.
Little earthquakes
As I’ve noted before, these tables are all designed to produce small earthquakes – enough movement to allow us to read narratives into the scores, but not so much that the scores start to seem arbitrary. As most tables tweak their methodology each year (this year has seen a round of compensations for unusual data resulting from the Covid-19 pandemic), we do have to see narrative construction – or hierarchy maintenance, to put it another way – as a key goal.
A great case study here is the Complete University Guide, which wonderfully makes 14 years of historic data available to let you see the sweep of change (or not) over a longer period. Have some universities just got “better” over that time? Have some got “worse”? Why have most stayed the same in relation to each other?
The ur-hierarchy that each table hints at is not one that is ever recorded, but it is very widely understood. Someone should do some serious social science research asking members of the public to rank every UK university (I’d do a paired-comparison exercise) and look for commonalities – if you happen to run a major polling company with the resources to field a representative n=1000 sample every year, this would allow you to gazump every other league table in existence.
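For what it's worth, turning pairwise public judgements into a single order is a well-understood problem. The sketch below fits a simple Bradley-Terry model to hypothetical head-to-head responses – the provider names and judgements are invented purely for illustration, and a real exercise would of course need the representative sample described above.

```python
from collections import defaultdict

# Hypothetical pairwise judgements: (preferred, other) from members of the public
# asked to pick between two universities. Entirely invented data.
judgements = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("A", "B"), ("C", "B"), ("A", "C"),
]

providers = sorted({p for pair in judgements for p in pair})
wins = defaultdict(int)      # total times each provider was preferred
contests = defaultdict(int)  # number of comparisons per unordered pair

for winner, loser in judgements:
    wins[winner] += 1
    contests[frozenset((winner, loser))] += 1

# Bradley-Terry strengths fitted by simple minorisation-maximisation updates.
strength = {p: 1.0 for p in providers}
for _ in range(100):
    updated = {}
    for i in providers:
        denom = sum(
            contests[frozenset((i, j))] / (strength[i] + strength[j])
            for j in providers
            if j != i and frozenset((i, j)) in contests
        )
        updated[i] = wins[i] / denom if denom else strength[i]
    total = sum(updated.values())
    strength = {p: s / total for p, s in updated.items()}

# Higher strength = higher in the ur-hierarchy.
for rank, (p, s) in enumerate(sorted(strength.items(), key=lambda kv: -kv[1]), 1):
    print(rank, p, round(s, 3))
```

Run annually on a representative sample, the fitted strengths would approximate the kind of widely understood but never recorded hierarchy described above.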
Individual universities will often target ranking improvements in a single league table as a key performance measure. Though the very fact of this is depressing (if not as depressing as one provider’s stated aim to be seen as a “middle-ranked Russell Group university”), it makes a kind of twisted sense if the priorities of the league table compiler match the priorities of the senior management team.
A nationalised league table?
But what’s striking, thinking about this in 2022-23, is how regulation (in England at least) has been influenced by league table processes.
If you look at the Office for Students B3 conditions, you can trace a familiar pattern in the use of continuation, progression, and completion metrics. Access and mobility measures, disaggregated to the very limits of statistical significance, are used in monitoring and in developing plans, while the NSS crops up in the new TEF. The proportion of students achieving a good degree also appears in more ad hoc monitoring and investigations.
The data underlying the range of dashboards that will begin being populated next month could very easily be reconfigured as a single state-backed league table. Even before we compiled it, the policy decisions would be fascinating. The government would have to come up with a single, aspirational definition of what makes a good or desirable provider of higher education, and to strike a balance between responsiveness to employer needs, research power, teaching excellence, student experience, and the nebulous question of what makes a “traditional” university.
The late, lamented Unistats hinted a little in this direction, but do we really want a state-backed league table? Such a decision would be a huge intervention in the market, with global repercussions – it would affect the flow of billions of pounds of fee and collaboration revenue. But it would also place problematic values and assumptions in the foreground, rather than allowing them to influence policies behind the scenes.
Fundamentally, league tables are (and indeed, regulation is) a great way to measure a provider you are interested in against criteria you already understand. We need to emphasise that latter part. And some thought as to which metrics link to matters that are actually under the direct control of a given provider would be welcome.
Imagine we had a yearly “fruit league table”. Comparing apples and oranges and 100 other fruits. Sounds preposterous? Indeed.
It would be really boring. Avocados would be top every single year. Even producing little earthquakes in the methodology would not budge them off top spot.