Which is the best university? It’s a seductive question to ask, but that doesn’t mean there’s a sensible answer. League tables, aka rankings, are the nonsensical answer you’re likely to get.
They weigh the wrong factors – a very narrow idea of “best”, based on counting what’s measured rather than measuring what counts. Traditionally, this has meant rankings dominated by research-led institutions.
But even if the factors weighed were the right ones, the rankings use poor proxies to measure them – as if research citations, for example, were an unambiguous marker of quality, rather than being hugely dependent on publication in English, in the right journals and in the right disciplines.
But even if they were the right proxies, the data is often of poor quality: out of date, non-comparable, self-reported.
But even if the data were good, what rankers do with it isn’t: aggregating and weighting arbitrarily.
But even if the methodology were sound, the way the results are presented suggests that the distance between, say, first and thirty-first place is the same as that between fortieth and seventieth. Anyone who has ever seen a bell curve knows that this is a misrepresentation.
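To put rough numbers on those last two complaints, here is a minimal sketch in Python – the institutions, scores and weights are entirely invented for illustration, not drawn from any real ranking:

```python
# A minimal sketch, with entirely invented numbers, of two complaints:
# arbitrary weighting, and the false equal-distance of rank positions.

import random

# (1) Arbitrary weighting: hypothetical (research, teaching) scores.
institutions = {"Alpha": (92.0, 61.0), "Beta": (74.0, 88.0), "Gamma": (73.5, 88.5)}

def league_table(w_research, w_teaching):
    """Order institutions by a weighted sum of their two scores."""
    return sorted(institutions,
                  key=lambda n: w_research * institutions[n][0]
                              + w_teaching * institutions[n][1],
                  reverse=True)

print(league_table(0.7, 0.3))  # ['Alpha', 'Beta', 'Gamma']
print(league_table(0.3, 0.7))  # ['Gamma', 'Beta', 'Alpha']
# Same data, two equally defensible weightings, two different "winners".

# (2) Ranks flatten the bell curve: simulate 200 normally distributed
# overall scores and compare two equal-sized gaps in rank.
random.seed(1)
scores = sorted((random.gauss(60, 10) for _ in range(200)), reverse=True)
print(f"1st vs 31st:  {scores[0] - scores[30]:.1f} points apart")
print(f"40th vs 70th: {scores[39] - scores[69]:.1f} points apart")
# Thirty places near the sparse top of the curve span a far wider score
# gap than thirty places in the crowded middle -- yet the published list
# presents the two intervals as identical.
```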
But even if league tables didn’t make all these mistakes and more, their worst crime is to imagine that there is such a thing as a single best university, rather than many different ways in which universities can be good at different things. Indeed, it is the very diversity of the higher education sector that is its strength. It means the sector as a whole can paint a rainbow of objectives catering to the divergent needs of particular students, communities, employers, economies and societies.
No platform for rankings
You can’t ban league tables, sadly. If we want information about higher education to be transparent, then there are those who will put it in a pop chart. That will attract attention, because offering an answer to that “best university” question is sexy.
The answer might not be to have fewer league tables, but instead to have more: an infinity of rankings so that each person can pick the one that combines just the factors they want, weighted perfectly to their needs. No ranking would be authoritative, because the array would reflect the personal and diverse nature of the question.
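As a rough sketch of what that might look like in practice – institution names, factor names, scores and weights all hypothetical, not any real ranker’s data – each applicant would supply weights for only the factors they care about and get a table of their own:

```python
# A rough sketch of "an infinity of rankings": every applicant supplies
# weights for only the factors they care about and gets their own table.
# All names and numbers here are hypothetical.

HYPOTHETICAL_SCORES = {
    "Alpha": {"teaching": 61, "research": 92, "sustainability": 55},
    "Beta":  {"teaching": 88, "research": 74, "sustainability": 80},
    "Gamma": {"teaching": 88, "research": 73, "sustainability": 95},
}

def personal_ranking(weights):
    """Rank institutions by a weighted average of the chosen factors only."""
    total = sum(weights.values())
    def score(name):
        factors = HYPOTHETICAL_SCORES[name]
        return sum(w * factors[f] for f, w in weights.items()) / total
    return sorted(HYPOTHETICAL_SCORES, key=score, reverse=True)

print(personal_ranking({"teaching": 3, "sustainability": 1}))  # Gamma first
print(personal_ranking({"research": 1}))                       # Alpha first
# Two applicants, two different "number ones" -- and, as argued above,
# neither table is any more authoritative than the other.
```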
THE’s latest rankings product (its Global Impact Ranking) is a step in the direction of infinity in that it adds another league table to the shop window, incrementally diminishing the value of the ever-increasing heap.
However, perhaps we should welcome the desire to rate universities according to criteria such as recycling, fair labour practice and admissions policies, even if the process is as flawed as all the others? After all, the sexiness of rankings does shine a light on issues that might get overlooked (especially when the desire to do well in other rankings distracts universities from considering what else matters).
TEF: just another ranking?
That was explicitly the government’s intention when it introduced its own form of ranking – the Teaching Excellence Framework (TEF), which the then-minister Jo Johnson said would “introduce new incentives for universities to focus on teaching”. The idea was to rank universities’ teaching quality to get them to improve it and to drive student choice based on quality.
The problem is that TEF repeats the mistakes of other rankings. It weighs the wrong factors: the metrics (as was later acknowledged when the name was changed to include student outcomes) have little to do with teaching. It uses poor proxies, such as measuring employment rather than employability. The data is poor: the NSS component was downgraded after an NUS boycott undermined it. The methodology is arbitrary: benchmarking by discipline, for example, but not by region.
The list goes on, but TEF is unlike other rankings in at least three respects. First, being the government’s own ranking, TEF bears more responsibility than most. It purports to be a truer truth – an authority that it hasn’t earned.
Second, most league tables – even though they are rarely entirely open about their methodology – do tend to stick to it. TEF, however, recognises the failings of its metric methodology and adds a subjective element: the review panel. It may be the best part of TEF, but it’s the least transparent and most susceptible to inconsistency.
Third, most league tables’ misrepresentation is a single hierarchical list. TEF retains the hierarchy, but shrinks distinctions to three categories: good (bronze), better (silver) and best (gold). This, of course, creates a cliff edge where a fine judgement between silver and bronze, say, translates into a presentational gulf.
Informing student choice
Interestingly, there is no “mediocre” or “bad” in this hierarchy, but that’s not how students see it. Bronze is no one’s idea of an endorsement. This highlights an absolutely critical issue about rankings – TEF included – one that would remain even if they were more rigorous: how do they inform student choice?
Human choices are rarely rational. They emerge from a soup of feelings and preconceptions, sprinkled with croutons of information fried in confirmation bias. When it comes to a complex decision, such as which university to choose, we don’t devise a personal list of criteria, source objective data on each, and then coolly and fairly appraise the options against one another. Instead we latch on to something that provides a basis for beliefs we already hold.
In other words, we use heuristics: rules of thumb that often bear little resemblance to nuanced realities, but which hurt our brains less. This is precisely the quality of league tables that makes them so sexy. They say: don’t you worry your head about the real differences between two institutions that are both good in their own way; we’ve made the whole process simpler. Misleading, but simpler.
The same is true of TEF. Rather than providing information that disrupts misplaced beliefs and encouraging students to examine what kind of educational experience will support their own learning, TEF short-circuits the thinking and provides a yes/no/maybe checklist.
The government was right to shine a light on teaching (well, on learning), but not the seedy neon beam of TEF. There are other approaches and, as Dame Shirley Pearce proceeds with her review of TEF, I hope she will think boldly about options that promote diversity and innovation rather than aping league tables that suppose there is a single model of “good” and play darts to see who gets closest.
Well said!
TEF is a lousy proxy for teaching quality and any such one-dimensional rating is totally counterproductive in informing student choice.
The one plus point is applicants probably won’t look at it…
In full agreement. There will be many ‘bronze’ institutions that make a real difference to the life chances and esteem of their students. How does any of this quasi-normative nonsense measure the real experience and context? It is really all about appeasing middle class parents. Nudge theory has a lot to answer for!
Broadly agree with this article, though perhaps a bigger issue is the self-appointed and anointed Russell Group. The impact of this group’s reach on the sector far outweighs any discussion of league tables or TEF ‘medals’. It is pernicious at all levels – policy, funding, research, recruitment and media profile. It casts an immovable shadow across the ambitions of parents, pupils and teachers in sixth forms across the UK.
Your main assertion is correct; by and large the overall quality at most UK universities is above average and of international standard. The local squabbling that all these tables and measures create (NSS?!) is immense, and it diverts effort away from improving the student experience and students’ ability to contribute positively to society after three years at university.
Universitas 21 released the 2019 Ranking of National Higher Education Systems last week. The national ranking of systems reflects the aims of higher education – education and training of a nation’s people, contributing to innovation through research, and facilitating interconnections between tertiary institutions and external stakeholders, both domestic and foreign. Perhaps this report would be more palatable to you? https://bit.ly/2rN0Y6y
Thanks for this Johnny. Of course there’s the whole separate thorny issue of subject departments versus core governance in universities, which one gets to see on accreditation panel visits. So many different stakeholders in the HE sector! We need an overall quality strategy on lifelong education, which you’d think a single Ministry might provide…
Brilliant article.
Stella, I’m confused why you appear to blame ‘nudge theory’ for rankings or TEF. I would say they fly in the face of what behavioural economics teaches us about decision-making. They are heuristic and nudge people towards poorly informed choices based on stereotypical preconceptions of a ‘good’ university.
A proper nudge-based approach to university metrics would make it *easier* for applicants to use data that relates to what might be relevant to them rather than easier to use heuristics.