Last December, HEPI published a report, 'International university rankings: For good or ill?', by its former Director Bahram Bekhradnia. Suffice it to say, he didn't plump for good.
To sum up his criticisms, and more particularly to add some of my own: rankings narrowly assess the wrong criteria for quality; they use indicators that are poor proxies for those criteria; they rely on poor-quality data to measure those indicators; they aggregate those indicators with arbitrary weightings; they make unfounded claims about the significance of their results; they are subject to commercial influence; and they are misused and misrepresented by universities.
Rankings dress themselves up as supporting quality enhancement and student choice, while in practice all they do is encourage universities to 'game' their data and push students towards options that may be less than optimal for them as individuals.
In short, they’re a self-serving exercise in providing a veneer of respectability to anyone who – through snobbery or protectionism – wants to preserve particularly narrow notions of excellence in higher education. This is, perhaps, their worst offence. By conjuring the idea of a ‘top spot’, rankings squeeze universities into a single ideal of what ‘best’ looks like.
Better than all the rest
In higher education, there is no such thing as The Best. By pretending that there is, we discourage the very diversity that is a key strength of global higher education. Students are different. Countries are different. Universities exist for different purposes. Times change. We need a rich rainbow of ways of doing higher education to ensure it does many things well in many contexts.
But what’s to be done? After all, we do seem to love rankings – or league tables, as I prefer to call them (because it places them closer to pop charts and click-bait listicles).
League tables make good short cuts. They support insubstantial claims about quality. They provide lazy KPIs and reasons to hire or fire VCs and rectors. They provide justification to institutions that charge stratospheric international fees and to the students who pay them. Bekhradnia describes resistance as "Canute-like".
The solution to league tables, though, is not fewer rankings but more. Taken to its logical extreme, I would like to see everyone have their own ranking for whatever purpose they intend, whether they're benchmarking departmental performance against similar providers globally or they're a particular student with particular needs choosing the best university at which to study a particular course. While 'The Best' is a lie, 'best at something' can provide insight.
Enter U-Multirank, about which Bekhradnia says:
U-Multirank does not produce a single list but allows users to create their own rankings based on criteria that they select rather than those selected by the rankings compilers.
I am most definitely writing this article in a personal capacity and these are my opinions. However, I am a consultant to the consortium behind U-Multirank, which comprises mostly academics and experts in higher education metrics. As such, I'm really not comfortable with U-Multirank being described as a 'ranking'. Rather, it's a beast of a big dataset with a powerful tool for building comparisons. It's not so much a ranking as the antidote to rankings.
This month, U-Multirank published its fourth annual dataset – the largest yet. It covers 1,500 universities, 99 countries, nearly 3,300 separate faculties and over 10,000 study programmes. At the level of whole institutions, U-Multirank uses an array of 31 separate indicators. And when you get to comparisons of subject areas, the number is even higher.
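To make the contrast concrete, here is a minimal sketch in Python of the principle behind user-built comparisons. The institutions, indicator names and scores are entirely hypothetical and this is not the real U-Multirank data or methodology: the point is simply that the user chooses the indicators that matter to them and institutions are compared on each of those indicators separately, with no arbitrary weights and no single composite 'best'.

```python
# Hypothetical data for illustration only - not U-Multirank's dataset or method.
institutions = {
    "University A": {"citation_rate": 82, "graduation_rate": 91, "student_mobility": 40},
    "University B": {"citation_rate": 67, "graduation_rate": 95, "student_mobility": 74},
    "University C": {"citation_rate": 90, "graduation_rate": 78, "student_mobility": 22},
}

def compare(data, chosen_indicators):
    # For each indicator the user picks, order institutions from strongest to
    # weakest on that indicator alone: no weights, no composite score.
    return {
        indicator: sorted(data, key=lambda name: data[name][indicator], reverse=True)
        for indicator in chosen_indicators
    }

# A student who cares about completing their degree and studying abroad:
print(compare(institutions, ["graduation_rate", "student_mobility"]))

# A research funder would pick different indicators and get a different picture:
print(compare(institutions, ["citation_rate"]))
```

Different users legitimately get different orderings from the same data, which is the opposite of a single league table.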
In some countries, there's essentially blanket participation by all their HEIs. For them, U-Multirank's key offering is a free and detailed analysis of institutional and departmental strengths and weaknesses, compared with other institutions that the user selects as similar or otherwise worth comparing against.
Rather than a league table that says they're just not massaging the figures enough, this is a tool that helps get to the heart of what good performance looks like in research, in teaching and learning, in internationalisation, in knowledge transfer and in regional engagement.
You might say that because I do work for U-Multirank, I would say all this. It’s a fair criticism, but actually, it’s the other way around: it is precisely because this is what U-Multirank was trying to achieve that I have worked to support its development.
So why isn’t U-Multirank more popular in the UK? Why do most VCs I talk to about it either not know about it or dismiss it as an annoyance? I’ve got five reasons:
1 – It’s European
In case you haven’t noticed, the UK tends to be a bit down on initiatives emanating from Brussels, and it’s true that U-Multirank has received most of its funding from the European Commission. Nonetheless, it has always operated independently and, as it happens, it’s now shifting to include funding from a wider set of global sources.
The EC supported it in order to find an alternative to traditional rankings, and some people think that Brussels had no business setting up academics to undermine (or even compete with) private businesses that make money out of league tables. But when those commercial interests damage the vastly larger investments in HE made by the EC and others globally, it seems like public money well spent to me.
2- It’s just not sexy
It has to be admitted that a simple answer usually trumps a complex truth. Since it goes against U-Multirank's principles to start bandying the word 'best' about, it struggles to find memorable messages for the public at large.
As its reach grows, though – it has more than doubled since the first data release – U-Multirank, sexy or not, will become harder to ignore as a key comparison tool for HEIs, governments, research funders and students.
3 – The media don’t like it
Some international media lap up the richness of U-Multirank's data, delving deep to find all sorts of interesting patterns. Most, however, not so much. Many have their own rankings, albeit – like the Guardian and the Times – at a national level. No wonder they aren't splashing headlines about an unsexy anti-ranking ranking.
But even the Times Higher pointedly ignores U-Multirank while giving endless coverage to every spin of its own league tables. It could be accused of lacking journalistic impartiality on this. You might say that. I couldn’t possibly comment.
4 – Participation takes effort
Most rankings draw only from publicly available data, which means universities can ‘participate’ without lifting a finger. Given the burden of data demands in the UK and elsewhere, inclusion by default is both practically and politically convenient for universities.
While U-Multirank does use public data, many – but not all – of the metrics that helped it take such a different approach are collected proactively, relying in part on self-declaration by the universities or participation in a student satisfaction survey.
In its early years, just over 30 UK universities decided it was worth the effort. More were included, but on the basis of public data only. With its latest release, however, U-Multirank has pre-populated data for UK universities from HESA and other sources. Actively participating is now easier, and even default inclusion leaves far fewer gaps.
5 – Who needs it?
UK universities that do well in the league tables don't need another one to help them attract international students. As for the universities that don't, it's easy to understand why they'd be jaded about rankings of any kind.
However, it’s exactly these universities that don’t normally figure at the top of league tables – because their excellence does not conform to the Oxbridge/Ivy League model – that have most to gain from a comparison tool that recognises that whether you’re excellent depends on what you’re trying to do.
In a post-Brexit world, where ever more UK universities may be placing neon ‘vacancy’ signs in their international windows, it’s hard to see why any should be complacent about a tool that enables fair comparisons.
U-Multirank is not perfect. There are important performance criteria that can't be measured, or for which good-quality data are not available. But in a world where league tables drive towards an educational monoculture, it is a valiant champion of the need for universities to have diverse missions. This approach is causing shockwaves. Many governments – in Spain, for example, where data gathering has been adapted to align with U-Multirank – are looking to richer measures of excellence than the league tables allow. Funding decisions can rest on such things.
Meanwhile, even the traditional rankers have sat up and taken note. I am confident Phil Baty of THE's World University Rankings would argue that the decision to start producing a range of different rankings – young universities, Asian, African, reputation, employability – had nothing to do with U-Multirank's appearance at the same time. Who knows? That may be true, but if U-Multirank has helped turn the tide in a different direction, then it's a solution to the problem of rankings that Canute might have done well to consider.