University rankings are increasingly coming under fire.
Yes, critiques of the poor methodologies and perverse impacts of the rankings have long been with us.
But these critiques now seem to be turning to action.
Most recently, Utrecht University, a former “top 100” institution, refused to submit data to Times Higher Education’s World University Rankings.
They are not alone. The law schools of high-flyers Yale, Harvard, and others recently withdrew from the US News & World Report rankings because, amongst other factors, its approach rewards institutions for offering financial incentives to already wealthy students with high LSAT scores, rather than to those with lower scores, higher potential and actual financial need.
Global action
Institutions from Korea, China, India and South Africa have also voted with their feet, boycotting rankings on various grounds, usually in protest at some form of inequity the rankings embed.
It’s not just individual institutions taking a stand. Institutional collectives are also turning their attention to the problem of the rankings. The founding document of the Coalition for Advancing Research Assessment (CoARA), the Agreement on Reforming Research Assessment, includes among its four core commitments a pledge to avoid the use of university rankings in researcher assessment. It was the first declaration of its kind to acknowledge that addressing the problematic influence of university rankings is a central pillar of any effort to reform research assessment.
The Dutch Recognition & Rewards programme subsequently instituted an Expert Group to consider how institutions might tackle the rankings, producing 11 recommendations for individual higher education institutions, national systems, and international collectives. These include participating in the International Network of Research Management Societies (INORMS) More Than Our Rank initiative and not using ranking consultancy services.
The European University Association, representing over 850 institutions, has also published guidelines on the use of university rankings. These again endorse More Than Our Rank, encourage universities to explain their rationale for participating in any ranking, and warn against using rankings for institutional decision-making.
Further afield, an expert group formed by a United Nations think tank, the International Institute for Global Health, published a report exposing the coloniality and biases of the global university rankings. The group has now released a set of recommendations around raising awareness of the harms caused by global rankings, encouraging positive alternatives, and disengaging altogether from some of their extractive and exploitative practices.
Not surprisingly, the rankings are working harder than ever to justify their place in this world. They do so by arguing for the value rankings bring, by accusing their critics of simply wanting to eschew accountability, and even by claiming that some people just “love to hate them.” But this defensiveness will only work for so long. If ranking organisations really want research assessment reformers to start taking them seriously, and if they wish to satisfy a sector increasingly alive to the inequities and intellectual incoherence of the global rankings, they are going to need to take some action.
Doing rankings differently
The first thing ranking agencies absolutely must do is to ditch their flagship, overarching rankings that claim to identify the world’s top universities based on a single Westernised view of what “good” looks like. It’s no good introducing myriad additional subject, topical, regional, and age-related rankings in an attempt to assert you’re giving all universities an opportunity to shine, if you’re still going to produce a flagship ranking that does the opposite.
Because while these alternative products might provide a slightly different (if equally methodologically problematic) lens on university quality, they are ultimately impotent. No government uses them to decide who gets a visa or whose studentship fees get paid, and no recruiter uses them to decide who gets appointed. Whilst the dominant crown-jewel rankings exist, everything else is just costume jewellery. Getting rid of overarching rankings would force their users to think more critically about the diversity of institutions we need in our higher education ecosystem and would permanently change the landscape.
Rankings need to make space for all universities. When critics bemoan the state of university rankings, many agencies make a big deal of the fact that they are voluntary: no-one is being forced to participate, they’ll say. This depends on your definition of forced, of course. But what they don’t say is that many are forcibly excluded. If you don’t make the grade (a minimum number of outputs in Scopus, say, or a certain number of undergraduates), you’re out. This automatically creates a class system: The Rankables and The Unrankables. That is completely unacceptable. If those of us who work in the sector really believe in the transformative power of tertiary education, we cannot render thousands of the world’s universities unrankable. It’s divisive, exclusionary, and hypocritical.
I think it would be a pretty quick win to allow any university to appear on these ranking websites, whether with quantitative data, with qualitative data (such as a “More Than Our Rank” statement – see below), or simply as a name on a page acknowledging their existence and linking to their home page. If the Sustainable Development Goals have taught us anything, it’s that to change the world we should focus not on the haves but on the have-nots: those in poverty, not the super-rich. I’d like to propose it’s the same with rankings.
Rankings are, by their nature, full of uncertainty. There will always be error in the measurement of university quality: in citation counts, survey results, and the other indicators favoured by these products. The truth is that if measurement error were factored into these analyses, it would reveal huge overlaps between ranked institutions. An obvious way of acknowledging this is to move away from numbered lists towards clusters. This feels like a big ask, but U-Multirank already does it, and the CWTS Leiden Ranking does so to an extent by providing stability intervals around its data.
Being honest about the uncertainty inherent in ranking data would make it very clear that there is no pecking order; there is no clear blue water between one university and another. Instead, there are only clusters of institutions grouped by shared characteristics (age, wealth, geography) rather than by any inherent superiority.
And if ranking agencies are bold enough to admit that there is no single ranking of institutions, only clusters sharing similar characteristics, they could then turn their attention to surfacing the relative strengths and priorities of the institutions in those clusters. They could move towards profiles, not rankings.
One of the big objections to university rankings is the way they weight different indicators without any obvious justification for doing so. Why is staff-student ratio worth 4.5 per cent and industry income two per cent? Who says? By shifting towards profiles in which all indicators are equally balanced, perhaps in spider-diagram form, the shapes of different institutions can be more fairly compared, perhaps benchmarked against the cluster of institutions that share their characteristics (as the Knowledge Exchange Framework does), and users can get a better view of the types of organisation they are dealing with.
This leads neatly on to my fifth piece of advice to the rankings: provide a qualitative complement to their quantitative indicators. One of the clear messages coming through from the responsible research assessment movement is that quantitative indicators should always be used in conjunction with qualitative approaches. The INORMS More Than Our Rank initiative offers exactly this: a mechanism by which universities describe, in a qualitative way, all the achievements, ambitions, and activities that are not captured by the global university rankings, contextualising the single figure those rankings offer. For this reason, the CWTS Leiden Ranking now indicates where a ranked university also has a narrative More Than Our Rank statement. It feels like a very quick win for other university rankings to follow suit.
Some rankings might say they do this already by linking to institutional websites, but that’s not the same. Linking to a website can be seen as just a way of clarifying exactly which university they’re referring to, or of giving students an easy way to find out more about their courses. What I’m talking about is linking to an institution’s specific, qualitative claims to excellence, to counter or complement (take your pick) the quantitative data found in the rankings.
Now what?
It feels to me as though the global rankings are strong on rhetoric but weak on action when it comes to offering trustworthy assessments of the sector they have appointed themselves to judge. Times Higher Education recently claimed (no doubt in response to the More Than Our Rank initiative) that it believes universities are “much, much more than their rank.” A nice story – but one that needs to be backed by a genuine effort to mend its ranking ways so that they reflect the claim.
For the first time we are seeing significant, coordinated, global pushback against the activities of rankers. I believe that if they want to continue to play a role in the assessment of universities, they will have to start adhering to the sector’s own standards on university assessment. I hope the guidance I’ve offered here might provide a starting point for their reflections.
The author is grateful to Adrian Barnett for his comments on an earlier draft of this piece.
Universities are, for the most part, charitable organisations with broadly shared objectives. Moreover, much of what they contribute to the world (especially on the research front) is achieved through cooperative endeavour. In this context, the idea of ranking them in competition with each other lies somewhere on a spectrum between nonsensical and harmful. It makes as much sense as ranking the members of a football team against each other: if the players take it seriously and start to target the rankings, many goals will be lost through failure to pass to a teammate better placed to score.
The only legitimate purpose I’ve seen for rankings is to inform prospective students in their choices, but that requires rankings of teaching quality at subject or course level. Institution-level rankings focussed on research are not useful for this (or, as far as I can see, any other) purpose, and the world would be a better place if universities drove them all out of existence by refusing to cooperate with the rankers.