Student outcomes are in the news again in the UK. In recent weeks, the Office for Students (OfS) has published proposals to improve student outcomes as well as proposals for the next iteration of the Teaching Excellence Framework (TEF), while Universities UK has released its new framework for programme review.
OfS has also put forward proposals to revise the question set in the National Student Survey for 2023.
University rankings have long sought to fill aspects of this gap in publicly available information – measuring and comparing quality and excellence, and providing meaningful information for students, parents, and the public.
How well rankings have performed this role has been the subject of ongoing discussion and debate. What have we learned about rankings and how they measure quality and student performance?
On the eve of their 20th anniversary, the Research Handbook on University Rankings: Theory, Methodology, Influence and Impact – in 37 chapters written by contributors from around the world – provides a comprehensive review and analysis of the influence and impact of rankings, and offers much food for thought.
Data lakes versus data puddles
The use of data and quantitative indicators for measurement and comparison stretches back to the foundation of the modern nation state in the late 19th century. Over the decades, and especially since the 1980s, there has been a proliferation of different types of rankings, ratings, and benchmarking instruments to drive, monitor, and evaluate actions and outcomes across all aspects of public life.
Tero Erkkilä and Ossi Piironen refer to a “global field of measurement that concerns knowledge governance more broadly.”
Because only one university can ever rank Number One, rankings foster global competition for “scarce symbolic capital”, such as global reputation for performance. However, to do so effectively and profitably, they need to be “regularly published”, say Jelena Brankovic, Leopold Ringel, and Tobias Werron.
This helps explain why there are so many different rankings, often salami-slicing the same data. Today, according to the IREG Inventory of International Rankings, there are 25 global rankings and, at a rough guesstimate, upwards of several hundred primarily national rankings.
In the process, “vast data lakes” have been created, says Richard Holmes. Times Higher Education boasts that its Data Points Portfolio holds nine million data points from 3,500 institutions in more than 100 countries.
Coates et al, George Chen and Leslie Chan, and Miguel Antonio Lim have examined these developments in some detail, showing how they contribute to an ever-consolidating higher education intelligence business, which Daniel Guhr describes as a “massive business serving the 1 per cent.”
Measuring quality
Rankings rely on vast amounts of data on research and reputation. Yet there is much we do not know about higher education system performance, say Claudia Sarrico and Ana Godonoga. For example, we have only a partial understanding of what constitutes student success. And we ignore the fact that differences are often greater within institutions than between them, an issue affecting all education levels, as a recent report from Australia illustrates.
One of the most contentious areas is teaching and learning. Rankings rely heavily on the staff-student ratio.
However, research consistently shows that the quality of teaching matters far more for student achievement than class size. Zilvinskis et al, and Kyle Fassett and Alexander McCormick, show that there is no correlation between this ratio and teaching quality.
To get around this problem, beginning in 2014 the UK undertook a learning gain study to better understand how students acquire knowledge, skills, and personal development.
To fully understand the learning process requires significant time and investment. Without that, says Camille Kandiko Howson, rankings “settle for proxy measures of varying quality, including salary data, student satisfaction or institutional reputation.”
Internationalisation is a driving force in higher education but the indicators used do not measure what we think they do. Blanco et al argue there is no agreed definition or data; instead, rankings rely on “self-reported information by institutions” which is open to “manipulation”.
This has heightened the indicator’s strategic value and encouraged universities, and governments, to focus disproportionately on international students as “cash cows”.
It has also encouraged universities, and academics, to abandon their social responsibilities in preference for global reputation and prestige.
Learning lessons
Despite all these shortcomings, rankings are hard to ignore. Governments and institutions continue to be drawn in by the tantalising hope that, by taking certain actions, they can alter the inherently unequal relationship between diverse systems and institutions, and rise to the top. Too often rankings have encouraged governments and universities to adopt the wrong approach.
Participating in global science is a noble ambition – just look at the success of the international search for vaccines and other public health responses to the pandemic.
But too often, the push for global visibility has driven “world-class universities to collaborate with other top-ranked institutions in other regions of the world, instead of engaging closely with their local community,” says Jamil Salmi.
We are keenly interested in the impact and benefit of research but, says Robert Tijssen, rankings do not focus on a “university’s innovative inputs, outputs or impacts. This mismatch is a fundamental problem.”
Policy emphasises the need to widen access and pursue greater equity, diversity, and inclusion, but rankings do not include this type of information either, say Perna et al.
In circumstances where higher education institutions have “shown themselves unwilling to tackle institutional transformation,” global rankings could act as a motivator, suggests Pat O’Connor, but instead they have chosen to endorse narrow exemplars of world-class excellence.
There are some tantalising signs of change, but it is important to realise that rankings rest on a very successful business model.
Following years of criticism, interest in the use of citations and journal impact factors (JIF) is declining. However, rankings are likely to find it hard to abandon this practice given their close alignment with the definitions used by Web of Science and Scopus. Nor is it clear how rankings will respond to open science or to calls for responsible metrics.
There is also a growing search for alternatives and better ways to reflect the social and economic role of tertiary education.
The Times Higher Education Impact Rankings is one such example, but it is predominantly a research ranking masquerading as an SDG ranking – an SDG-washing exercise.
The EU-sponsored U-Multirank is more meaningful, but it is also a more complicated ranking. It allows each individual to choose the indicators most appropriate for them, thereby side-stepping a simple answer to the question “which university is best?”
If nothing else, rankings illustrate some of the profound difficulties in using simplistic quantitative indicators to get at and explain complex issues.
As Robert Kelchen suggests, lessons should be drawn from Campbell’s Law. It states that “the more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social process it was intended to monitor.”
Rankings have also alerted universities to the importance of data collection and data capability, thereby elevating the role of institutional research.
Once governments and universities realise that being excellent requires more than simply climbing the rankings, they will also realise that arriving at the appropriate policies and strategy is a complex process of self-assessment and benchmarking.
As Stride et al propose, universities will develop conceptual models “tailor-made to answer specific questions, …[to] ensure that the university fulfils its missions as successfully as possible”. These are sensible lessons to keep in mind as the UK moves forward with its reforms of the TEF and the NSS, and a new framework for programme review.
Research Handbook on University Rankings: Theory, Methodology, Influence and Impact is edited by Ellen Hazelkorn and Georgiana Mihut.