Winners and losers in the game of journal rankings

Alternative metrics that better reflect the attributes of good quality research are needed, argue Valerie Anderson, Jamie Callahan and Carole Elliott

Valerie Anderson is Professor of Human Resource Development and Education at the University of Portsmouth


Jamie L. Callahan is Professor of Leadership and Human Resource Development at Northumbria University


Carole Elliott is Professor of Organisation Studies, The Management School, University of Sheffield

Journal ranking lists are an inescapable feature of academic life. They afflict academics regardless of their discipline or level of seniority.

Academics feel the after-effects of rankings fever in their applications for promotion, the outcomes of their journal submissions, their work-related stress and distress, and their ultimate career trajectories.

Journal ranking list publications produce winners and losers each year. Those who are already privileged suffer less than those with less power, influence, and resources at their disposal. Journal rankings affect the work priorities of authors, editors, publishers, and university managers.

Impact on research

Reflecting on our experience as journal editors in an applied field of practice (Human Resource Development), now that our editorial tenures are complete, we reach a sad and regretful conclusion: in spite of our editorial initiatives and endeavours, the dominance of journal ranking lists stifled our ability to promote and publish practice-based, engaged scholarship and impactful research. Once our editorial tenures ended, we committed ourselves to challenging the deleterious effect of journal rankings.

Our objections to the current journal rankings system are threefold. Journal rankings privilege the interests and scholarly audiences of the Global North. Ranking lists sideline practice-related research in applied fields, wherever in the world it takes place. And the requirement that papers submitted to top ranked journals deal in conceptual or abstract theory inhibits innovation and research impact in practical organisational and social contexts.

Ironically, promoting and legitimising our challenge to ranking systems would not get far unless we were able to publish our ideas in a paper in a top ranked journal.

Our paper, published in Academy of Management Learning and Education, appeared just as the pressure was on for academics to prove their worth as being “REFable”. We argue that new and mid-career academics lack the power and agency to do more than submit quietly to the ravages of “ranking fever”.

While we recognise that ranking lists are here to stay, we call on senior scholars and HE managers to show radical leadership and challenge the assumptions built into the algorithms that underpin the current journal ranking lists.

A time for action

Now that the dust is settling on REF 2021, we contend that the time is right to identify alternative metrics for ranking lists that better reflect the attributes of good quality research and take account of a wider range of indicators of impact and relevance.

It seems likely that the next research excellence process will up the stakes for research impact. Analysis of evidence from impact case studies in REF 2014 shows that impactful research depends on interdisciplinary and practice-related networks and logics. Applied fields are as important as pure science in the networks that generate research impact.

There has never been a better time for senior scholars to seize the initiative. Academic systems abound with incentives to publish in top ranked journals. If the academy is serious about impact, why not incentivise researchers to build partnerships with practice communities?

We must take action to rebalance the status and rewards for work connected to pedagogic innovation and to strengthening curriculum-focused links with practice communities. Complaints about teaching quality, and anxiety about future teaching excellence and student outcomes, continue to take centre stage.

There has never been a better time for HE leaders to develop reward strategies for teaching and pedagogic development activities, equivalent to those for research and theory development.

Signing up to the San Francisco Declaration on Research Assessment (SF DORA) has become the thing to do in many UK universities. The jury is still out on the effect this has on academic practice and priorities. But the situation is not hopeless and we remain cautiously optimistic.

As the new academic year approaches, it would be nice to think that our HE leadership colleagues have the courage and creativity to lead a recalibration of journal ranking lists and find effective ways to support and incentivise impactful scholarship, research, and teaching.

2 responses to “Winners and losers in the game of journal rankings”

  1. Time for a longitudinal “impact” study of health, economics and environmental journals over the last 20 years to find out which ones had the highest/lowest proportion of articles that correctly predicted the risks and responses to a global pandemic, to a global financial crisis, and to the dominance of climate change and the ecological crisis for the remainder of the century. Then we should aim to embarrass and close down the ones with the lowest proportion that failed to predict those issues accurately.

    That would be a set of metrics worth using.

    We can then move onto all the other issues where our academic establishment has been too busy chasing what the funders want, but has got things so wrong …

  2. There can be no doubt about the need for a reset and rethink on this. Journal rankings are like evidence hierarchies: necessary, but insufficient for reflecting the true value of research. But it feels a bit rum for ex-editors to be exhorting the sector to do this work; journals themselves and those who publish them must do more. The Metric Tide report recognises the need for responsible and diverse metrics, and is a good place to start this reset. We, the authors, might create the product that ultimately feeds the metrics and rankings, but we don’t create and publish the rankings.
