
Our days are numbered – how metrics are changing academic development

Roni Bamber's new monograph for SEDA explores how the rush to metrics is changing how academic developers spend their days.

Roni Bamber is professor emerita of higher education at Queen Margaret University, Edinburgh. 

“Daft management decisions in response to poor results” are “the worst thing about my institution’s use of metrics,” said one academic developer. It’s not only academic developers who are vexed; mentioning metrics to anyone, from senior management to the most junior academic, is often met by groans.

The work of academic developers – like that of many others in HE – is increasingly driven by metrics. Metrics, as distinct from data in general, are those quantitative measures used to assess, compare, and track performance, often combined in a dashboard as a tool for managers to work out whether goals and strategies are being achieved.

UK universities are assessed in the Teaching Excellence Framework, for instance, against a complex set of metrics on student satisfaction, continuation, and employment outcomes. As well as feeding into national evaluation systems, metrics are often accessible to all in the public domain, so the outcomes of, for example, the NSS, TEF or REF are of great consequence. For academic developers, metrics-driven work is gradually taking precedence over other activities.

During my forty years working in HE, twenty of them as a head of academic development, I have seen enormous change, largely driven by intensifying managerialism and accountability. Metrics have featured strongly in those changes. I have welcomed evidence-informed practices – in theory – while hating how metrics were often used in practice. When the Staff and Educational Development Association approached me to write a monograph on metrics, I saw an opportunity to find out whether others were experiencing and thinking the same.

Metrics and the modern condition

The research for Our Days Are Numbered included reviewing the literature, followed by a survey of heads of academic development, asking about the impact of metrics on their work. Data was gathered before the pandemic from more than 70 heads, representing around half of UK HEIs, and included seven extended case studies.

Some expressed disgust: “The sector seems to use metrics much as a drunk uses a lamppost – for support rather than illumination.” Experiences differed between institutions, of course, but a pattern emerged. Academic developers were seeing increased institutional focus on improving metric scores and league table positions, gaming the system, concentration on quantitative at the expense of qualitative data, and staff workload pressures.

Academic developers were often opposed to what I call “metrication” – the systemic use of metrics primarily for managerial and accountability purposes. Nonetheless, they felt drawn into the metrics game, directing their energies not so much at enhancement but at supporting a pyramid of local, institutional and national data, connected to national evaluation systems and rankings.

They gave examples of unintended consequences, like skewed institutional priorities and unproductive channelling of resources into metrics-related activity. Often, ethical challenges were at the root of their concerns, with some convinced that metrication had distorted institutional focus, for example in the monetisation and marketisation of education. As one said pithily: “Ka-ching! We need a strong, globally competitive brand to attract students (and their fees).”

Making metrics work

Comments weren’t all bad, with some respondents expressing acceptance and approval of metrics. Views on how institutions are using metrics are more complex and nuanced than the most extreme quotes imply.

Respondents distinguished metrication from the appropriate use of data for routine monitoring, reporting, research, student support and enhancement purposes. Many made positive claims for the appropriate use of metrics: it has raised the profile of learning and teaching, aided better understanding of students’ experiences (as opposed to “the student experience”), and leveraged enhancement.

There were many examples of shaping metrics-related work productively as a practice-facing opportunity to improve things for staff and students, rather than as a tool of managerialism or institutional self-promotion. Developers had gained fresh insights and saw the value of new models of evidence-informed practice in enhancing learning and teaching. In one example: “The metrics around the BAME attainment gap have helped to convince colleagues that there is work to be done here.”

Metrication has meant that developers’ activities have been changing – supporting better NSS scores in assessment and feedback or TEF preparation leaves less time to do generic academic development or undertake research. Top-down demands seem to be supplanting activities to support individual academics and programmes.

If what developers do has changed, so has how they do it. They are doing a lot more work collaboratively with other professional services. Who would have thought academic development departments would work in partnership with Business Intelligence, IT, Timetabling and Student Services, perhaps all at the same time?

Glimmers of hope

My research confirms what I had suspected: metrication has had a significant impact on the work of academic developers. There are real challenges to our time-honoured ways of thinking and practising. Such challenges can have positive results, but much work needs to be done to turn metrics from a tool of managerialism into a practice enhancer.

Our Days Are Numbered features critiques of HE metrics in the literature and from academic developers. It recognises what Wonkhe writers tell us regularly: that there are no perfect metrics, within perfect infrastructures, to assess universities perfectly.

It acknowledges that it would be naïve to dream of governments dismantling systems of measurement and evaluation of HE: the current review of the NSS is part of an evolution, not revolution. But that evolution could perhaps lead to better use of metrics, helping us to understand, represent, and improve students’ learning experiences.

The research shows that developers are doing this by constructively challenging institutional uses of metrics, reframing frustrations towards positive, enhancement-related outcomes. Respondents described learning to “swim with the metric tide” while endeavouring to maintain their principles, grappling with the conundrum of translating value for money (in the form of graduate outcomes and salaries) into value added for students, and finding “new sight lines” into students’ different experiences and outcomes.

Academic developers provide positive, affirming academic leadership (rather than “naughty step” treatment of under-scoring staff, programmes or institutions) with values-based thinking and doing. They are working collaboratively across their institutions to agree a framework and route through metrics, turning the tide in often subtle ways.

So there are glimmers of hope for the future of evidence-informed academic development in the metrics era. Developers have strong academic values, and will keep using metrics to enhance practice by listening to what students are telling us, and helping senior management to treat learning and teaching needs with due seriousness.

They are also pragmatic, so they will simultaneously support institutional agendas, reiterating those values while trying to avoid what one developer described as “quick wins in knee jerk responses to students’ likes and dislikes, rather than genuine engagement with students or with educational rationales.”

2 responses to “Our days are numbered – how metrics are changing academic development”

  1. Hi Roni

    Similar thinking here:

    Bernard Lisewski (2020): Teaching and Learning Regimes: an educational developer’s perspective within a university’s top-down education policy and its practice architectures, International Journal for Academic Development.

    To link to this article: https://doi.org/10.1080/1360144X.2020.1831505

    This paper describes the Teaching and Learning Regime concept, situating it within top-down university policy implementation and its possible interactions with bottom-up disciplinary cultures. It argues that top-down policy implementation needs to acknowledge the importance of disciplinary practice architectures and the enablements and constraints they present. Policy will manifest itself in different ways in Teaching and Learning Regimes because it will be filtered through a variety of cultural components or ‘moments’. It concludes by explaining the implications for educational developers acting as ‘cultural workers’ within the dynamics of top-down institutional policy implementation and academic practice architectures on the ground.

    Praise be to the Ed Doc programme at Lancaster University!

    Best wishes
    Bernard

  2. Hello Bernard and Roni,
    Ah, the good old ‘implementation staircase’ in which policies instigated at the top of the HE structure get adapted, challenged etc. as they travel down the structure, so that by the time they reach ‘street level’ (where learning and teaching actually happens) they can be unrecognisable from what was first proposed.
    Kind regards to both
    Paul
