Awarding gaps are, quite rightly, back in the spotlight. As Wonkhe’s Jim Dickinson highlighted recently, the decline in “degree inflation” has been accompanied by a reopening of the awarding gaps we knew about pre-pandemic.
The most recent data published by the Office for Students in March showed a decrease in the award of good honours degrees, accompanied by a reopening of the gaps for most groups, in particular for students from the most deprived backgrounds and from non-white ethnic groups.
This was no surprise to many of us who had already looked at our own data in detail, in particular at the firsts gap, which persisted or even increased over the same period.
It is vital that we understand and address the inequalities in our system, but there are two big problems with the way we talk about and focus on awarding gaps: the first is the “award” bit, and the second is the “gap”!
Emphasis on action
It was right to move away from talking of the “attainment” gap, but the rebrand might be just as bad.
Focussing on “awarding gaps” implies that the problem lies in the way we award degrees, but this is simplistic and places the emphasis for action in the wrong place.
Many across the sector agree that these gaps exist right across the breadth of the student experience. Jenny Shaw wrote here recently about the belonging gap. If we consider the point at which students enter university, there is often no gap in attainment between demographic groups, and yet by the time we award the degree there is a significant one.
When we look at our data we can see the gap emerge during the first year, opening up between the point of entry and the end of stage 1, and then persisting until award. So this is at least as much a transition gap as an awarding gap. By focussing on the gap at the point of awarding a degree, there is a temptation to focus solely on the way we assess, grade and award.
There is some excellent work out there on fairer and more equitable approaches to assessment, such as that from Advance HE, but if the problem were just about assessment it would be a relatively easy fix.
A stitch in time
The other issue with thinking about award is the timescale. The most recent group of students for whom data has been released, and in whom we can see the gap reopening, started their degrees in 2019: pre-pandemic, and in a very different landscape to the one in which we operate now.
The 2020 and 2021 graduates, for whom the gap appeared to have narrowed, did much of their degree in a pre-pandemic world and the rest in the most unusual of circumstances.
Can we really extrapolate from their experience, across the entire duration of their degree, to the students who joined us last September? Can we meaningfully compare pre-pandemic or mid-pandemic experiences with those of students who will be starting in September 2023?
We can’t wait until current students gain their degrees in 2025 or later to find out whether we’ve changed the demographic inequalities they may be experiencing now.
We must consider the gaps in a much more timely manner. Is the stage 1 attainment of the 2022 entry cohort following the same trends as that of those who have gone before, or have we managed to make changes that address those inequalities? If not, we must do more now, both for this cohort and for the 2023 entry.
We can’t wait until award to address the inequality students experience because by then it’s too late.
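To make that cohort-by-cohort check concrete, here is a minimal sketch in Python. The file name, column names and the reference group are all hypothetical rather than drawn from any real dataset; it simply illustrates tracking the stage 1 gap by entry cohort instead of waiting for award data.

```python
# A minimal sketch, with hypothetical column names, of tracking the
# stage 1 gap cohort by cohort rather than waiting until award.
import pandas as pd

# Hypothetical student-level extract: entry cohort, demographic group,
# and end-of-stage-1 mean mark.
df = pd.read_csv("stage1_marks.csv")  # columns: entry_year, group, stage1_mean

# Mean stage 1 mark per demographic group, per entry cohort.
means = df.pivot_table(index="entry_year", columns="group",
                       values="stage1_mean", aggfunc="mean")

# Gap between each group and an illustrative reference group.
gaps = means.drop(columns="white").sub(means["white"], axis=0)

# Is the 2022 entry cohort following the same trend as earlier cohorts?
print(gaps)
```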
Mind the gap
Then there is the problem of “the gap”. The gap is a difference in attainment at a somewhat arbitrarily defined grade boundary.
As we’ve seen, by moving that boundary we can appear to address the gap, when in reality all we are doing is masking the inequalities by playing with the boundary, or performing what Jim Dickinson would call a statistical trick.
We need to do better with our data and look across the entire grade distribution.
Is the shape of the distribution the same across all demographic groups? Do the grades for all demographic groups follow a broadly normal distribution, with the majority of students in the middle of the range and tails to the very highest and lowest grades? Is the top of that distribution in the same place?
The answer to the latter is clearly no, and therein lies the problem. If we are really to say we’ve addressed inequalities in the system, we must ensure that the distribution is the same across all demographic groups; otherwise, how do we know whether we have really addressed the inequalities, rather than simply played with the statistics to change the narrative?
We have to understand our data and grade distributions across the entirety of the range not just at the conventionally defined boundaries.
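As a sketch of what looking across the whole distribution might involve in practice, with entirely hypothetical file, column and group names, one could set the conventional boundary measure alongside quantile summaries and a two-sample Kolmogorov-Smirnov test of whether two groups’ grade distributions share the same shape:

```python
# A minimal sketch, with hypothetical names, of looking beyond a single
# grade boundary at the shape of the whole grade distribution.
import pandas as pd
from scipy import stats

df = pd.read_csv("final_marks.csv")  # columns: final_mark, group

BOUNDARY = 60  # the conventional "good honours" (2:1) boundary on UK marks

# The conventional gap: the share of each group at or above the boundary.
print(df.groupby("group")["final_mark"].apply(lambda s: (s >= BOUNDARY).mean()))

# The fuller picture: quantiles describing the shape of each distribution.
print(df.groupby("group")["final_mark"]
        .describe(percentiles=[0.1, 0.25, 0.5, 0.75, 0.9]))

# Do two groups' distributions match everywhere, not just at the boundary?
a = df.loc[df["group"] == "A", "final_mark"]
b = df.loc[df["group"] == "B", "final_mark"]
print(stats.ks_2samp(a, b))
```

Two groups could show an identical share above 60 while differing markedly in shape, which is exactly the kind of masking described above.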
Digging deeper
To make real progress in addressing both awarding gaps as defined by the OfS and the broader inequalities across our system, we must take a more comprehensive look at all the data available to us, not just that in the published dashboards.
We must use that data to track the emergence and persistence of gaps as they happen and to analyse the effectiveness of interventions at the point of action.
We need to know quickly if something we’ve done is working to address inequalities, so that we can make robust decisions to stop or continue that work, rather than waiting until it’s too late for our students to understand the impact of our policies. We also need to be braver in sharing with each other both the effective and the less effective things we’ve tried.
As John Blake highlighted at the Secret Life of Students recently, we don’t do enough robust analysis nor enough sharing of both successes and failures to learn collectively as a sector the best ways to support disadvantaged groups.
Quality assurance is highly data-led, but it often considers only a subset of the available data and doesn’t take into account the limitations of that data or confounding factors. When addressing awarding gaps and inequalities we need to take a robust, data-informed approach, using all the data available in a timely manner whilst acknowledging the limits of what we can do with it.
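One way of acknowledging confounding, offered here only as an illustrative sketch with hypothetical column names rather than a prescribed method, is to compare the raw demographic gap with the gap that remains after controlling for factors such as entry qualifications and deprivation:

```python
# A minimal sketch, with hypothetical columns, of checking how much of a
# raw demographic gap survives after controlling for likely confounders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("awards.csv")  # columns: final_mark, group, entry_tariff, imd_quintile

# Raw gap: demographic group coefficients with no controls.
raw = smf.ols("final_mark ~ C(group)", data=df).fit()

# Adjusted gap: the same coefficients after controlling for entry
# qualifications and deprivation quintile.
adj = smf.ols("final_mark ~ C(group) + entry_tariff + C(imd_quintile)",
              data=df).fit()

print(raw.params.filter(like="group"))
print(adj.params.filter(like="group"))
```

If the adjusted coefficients shrink or move, part of the dashboard gap reflects confounders rather than the demographic characteristic itself.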
We know there are inequalities in higher education, and I don’t know anyone who is not committed to addressing them and making HE fairer for all. But we must be honest with ourselves about the entirety of the problem, understand all the data at our disposal, and not just focus on the reported metric of the awarding gap.
The issue with this article is what exactly counts as “all the data”? Most of us have limited intersectional data. Our attainment gap has been there for at least the past six years, since before I started working here. More detailed analysis indicated an association with IMD, whereas another course may be associated with a higher BTEC intake in a given year. It’s a changing target. It’s clear there are multiple factors at play: academic, demographic, and the learning environment we create. However, that doesn’t take away from the fact that there is an attainment gap between white and non-white students in both A level and BTEC cohorts. For me, assessment types and phrasing are a key component. The transition gap is also a critical element.