What’s going on with OfS’ performance measures?
Jim is an Associate Editor at Wonkhe
The suite of eleven measures is supposed to show the impact (or lack thereof) of OfS regulation.
There used to be 26 of them, but they’ve been slimmed down to focus on the really important stuff. In theory.
The first new one is KPM 10, which concerns student protection. This measures the proportion of students whose provider exits the market during their studies who go on to continue their qualification (or equivalent) at another provider.
As there was only one example of market exit in 2021-22 – ALRA – the whole measure is about what happened there. 268 of 284 affected students found somewhere else to study – a 94.4 per cent success rate for the regime.
The problem with the measure, of course, is that there is supposed to be a protection regime in place for a much broader basket of risks to continuation of study – course or campus closure, the cessation of material components of individual programmes, not being able to teach a particular type of student, and so on.
None of that is measured, so we’ve no idea if Student Protection Plans are working or even if they’re being invoked.
The other new one (KPM 6) is on success and progression, where the measure formerly known as “Proceed” has lost the pro, capitalised the CEED and now tells us the national completion and employment rate over time for full-time undergraduate students at different levels of individual disadvantage.
As has been the case for OfS KPM 5 (access), it categorises individual students into one of three mutually exclusive groups (sketched in code after the list):
- Significantly disadvantaged: As defined by commonly used measures of disadvantage including free school meal eligibility and care experience;
- Economically precarious: Students from a financially disadvantaged background but not captured by the “Significantly disadvantaged” group;
- Other: Students whom OfS cannot classify as disadvantaged using the characteristics included in its measure, though they may be disadvantaged according to other metrics.
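To make the hierarchy concrete, here’s a minimal sketch of the classification in Python. The exact flags OfS uses aren’t published in this form – free school meal eligibility, care experience and a generic “financially disadvantaged” flag are illustrative stand-ins, and the precedence order is my reading of the definitions above.

```python
from dataclasses import dataclass

@dataclass
class Student:
    fsm_eligible: bool               # free school meal eligibility (stand-in flag)
    care_experienced: bool           # care experience (stand-in flag)
    financially_disadvantaged: bool  # other financial disadvantage (stand-in flag)

def classify(student: Student) -> str:
    # The groups are mutually exclusive and hierarchical:
    # "significantly disadvantaged" takes precedence, then
    # "economically precarious"; everyone left over is "other".
    if student.fsm_eligible or student.care_experienced:
        return "significantly disadvantaged"
    if student.financially_disadvantaged:
        return "economically precarious"
    return "other"

print(classify(Student(False, False, True)))  # economically precarious
```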
The news here is that rates across all three groups have stayed broadly constant over the past three years – and unsurprisingly those classed as “other” have the highest rates (68.4 per cent), followed by “economically precarious” students (60.6 per cent), with the lowest CEED rates among “significantly disadvantaged” students (53.6 per cent).
KPM 3 measures the proportion of students who graduate with first class degrees – this is OfS’ grade inflation metric, and it shows a year-on-year dip from 37 per cent to 32.5 per cent. Meanwhile KPM 7 measures the proportions of graduates within broad ethnic groups who achieve first class degrees and compares these to the proportion of all students receiving a first class degree.
You’ll note that that’s a change from OfS’ previous KPMs – the old measure looked at the gap in “good honours” (firsts or 2:1s), and compared white students with each of the other ethnic categories rather than comparing with the average.
That means the results for 21/22 on this new measure look like this:
- Black students firsts: 15 percentage points lower than the proportion for all students (a slight reduction from 16.7 percentage points in 2020/21).
- Asian students firsts: 4.9 percentage points lower than the proportion for all students (up from 3.3 percentage points in 2020/21).
- Mixed ethnicity students firsts: 1.4 percentage points lower than the proportion for all students (the same gap as the previous year).
- Other ethnicity students firsts: 7.5 percentage points lower than the proportion for all students (up from 6.3 percentage points the previous year).
The switch to looking at firsts makes some sense, I think, even if I would prefer to see the good honours stat in there too. It’s the switch to looking at the gap between the total and each ethnicity that’s odd – because the relative size of the populations making up the total will act to distort the figures.
For comparison, the Black/white gap on firsts is 18.8pp; for Asian students it’s 8.7pp, mixed ethnicity 5.3pp and other 11.3pp.
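A toy calculation makes the distortion visible. Hold two groups’ firsts rates fixed (the numbers below are illustrative, chosen only to reproduce the 18.8pp Black/white gap) and vary the population shares – the gap to the all-student average moves around even though nothing about attainment has changed.

```python
# Illustrative only: two groups with fixed firsts rates, varying shares.
white_rate, black_rate = 0.340, 0.152  # an 18.8pp gap, as above

for black_share in (0.05, 0.20, 0.40):
    white_share = 1 - black_share
    # The all-student average is a share-weighted mean, so it sits
    # closer to whichever group is larger.
    average = white_rate * white_share + black_rate * black_share
    gap_to_average = average - black_rate
    print(f"Black share {black_share:.0%}: gap to average = "
          f"{gap_to_average * 100:.1f}pp (gap to white stays at 18.8pp)")
```

Same attainment, three different “gaps” (17.9pp, 15.0pp and 11.3pp) – which is why benchmarking against a fixed comparator group is cleaner than benchmarking against an average that the group itself is part of.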
KPM 8 is kind of interesting – it measures the proportion of subjects taught and the number of higher education providers (relative to population) in each English region, shown separately for full-time, part-time and apprenticeship students.
It’s interesting mainly because it’s not really clear what, if anything, OfS does about that.
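For what it’s worth, the providers-per-population leg of the measure is simple per-capita arithmetic – something like the sketch below, with entirely made-up regional figures:

```python
# Hypothetical provider counts and regional populations, purely to
# show the shape of the calculation.
regions = {
    "North East": {"providers": 12, "population": 2_680_000},
    "London": {"providers": 140, "population": 8_870_000},
}

for name, data in regions.items():
    per_100k = data["providers"] / data["population"] * 100_000
    print(f"{name}: {per_100k:.2f} providers per 100,000 residents")
```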
All sorts of things have been dropped along the way. We were at one stage going to get a measure based on an external survey of perceptions of OfS, and while the access measure used to look at the gap in participation at higher-tariff providers between the most and least represented groups, it now only looks at the sector as a whole.
There’s a fundamental issue with these KPMs. The metrics do relate to important issues – but even if we put to one side debates about how valid they are as measures of those issues, there’s an implicit assumption that good or bad performance on them is indicative of how effective (or not) OfS regulation has been. OfS says ‘our 11 key performance measures show the impact of our regulation’. That feels like a pretty optimistic view of the causal chain.