The debate over no detriment hasn’t gone away – it just got started

How are universities doing on giving consideration to the disruption caused by the pandemic and protecting standards? Different things, finds Jim Dickinson

Jim is an Associate Editor at Wonkhe

Given some of the dross that lands on our desk here at Wonkhe towers, you’ll forgive me for having low expectations when I was alerted to new research on students and perceptions of value from… vouchercodes.co.uk.

Notwithstanding that I can’t see data tables, population and weighting, and things like whether we’re just looking at undergraduates or whether we’re talking UK-wide, the fact that it’s YouthSight and that the sample is 2,022 means I’ll run with the ball for now.

There are actually lots of interesting findings in the write-up – but the most interesting to me is the finding that “most” students believe that their final university results will be affected by the pandemic.

Of those surveyed, 45 per cent said they believe that Covid will have (or has already had) a negative impact on their final results, and 15 per cent believed this strongly.

If they’re right, the principle of students not experiencing a “detriment” due to the pandemic has somehow failed. And even if they’re wrong, we as a sector have somehow failed to convince them of it. So what’s gone wrong?

We don’t know what’s around the corner

In some ways the near-universal principle in play here is pretty simple. First, where a student’s performance in assessment is affected by unexpected events that are beyond their control, they are generally able to ask their higher education provider to take these circumstances into account.

If the provider does that, in order not to compromise academic standards, the aim is then to give students a fair crack at showing that they can reach those standards, rather than to lower them. Yet at the same time, it is also reasonable to expect students in general to be able to cope with normal life events, to manage their workloads properly, and to expect a level of pressure around assessments.

Now clearly universities are autonomous, different courses are assessed in different ways, and some professional, statutory and regulatory bodies get a bit funny about qualifications that they accredit. But the above principles are accepted, almost universal and the de facto standard applied by the Office of the Independent Adjudicator for Higher Education (OIA) in its Good Practice Framework when adjudicating complaints.

So on the basis that we should always be worried when students compare their experiences of their providers’ interpretation of “fairness” and find significant differences, how did we end up with such wildly differing arrangements to achieve the above principles across the sector?

It’s laughter, love and joy in disguise

The problem here begins back in the spring of 2020, when a wave of student comparison and petition signing put significant pressure on most providers to agree something called either a “no detriment” policy or a “safety net”.

Before we continue – even if you think you know what those two terms mean, I guarantee that I can find you ten universities with ten interpretations different from yours. But in almost all cases we’re looking at a mixture of six significant interventions:

  • In some cases, universities agreed to change the algorithm so that students could do no worse in assessment than they had done pre-pandemic (there’s a rough sketch of that sort of calculation just after this list). That was tricky for some courses where there hadn’t been much (or “enough”) assessment up until then, and it wasn’t popular with some PSRBs.
  • In some universities, additional academic support was put in and assessment deadlines moved en masse to enable more students to reach a particular standard by the time the exam was run or the deadline fell.
  • In many cases the “needle of trust” shifted, allowing more students to have more applications for extenuating circumstances accepted with less or no proof/evidence.
  • In some cases, cohort scaling was introduced, making clear to students that if a cohort’s marks were below average, some algorithmic adjustment would be applied. The problem with applying cohort-level adjustments to individuals – exemplified by the examnishambles of summer 2020, where A level students found their marks lower than expected thanks to an unseen algorithm – was probably not priced in sufficiently. Especially where some universities left open the prospect that a cohort’s marks could be scaled down as well as up.
  • In a number of providers previously capped resits became uncapped – causing some students to wonder whether to gamble on attempt #2.
  • And in some providers, guarantees were made that anyone with borderline marks that would usually be considered by an exam board would automatically be nudged up not down.
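
For the algorithmically minded, here’s a deliberately minimal sketch in Python of how the first and fourth of those interventions cash out arithmetically. Everything in it – the function names, the marks, the “up only” scaling choice – is invented for illustration; it’s the shape of the calculation, not any actual provider’s algorithm.

    # Hypothetical illustration only – no university's actual "safety net"
    # or scaling algorithm is reproduced here.

    def safety_net_mark(pandemic_mark: float, benchmark: float) -> float:
        """'No detriment' floor: the final mark can be no worse than a
        benchmark calculated from the student's pre-pandemic assessment."""
        return max(pandemic_mark, benchmark)

    def scale_cohort(marks: list[float], expected_mean: float) -> list[float]:
        """Cohort scaling: if the cohort mean falls short of an expected
        mean (say, a historical average), nudge every mark up by the
        shortfall. The contested variant – scaling marks down when a
        cohort over-performs – would remove the max(0.0, ...) guard."""
        shortfall = max(0.0, expected_mean - sum(marks) / len(marks))
        return [min(100.0, mark + shortfall) for mark in marks]

    # A student with a pre-pandemic average of 64 whose lockdown marks
    # came in at 58 is carried back up to 64...
    print(safety_net_mark(pandemic_mark=58.0, benchmark=64.0))    # 64.0

    # ...and a cohort averaging 55 against an expected 60 gets +5 across
    # the board, capped at 100.
    print(scale_cohort([48.0, 55.0, 62.0], expected_mean=60.0))   # [53.0, 60.0, 67.0]

It also shows why the first intervention was tricky on courses without much pre-pandemic assessment: with no reliable benchmark to feed in, the floor has nothing to stand on.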

In some ways, in those early days, it didn’t particularly matter that different providers were taking different approaches. It was the fact that a “no detriment” or “safety net” policy was agreed that mattered. It’s what happened next – and what is happening now – that matters much more.

It’s the greatest feeling in the world

The most obvious and significant moment in 2020-21 came in January from the Russell Group, which collectively and unilaterally agreed a line that said that algorithmic approaches to the problem were no longer necessary or appropriate.

There were various reasons for that – the group’s statement argued that algorithms would not be possible given the scarcity of pre-pandemic benchmarking data available for many students. It also hooked back onto the protection of academic standards and upholding the integrity of degrees.

If you were cynical, you’d observe that a sector that was told it could not rely on “force majeure” clauses for delivery failure when faced with calls for tuition fee refunds couldn’t possibly admit that some of its delivery wasn’t up to scratch – so solutions had to be framed as individual not collective, and as a way of correcting something unfortunate that had happened on the student side of the partnership rather than the university side. So some changes to extenuating circumstances stayed. But changes to algorithms didn’t.

It’s a never ending game

The problem with that position is that the sector didn’t agree. From an attainment perspective, the signal the Russell Group sent out at that stage was that, to the extent to which there was an issue with student performance, it had been contained by its policies to A/Y 2020 – with last academic year a “normal” year and only extreme, individual Covid impacts to be addressed in the usual ways.

Meanwhile other parts of the sector did different things. Some providers agreed that students graduating this past summer would be assessed on a “best of” basis. Others decided that, as autumn term 2020 was less disrupted than spring term 2021, a guaranteed grade floor would be applied halfway through the year. And having shifted the needle of trust towards students, some kept it where it had moved to, and some yanked it right back.

Somehow, for two housemates at two universities in the same big city, Covid was automatically accepted as harming your ability to attain last year at university A, while at university B it had to be extreme and you really had to prove it.

And what all of that means for students due to complete their programmes in the summer of 2022 is that the differences have gone from hard to spot in an emergency to deep, significant and pretty much unjustifiable from the perspective of those two housemates at different universities.

So let it carry you away

That would all be bad enough – but what’s arguably even more worrying is the extent to which decisions beyond the initial emergency ones appear to have been taken without the benefit of a proper understanding of what happened and why.

For example – you can make a decent case that a prolonged lockdown will have had sharply different impacts on different students. Some – with little else to do and plenty of financial and emotional support to hand – may well have done much better than usual, albeit with some mental health impacts, extra-curricular loss and an absence of practical experiences. Meanwhile you can make a decent argument that some – hit hard by lack of support, money, wifi and space – may well have done much worse than usual.

“No detriment” policies that were working well in the circumstances outlined above ought to have resulted in significant grade inflation as we took steps to remove the detriment from the latter group. But what I’m hearing is that grade inflation wasn’t hugely significant in a large number of providers and courses this summer just gone. That suggests to me that something has gone wrong, not right.

Meanwhile, outside of some individual (often confidential) projects being run internally and a small number of public collaborations, there’s been a dearth of sector-wide analysis of impacts on students with particular characteristics or on particular courses. Even the analyses that argue it was the changes-to-assessment-wot-done-it rather than a “no detriment” policy don’t seem to be routinely controlling for the additional confidence boost that a “no detriment” policy might have given a student going into an exam or final assessment.

I’d be really interested to see which sorts of students on which courses “took the gamble” on that uncapped resits issue. We probably need to know if students knew they could “mit circ” if they had a crap chair or desk or a family of five sharing the laptop – if indeed their university accepted that as an issue. We need to know more about why attainment gaps narrowed in some cases. And so on.

Even the shift of the needle of trust is being implemented in wildly different ways. Some students can now “self cert” for an unlimited number of times. Some can do so twice. Some can’t at all. And even when they do, they get different remedies. Some get four more days to submit work, some a fortnight. Why? No-one can say.

As well as all that, we then have to mix in the issues presented by shifts to online assessment. Again there’s little out in public on this – but in many cases what could be a positive story about some forms of assessment helping to correct for prior poor attainment is accompanied by murmurings of dramatic rises in assessment offences – with resultant responses either doubling down and souping up the arms race of invasive and morally problematic proctoring, or snapping back to high-stakes paper and pencil in a draughty room.

And don’t get me started on the mit circs arms race. Here at the Watford home working branch of Wonkhe towers, I spend a lot on our home broadband, but even so “My internet died just as the timed assessment started” would be a highly realistic excuse. The problem is, it’s also a very easy one to claim when untrue.

The Office for Expectations

To the extent to which there’s been leadership on this nationally, in England OfS did that thing it always does back in January, where it said you have to do multiple things at once without actually engaging in the issues. It told providers to:

  • Give adequate and sympathetic consideration to the disruption caused to students’ learning and experience since the beginning of the pandemic.
  • Ensure that standards remain secure.
  • Ensure that unnecessary burden is not placed on students, for example by requiring significant numbers of individual students to rely on mitigating circumstances policies where they have all been affected by similar issues, or by allowing submission deadlines to bunch together for individual students.
  • Make reasonable efforts to consult with their students as they develop and implement their approach.
  • Consider their statutory obligations under the Equality Act 2010 when reaching decisions about the actions they will take.

Yes, it is true that this is basically the list of things people put on a post-it note at the start of a brainstorming session on “what to worry about”. No, this isn’t helpful in working out how to balance, integrate and synthesise these things. And no, it doesn’t address wider questions like the purpose of assessment or the viability of a culture that assumes that everyone can complete at the same pace. Even if you assume that’s desirable – many don’t – it probably isn’t possible yet. The pandemic isn’t actually over, and its impacts certainly aren’t.

The more the world is changing, the more it stays the same

All of which means that here in September 2021, despite talk of the sector “taking the best of the pandemic” on the teaching side, plenty of parts of the sector seem to have snapped back to pre-pandemic on all this no detriment stuff, and some might have extended their arrangements too far.

Some are saying to students “we know 2020-21 was rotten so we’ll calculate your award based on the best of this year and last” and others are saying “no changes to the algorithm”. Some are saying that they’ve learned to trust students when they say they were impacted, and some are saying the opposite. And in a lot of cases, people are making those decisions based either on confidential internal analysis, or worse still, no real analysis at all.

What’s also becoming clear is that debates over what assessment is for, and whether students should all be expected to complete on the same timeframe, have gone from “interesting” to “long overdue”. And a UK-wide culture of treating extenuating circumstances as “one off” events, with increasingly inappropriate sticking plasters for mass events or long-term conditions, needs a fresh look. We can’t wait for the OIA to do that in a couple of years after a mountain of casework.

In an ideal world, there would have been a much clearer steer from regulation throughout the UK on all this. The UK Standing Committee for Quality Assessment had a discussion on “no detriment” in February 2021, but who knows what came of it. OfS hasn’t said anything since January, and its review of “policies and practices to identify approaches that maintain rigour in assessment” is about that silly spelling story rather than no detriment. QAA did something early on in the pandemic, and is funding a number of projects looking at assessment through its collaborative enhancement fund. Advance HE don’t seem to have mentioned it. But nothing feels urgent, comprehensive or likely to be definitive.

What the sector needed, free from undue pressure over grade inflation regulation, was to be convened to work together to work out exactly what did happen in both summer 2020 and summer 2021 – using top statistical brains and clever policy people to gather the right data and ask the right questions of students and staff qualitatively – so lessons could be learned rapidly for decisions being made now.

I don’t want to be all “Mr Olden Days” about this, but it’s the sort of thing we might have expected HEFCE to do in the past. Surely, in a period where regulation has been relaxed, and where monitoring of outcomes makes no sense and takes too long to show up, this is the sort of thing we might have expected from one of the national bodies that student fees are funding?

Instead OfS obsesses over unnecessarily collapsing the quality code and Mail on Sunday stories about inclusive assessment and spelling, universities are issued with confusing incompatible principles like “no increase in good honours” and “make sure students weren’t impacted unfairly” to juggle in the dark, and students again fall victim to a market whose rules and character of regulation mean that the way they are treated is either not fair, or doesn’t even look it when it is.

One response to “The debate over no detriment hasn’t gone away – it just got started”

  1. This article poses a lot of questions, and a number of conclusions are drawn in the absence of evidence (having looked at my own provider’s data for last year and this, much doesn’t resonate with what I’ve seen, though student dissatisfaction with measures does).

    In terms of the first, I think it is a failure of communication, underpinned by a rational (even if seemingly incorrect from the data I’ve seen) student belief that the challenges of the last two years will have impacted grades, general dissatisfaction with the student experience over the last two years, and made worse by the fact that every media article (at times, this one included) seems geared to pit students against universities and assume malign motives on behalf of the latter (I’ve experienced none in this area – even the push to avoid (too much) grade inflation was driven first and foremost by concerns around the credibility of outcomes and the potential impact on all students, rather than primarily the OfS’ threats). Losing the No Detriment approach (the guarantee that grades would not go down, and therefore the most reassuring point) made communication harder this last year.

    I imagine the RG position on No Detriment (poorly articulated as it was) wasn’t driven by a nefarious desire to pretend everything was fine, but by a combination of provisional institutional analysis showing No Detriment didn’t seem to be the most effective means of support (even if it was the most reassuring), knowledge that No Detriment would be much harder where everything in a year was affected (no before/after), concern that No Detriment X2 would lead to more significant grade inflation (and potentially in a more randomised way which didn’t effectively support students most affected), and belief that if one provider went for No Detriment, student pressure to do the same elsewhere (even if it wasn’t going to address issues, and might mean different things in different providers) would be huge.

    My institution, and the others with which I’m familiar, ran with a combination of collective (eg review all assessment gradings to look for profiles out of line with previous years) and individual (eg simpler mit circs) measures, specifically to try and minimise the chance that either cohorts or individuals were disadvantaged. We also aimed for a mix of things which would genuinely address issues, and things which would reassure students (and the most reassuring thing – based on feedback – had the least impact).

    For what it’s worth, provisional analysis of 2020-21 results at my place (unsurprisingly) shows different patterns to 2019-20, but so far suggests that students weren’t disadvantaged in exams collectively (outcomes remained high) or individually (eg attainment gaps are a little more volatile – some up some down – but are consistently better than those pre-pandemic), despite the absence of a No Detriment guarantee. We’ll need to look in *significantly* more detail over the coming weeks to identify what we need to do to inform policy for this year (just as we did last year), but I imagine we’re not the only institution doing this.

    It would be helpful if there was more sector work on this, but QAA don’t understand data and OfS seem more focussed (as you suggest) on responding to tabloid articles about the need to dock all students’ work for grammatical errors. The rapid timescales also make this a major challenge, as does the significant local variation (which is understandable given local variation in delivery, structure and regulation – semesterisation is one example of a structural issue which made a huge difference to No Detriment policies, given the impact it had on assessment completed prior to March 2020).
