Once you accept all of the problems, contradictions and inconsistencies inherent in giving every higher education provider one of four grades as both a signalling and enhancement tool, it makes perfect sense to want some of the evidence that you base that grade on to come from students.
In its last major iteration, the Teaching Excellence Framework (TEF) achieved that in two ways – it used some of a provider’s National Student Survey (NSS) scores, and providers were “encouraged” to involve students in the production of their narrative submission, and to show how they had done so.
That represented a significant downgrade from Quality Assurance Agency (QAA) led institutional review processes – which as far back as 2002 had given students the opportunity to contribute their own, independent Student Written Submission to the process – with plenty of evidence of impact.
So when Shirley Pearce’s review of the TEF kicked off in 2018, it was pretty much inevitable that we would end up with a call for students to be able to input in a way that was both more independent, and more current than the satisfaction and outcomes metrics driving at least part of the process.
DfE didn’t demur from that call in its response to the review, and then to OfS’ credit, that’s broadly what happened. Building on a notably successful round of inviting student submissions into evaluations of provider progress on Access and Participation Plans, the regulator proposed something similar for the TEF – a lead student contact for every provider would be able to choose to develop and submit their own report, offering “additional insights” to the TEF panel on what it is like to be a student at that provider, and what they gain from their experience.
It will take some time to properly evaluate whether that has worked – we don’t yet know how many providers’ student contacts uploaded something on January 24th 2023, we haven’t read them all, and we don’t yet have a clear sense of the way in which they will impact whether a provider gets Gold, Silver, Bronze or Requires Improvement.
But in many ways, it is that concern – the potential impact of students saying things out loud in a process where universities are judged in public – that is the one that requires surfacing, analysing and much further thought.
You can be amazing
Let’s start with some unalloyed positives. Since the initial announcement of the “return” of a proper student submission into this form of institutional review, I’ve been blown away by the creativity, vigour, patience and sheer graft of the student leaders and staff that support them in getting their student submissions over the line in the timeframe offered with the resources available.
As part of our work supporting SUs, I’ve seen a lot of project plans, drafts, presentations and final versions – and overall the latter paint a fascinating and positive picture of the student academic experience in higher education in the UK.
SUs have taken the exercise seriously, their evidence is in most cases impeccable, and the sector should be enormously proud of what’s been produced. Students really care about their education in this country – and when the submissions all get uploaded to DiscoverUni, it will show.
Crucially, in many cases both providers and students’ unions have been prompted to think through the legitimacy of the collective student input that is offered into decision making more generally – and in plenty of cases have taken at least a temporary opportunity to resource it better, through support, access to data and modestly increased funding.
That OfS repeatedly dodged the opportunity to link the quality (or indeed existence) of a submission to its B2 (minimums) and SE7 (features of excellence) descriptions of student engagement was frustrating – but regardless, providers would be very silly indeed to treat whatever support was offered to the SU over the project as a one-off rather than a permanent improvement in relationships and resourcing. And those that didn’t still should.
This issue of capacity and resourcing – whether you’re talking about an individual student making a complaint, a course rep contributing to a committee, or an SU education officer developing a student submission – has been crucial. Not all of the providers whose student contact has declined to submit have the excuse of being small or running programmes whose intensity precludes student engagement – and anyway, those are just excuses.
In Norway, for example, not only must student bodies “be heard” in all questions concerning them, institutions must “provide conditions” in which student bodies are able to perform their functions in a “satisfactory manner”. OfS having the guts to say something similar itself – regardless of the size or programme portfolio of a provider – would really help. Your student panel doesn’t work for free, does it?
You can turn a phrase into a weapon or a drug
It’s not all been plain sailing. Some SUs found the timeline frustrating – with the bulk of the work having to be done in the most intense period of the year. Plenty found the guidance confusing – were they to explain and contextualise existing metrics, develop their own new evidence, or both? Were they to focus on the current year, all four years, or a fudge? Were franchise students to be included?
And was commenting on the way in which the wider student experience environment supports or harms outcomes banned, allowed or encouraged? It was never really as clear as it could have been.
Many of the frustrations we’ve picked up were probably as much about the design of the TEF and OfS’ wider regulatory interventions as they were about the student submission process per se. So while a performing arts provider of 100 students might have argued that no student leader could be found to develop a ten page submission (for free, and in their spare time), SUs with massive memberships were invited to submit broadly the same size and shape of document to explain the experience of a significantly more diverse set of students, courses and experiences.
The abandonment of subject-level TEF might make sense if you’re concerned about bureaucratic burden – but assessing at subject level would at least have brought a modicum of size and breadth comparability both to the submissions and to the eventual judgements.
Other issues will emerge in the wash. The behind the scenes conversations between SU and universities about educational gain will eventually manifest in an embarrassing array of different definitions and wobbly ways of measuring it that will make clear that leaving the issue up to providers to solve really hasn’t worked.
And the strange Venn diagram of which students were to be included in submissions and which weren’t – ignoring postgraduates altogether, and allowing providers to sideline the experiences of those on TNE programmes or on contracted-out provision – will only exacerbate the sense that the resultant signal isn’t something that a student can rely on when making a choice.
Some of that could have been fixed with some staggered cyclicality to the thing – doing everyone at once every four years militates against anyone embedding internal, annual versions of the exercise, stops SUs and universities from learning from each other, prevents OfS from iterating the process sensibly, and causes everyone to assume that the rules will change next time around.
But as I signalled above – as well as all of the process and timing issues, and the subsets of wider problems with that “make a single judgement about providers regardless of size” thing, there is another important flaw in the regulatory design here.
You can be the outcast
Shirley Pearce’s review group was probably right to argue that the primary purpose of the framework should be to both identify excellence and encourage enhancement. And as such, the theory of change underpinning the exercise is that independent and honest feedback from informed and empowered student representatives will lead to reflection and then improvement.
The first problem with that is that the identification of excellence also means the identification of an absence of excellence – both a direct reputational and funding risk for the new “Requires Improvement” judgement, and a comparative one if the institution drops a medal. Cue defensiveness.
Imagine you’re a 21 year old student leader in your provider’s TEF working group, at an institution whose metrics suggest it is borderline between Bronze and Requires Improvement. Once the FD has piped up to remind the room that not getting Bronze would mean a budget cut of X next year and a whopping round of job cuts, even if nobody ever leans on you directly, you’re going to think twice before going big on those focus groups that you ran on dissatisfaction with assessment fairness or student upset at the lack of placement support.
And even if the numbers never lie, with the inside-baseball privileges that you and your SU staff hold, you’ll certainly refrain from using the knowledge you have of the institution to point out where efforts to address the issues have not been made, or have faltered through poor management or weak leadership.
Even where the threat of tipping into “requires improvement” is distant, students tend to be proud of their university and tend to feel that students in many ways are the university – and so the last thing that most of them want is to feel that their representation might cause reputational harm to the people they were elected to serve.
So when you add to that the fine line (occasionally crossed) between offering a students’ union constructive feedback on its draft, and directly influencing the content with tacit threats to SU funding or doomsday scenarios of the collapse of the institution – even if you make someone sign to say that the submission has not been subject to undue influence – the realities of the dynamics, and the lack of release valves, make that influence all but unavoidable.
Or be the backlash of somebody’s lack of love
The second issue is the nature of the judgement. On one level it might not matter hugely that students and their unions have little familiarity with a set of minimum quality conditions only finalised last May, and some descriptors of excellence only published in October – after all, the job of the student submission is only to say what it’s like to be a student somewhere, it’s the university that’s trying to get a good grade.
But on another level, the relationship between the B conditions and this enhancement process has never been clear – especially if the B3 metrics indicate that OfS could, but hasn’t yet, resolved to formally judge the provider as operating below the minimum. There is pretty much no role for students to signal to OfS, either collectively or individually, that they think their provider is failing on the qualitative B conditions – not least because OfS takes no meaningful steps to tell students what they are, and doesn’t even use students in its assessment processes against the Bs.
And even on the TEF excellence features, where students are asked for evaluative feedback it’s not unreasonable to take a little more time to explain to them what those features are – and if you want that feedback to be useful, it’s arguably essential.
But the third big problem is the conflation of formative and summative purposes in a single, sprinty process.
Or you can start speaking up
One of the genius features of the old QAA process was that the Student Written Submission would be due in many months before the actual visit – allowing a bit of buffer for managers to go through the Kübler-Ross grief curve, sit down with students and agree working groups, investment and process improvements before the review team appeared with clipboards. That hasn’t really been possible in this process.
And even though this would be a jolly good idea, I doubt that OfS will consider requiring providers to demonstrate what they’ve done with the student feedback gleaned in a year or so, stripping providers of their award if the answer is “nowt”.
Overall, in our traditional conception of student representation – all the way from the individual complaint to university council – the way in which we expect students to participate depends on a kind of fearlessness: a confidence that sometimes difficult conversations won’t immediately result in danger or threat, either for the students giving the feedback or for the university officials receiving it.
Unless and until the regulator takes steps to recognise that, and thinks strategically about what might bolster the confidence of the students it expects to engage, the honesty and efficacy of processes that depend on their input will be fatally driven not by the evidence, and not even by students’ capacity – but by the confidence imbued by their social class.