What TEF submissions told us about outcomes

Many see student outcomes as the core of TEF. David Kernohan, Michael Salmon, and James Bagshaw investigate whether the sector has moved on from claiming the wins and explaining the problems

David Kernohan is Deputy Editor of Wonkhe


Michael Salmon is News Editor at Wonkhe


James Bagshaw is Operations Coordinator at Wonkhe

The presence of detailed and widely understood benchmarked metrics for continuation, completion, and progression (with subject and student characteristic splits) gives providers a clear structure for putting together the student outcomes section of the submission.

Within this, memories of previous TEF iterations mean that language is often tailored very closely to the metrics, rather than the more general narrative we saw in other sections.

So where provision was below benchmark for a given group of students – or for the whole institution – responses in the narrative took one of two forms: explain the reason, or explain the action.

Actions and words

For the former, institutions provided the panel with the context they felt it needed – often around the pandemic or the wider labour market – or set out why the data did not tell the true story, for methodological or statistical reasons (particularly with small groups or non-standard programmes).

Putting plans in place to address the shortfalls was probably the strategy that found most favour with the panel. Sometimes this was done in a common-sense, “this kind of thing seems likely to work” way, with little description of the exact theory of change behind an intervention. But it was clear that surfacing the outcomes shortfalls for certain groups was, at the very least, drawing institutional attention to those groups where it might otherwise have been absent.

There was a read-across here to access and participation plans – John Blake’s focus on evidence and theories of change has bled into expectations around other Office for Students programmes. We frequently saw explicit links drawn between the two, foregrounding the connection between access, participation, achievement, and progression.

Numbers and narratives

Often the work to address below-benchmark outcomes was planned so that it could be measured, but was still ongoing. The direction of travel over the four-year TEF period was often referenced here, with supporting metrics cited – were outcomes on course to get back above benchmark in the future?

The clearest examples of successful interventions, interestingly, were often at the subject level. Action had been taken by programme teams – to improve continuation or completion especially (particularly through better identification of “at-risk” individual students) – and the data was on hand to show the effect of these.

By far the strongest parts of submissions concerned progression. Nearly every provider had projects around careers, employability, or work-related learning to report on – primarily available to all students (though in providers with large numbers of students from ethnic minorities, we saw a few examples of targeted intervention). Again, subject-based activity was also visible here on occasion, built around industry expectations; in the creative arts this extended to transferable skills and freelance activity.

Good outcomes or good practice

One of the criticisms of previous iterations of TEF was that if you had very good metrics there was little that needed to be added – illustrated, of course, by the famous story of the provider with the single-line submission being very happy to accept a Gold.

Beyond the first full iteration this possibility was largely designed out, with a requirement that a full submission be made – and the post-Pearce TEF, where half of the award is supposed to be based on the submission, should have killed off this particular style of coasting entirely.

When you look at the student outcomes sections at the end of submissions, however, things seem a little less clear-cut.

Some providers, by dint of their intake characteristics, have sector-leading outcomes on continuation, completion, and progression. Students who are accepted generally have the academic background and life situation that mean they will get through the course and on to a good job with very little trouble. The question here is what could possibly be added under the banner of student outcomes (section three of the template) in the submission, other than the required speculative language on educational gain?

There is, in fact, a graph that could be plotted. Traditionally selective providers – with high outcomes and less-than-stellar NSS results – tended to focus most attention (and most words) on the student experience, while more inclusive recruiters tended to spend more time on outcomes, sailing through a much happier student experience section.

The dangers of success

Russell Group and similar providers were able to talk in general terms about opportunities offered to all students, because there were no significant issues with split outcomes metrics. Here, institution-level support (including online pre-enrolment provision, the configuration of the first year as a transitional year, and progress monitoring) benefited all students, but could not really be considered a targeted intervention aimed at identified issues among a (subject or characteristic) group. At Oxbridge, beyond learning that certain colleges specialise in certain demographic groups (mature students, women), we don’t get much more than a repetition of the (excellent) statistics from the dashboard.

Other high performers link outcomes measures closely with access and participation measures. When you have above-benchmark outcomes for all ethnicity groups, attributing this to a provider-level Student Success Framework is fair, but for the curious reader looking for the magic formula, the null hypothesis remains the link between outcomes and background.

In only a few cases was there evidence of a more nuanced approach, talking about “all students with identified disadvantages” – but this was not generally linked to specific identified needs in each group or intersection. One provider did have careers provision for BAME and LGBTQ+ students, albeit very broadly focused.

It is certainly possible that some providers had more work to do on the other side of the equation – nearly the entirety of many prestigious providers’ submissions is mapped to student experience measures – with the best that can be hoped for being a passing note that “we are pleased that our student outcomes indicators show performance at or above benchmark” before going on to discuss in detail a comparatively less stellar NSS performance, and some impressive work on employability and skills. It’s fair to boast of having “more graduates who enter highly skilled employment” than the average UK higher education institution, but this just prompts questions about what else makes your graduates different from the average.

This phenomenon isn’t necessarily a failing – split groups are generally substantially above benchmark, and this is rightly celebrated – but it does demonstrate how providers like this had less work to do on the outcomes end of things.

Designed in?

In a few cases, this can represent a shortcut to Gold. The OfS guidance provides that:

Where a provider’s benchmark for any indicator or split indicator is 95 per cent or higher, and the provider is not materially below its benchmark, the panel should interpret this initially as evidence of outstanding quality.

And if you are already of “outstanding quality” why would you take the risk of adding more information?

To be clear, we are not suggesting that such providers benefit from artificially high ratings. But if part of the purpose of TEF is a cumulative opportunity to learn from the very best of sector practice, much of what our most successful providers do is simply not covered other than in very general terms. Clearly there is excellence out there that goes beyond the rather cynical reading that “recruiting good students” is the magic ingredient for good outcomes – but it would be good to have proof.

The above article appears as part of a crowdsourced analysis of TEF submissions, panel statements, and student submissions in December 2023. The authors would like to thank the following colleagues who gave up their time voluntarily to take part in this exercise and supported the analysis of student outcomes: Alexander Bradley, Mohammad Noman Hossain, Adam Campbell, Anastasios Maragiannis, Gorana Misic, Joe Gould, James Wilcox, Becki Hamnett, Kimberlle John-Williams, John Parkin, Jonny Barnes, Araida Hidalgo, Mark Colpus, Lucy Wright, Charlotte Harrison, Ed Harris, Nasser Sherkat, Lynne Wrightson.

One response to “What TEF submissions told us about outcomes”

  1. RA22 says explicitly that the OfS don’t need further evidence if student outcomes performance was materially above benchmark — “Within the student outcomes aspect, features SO2 and SO3 could be identified without necessarily requiring further evidence in the submission” (as opposed to NSS which is an “important but not direct measure”). Why would a provider waste valuable paragraph space on something they’ve been told is not needed?
