Perhaps the most damning criticisms of the Teaching Excellence Framework (TEF) are that it has been more discussed than used and offers more burden than benefit.
As we look to the much-delayed launch of the fifth iteration – delayed by a global pandemic, an independent review, a ridiculously long period of government inaction, a consultation, and a reputedly very well-used appeals process – it is still difficult to figure out who the TEF is actually for.
Is it for applicants?
There are surveys that suggest that applicants like the idea of more information presented in an easy-to-use format (spoiler: surveys always say that applicants like the idea of more information presented in an easy-to-use format), but precious little evidence that applicants are actually using it. Indeed, for the last year or so, providers have been forbidden from talking about it.
Is it for the regulator?
There are no policy processes that depend on TEF results – the idea of a fee cap rise based on ratings feels like a ghost from another era. The Office for Students does have the opportunity to use a poor showing as an entry into the more punitive end of the regulatory framework, but as it can seemingly launch an investigation without even saying why, this feels like a bit of a reach.
Is it for the institution?
Shirley Pearce’s review of TEF, published in 2021, highlighted the benefits of the process as a formative reflective exercise, and part of the original conception of TEF was to rebalance institutional attention towards teaching rather than research. There’s not really a way of measuring the success of this – although we do often hear from institutional learning and teaching specialists grateful for the attention – and it is fair to suggest that any worthwhile university shouldn’t need the promise of a gold sticker to spark these conversations, with so much student feedback (internal, and via the National Student Survey) around.
A manifesto
Initially, of course, it was for voters. The idea of a:
framework to recognise universities offering the highest teaching quality
first appeared as a throwaway line in the 2015 Conservative manifesto, a document written largely by one Jo Johnson (later universities minister). Even by the usual manifesto standards this was an odd document – received opinion suggested that the chances of a majority Conservative government capable of enacting a full manifesto were slim, and that we would most likely be facing chaos (and £6,000 tuition fees) with Ed Miliband.
Nobody, in other words, expected this to go anywhere – so the appearance of a fleshed-out framework in a green paper – Higher education: teaching excellence, social mobility and student choice – later in 2015 surprised many. Most of the features we know and love – metrics on graduate destinations, continuation, and student views via the NSS; provider submissions addressing the gaps in these metrics; an assessment panel; publication of a judgement – were already present. Missing, however, were the controversial Bronze, Silver, and Gold categories for gradations of excellence – these (allegedly inspired by the Olympics) appeared surprisingly late on.
The first two years
Year one of the TEF (or TEF1) was always going to be a bit different – even in the green paper it was clear that this first iteration was going to be based on pre-existing successful quality assurance reports only. As it happened, it was the only iteration of TEF with an incentive attached, resulting in a £250 (inflationary) rise to the fee cap for successful providers. For this reason even such a simple yes-no assessment got complicated, with providers that did not have a satisfactory QA report allowed in late after having an action plan signed off.
The newly minted Office for Students, meanwhile, was thinking about future metrics – kicking off a programme (technically in the dying days of HEFCE) to examine the possibility of measuring “learning gain”. This attempt to measure the amount of value added via university study was a worthy research question, but a lot of work never turned up anything usable.
The following year was probably the peak for media attention – certainly Wonkhe’s coverage was all-encompassing, and the novelty of the awards and process caught the popular imagination in ways that later iterations did not. The original conception of the competition was for a rolling system based on provider decision-making, sparking many discussions about strategic entry based on close monitoring of a number of metrics.
And with many universities more usually seen towards the top of league tables stuck with the lower Bronze rating, we heard another round of methodological critiques – primarily focused on the age of the data (some of it was a decade old) and the apparent weight placed on metrics rather than submissions. Those who found success laid the gold paint on thick around the campus, and there were some truly shocking press releases.
Lessons learned
Subject-level TEF had always been an aspiration, and year three saw the commencement of a pilot programme that examined the feasibility of this idea. Even back in the green paper, it was acknowledged that the utility of provider-level measures was necessarily limited – given the focus on outcomes measures, and the way these are affected by subject mix, it was argued that only a subject-focused exercise would offer a meaningful perspective, since applicants tend to be interested in subject areas rather than whole providers.
Though this trial was positioned as the future, a lessons learned exercise offered changes for the present – with a reduction in the weight of metrics based on the National Student Survey meaning a corresponding rise in the weight of continuation and graduate destination metrics. The full name of the framework changed to the Teaching Excellence and Student Outcomes Framework (still TEF, for some reason). However, the design of the scheme meant that not every provider would enter the competition and try for a new award under the new rules – and OfS worked to preserve the nonsensical idea that the two schemes were equivalent.
As it turned out, just 86 providers entered TEF3 (compared with 296 who were eligible), with 60 of these being repeat customers using the change in methodology to bump up their rating. With TEF awards generally lasting three years in those days, those who did well in TEF2 largely sat this one out.
TEF4 continued with the TEF3 methodology – with the marginal twist that everyone who wanted to register with OfS needed to hold a TEF award, rendering what had been sold to the sector as a voluntary exercise de facto compulsory. We were, by this point, looking at a long tail of providers – there were just 64 entrants, with most people happy to hang on to a decent result from TEF2 or TEF3, and no change in methodology to favour providers with historically less impressive NSS results. By far my favourite aspect of TEF4 was the early publication of data, meaning I could generate initial assumptions (thus annoying everyone) several months before the results were announced.
Year two of the subject TEF pilot remains the only time we’ve tried to use Longitudinal Education Outcomes (LEO) data in regulation, and little-discussed modifications to the main method saw more NSS data emerge alongside it (a grade inflation measure was tried, and failed, much to the surprise of DfE). Just 50 providers took part, and whatever we learned from the experience (and the masses of submissions each provider had to make) resulted in the cancellation of subject TEF plans. In future, this would be a provider-level exercise only.
TEF interregnum
And there we left it. Save a characteristically tin-eared note from Gavin Williamson asking for subject TEF to be implemented as soon as possible, all eyes turned to an independent review of the way TEF works, conducted by former Loughborough University vice chancellor Shirley Pearce. The existence of this review can be traced to a concession thrown to peers in the rush to get the Higher Education and Research Act through before the snap election in 2017.
Pearce delivered the report in 2019, and publication was originally slated for that year. We finally got to see it in January 2021 (the delay has never been explained – it certainly can’t have been down to the need for a government response, as we only got three pages). And it was excellent.
Going back to the point of TEF, Pearce came down very much on the side of it being a means to drive quality enhancement. There was no evidence that students or applicants had any interest in the ratings whatsoever. And the government almost agreed, but still pushed for informing applicants as a secondary purpose. In terms of methodology, Pearce pushed for a renewed emphasis on provider (and student) submissions – with half of the marks in each of the two new categories (education experience and education outcomes) drawn from the assessment of the statements by a panel.
There would be four possible award levels (Pearce recommended the removal of the medals), awarded both overall and for each of the subcategories. A supporting report from the Office for National Statistics was gently damning about the way the quantitative side of TEF worked and the sheer complexity of the thing.
New TEF
With the original TEF assessments a fair few years old (and based on even older data), OfS took the frankly hilarious decision to forbid providers from referring to their existing awards.
It would be comforting to relate that OfS took all of these criticisms on board in consulting on and developing the new TEF – and that the next iteration ran in 2022 as called for. The pandemic did get in the way a little, resulting in a delay to “early 2023” (hilariously, this is still what the OfS website says), but the new design bore hints of ministerial intervention. Gold, Silver, and Bronze made an unwelcome return, alongside the mystifying “Requires Improvement” – which apparently functions as an indicator that OfS’ standard regulatory monitoring of quality hasn’t worked properly.
The data – by now familiar from the data dashboard published back in September 2022 – sees student experience measured by five themes from the NSS, with student outcomes encompassing continuation and completion (from HESA student data) and progression (from Graduate Outcomes). Each of these can be bolstered by student and provider submissions (25 pages, in the provider case) detailing all the many excellent facets of teaching and learning.
There are a few shockers in the fine print. Providers get to decide whether or not to have apprenticeship provision, or validated provision, assessed – and subcontracted-in students (and TNE students) don’t get involved at all. TEF is now a four-year process – no chance to opt out, no chance to improve marks in interim years (though OfS reserves the right to strip you of your medal if you have been naughty).
Providers first got sight of the results of panel deliberations in private last month – a metallic award for student experience, one for student outcomes, and an overall award. They had the opportunity to appeal against these – and it seems quite a few people did.
I’m not sure who – if anyone – is eagerly awaiting this year’s results. It all feels a little divorced from what students, staff, and providers are currently going through – and with providers already knowing the outcomes, any positive impact on teaching and learning will already have been felt. But TEF has surprised us many times over its short but eventful life – perhaps it will surprise us again.