Once I (eventually) got my hands on the full data, all I really wanted to do was to recreate the TEF3 initial hypotheses.
For those of you who do not carry a model of the incredible machine in your heads, this is the very first step in generating the final TEF award level. It is purely mechanistic at this point – a simple calculation based on the number of significance flags across the six core metrics (and, new for TEF3, a half-weighting applied to the NSS-derived metrics).
The flags represent the difference between the value of an institutional metric and the value it is benchmarked against: a double flag deviates up or down from the benchmark by at least three percentage points, a single flag by at least two. As we noted last year, three percentage points is the significance bar for the UK Key Performance Indicators (KPIs), from which the benchmarks are derived.
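For the code-minded, here is roughly how a flag becomes a number. This is a minimal sketch in Python: the flag values (a double flag counts 1.0, a single flag 0.5) and the TEF3 half-weighting on the three NSS-derived metrics follow my reading of the specification, and the metric names are my own shorthand, not official identifiers.

```python
# Turning TEF core-metric flags into numeric values. Flag values
# (double = 1.0, single = 0.5) and the 0.5 weighting on NSS-derived
# metrics reflect the TEF3 change noted above. Metric names are
# illustrative shorthand, not official identifiers.

NSS_METRICS = {"teaching", "assessment_feedback", "academic_support"}

FLAG_VALUES = {"++": 1.0, "+": 0.5, "": 0.0, "-": -0.5, "--": -1.0}

def weighted_flag_value(metric: str, flag: str) -> float:
    """Numeric value of one core-metric flag, with NSS half-weighting."""
    value = FLAG_VALUES[flag]
    return value * 0.5 if metric in NSS_METRICS else value
```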
These are all in the full metrics data, available for any number of splits across full-time and part-time study. For this exercise I’ve used the core flags for each institution’s stated dominant mode of provision – which is what you would use in the initial hypothesis part of TEF.
What I built with it
Once I had these flags, I went in two directions.
First, I ran the TEF3 initial hypothesis rules to give me an implied award that could be set against the final one. This is a good way to show the value of the less mechanistic end of TEF – the differences between the two rest on the deep engagement of assessors and panel members with the (sub-group) split and supplementary metrics, and on the close reading and discussion of the institutional statement.
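Here is a sketch of those rules, building on weighted_flag_value from the snippet above. The thresholds – Gold when the weighted positive flag value reaches 2.5, Bronze when the weighted negative flag value reaches 1.5, Silver otherwise – and the Bronze-first ordering are my reading of the specification, so check the published TEF3 spec before reusing them.

```python
def initial_hypothesis(flags: dict[str, str]) -> str:
    """TEF3-style initial hypothesis from the six core-metric flags.

    Assumed thresholds: Bronze if the weighted negative flag value
    reaches 1.5, Gold if the weighted positive value reaches 2.5,
    Silver otherwise. Verify against the published specification.
    """
    values = [weighted_flag_value(m, f) for m, f in flags.items()]
    positive = sum(v for v in values if v > 0)
    negative = -sum(v for v in values if v < 0)
    if negative >= 1.5:   # assumption: the negative test takes precedence
        return "Bronze"
    if positive >= 2.5:
        return "Gold"
    return "Silver"
```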
The other thing I did was a bit cheekier – by simply subtracting the (post-weighting) value of the negative flags from the value of the positive flags, you can generate a “flag score”. This is not a part of the TEF process, but it does offer a slightly more nuanced view of institutional performance against the core metrics than the two cliff edges we have in the TEF itself.
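With the weighted values from the sketches above, the score is a one-liner. The worked example that follows is a hypothetical provider, not real TEF data.

```python
def flag_score(flags: dict[str, str]) -> float:
    """Net weighted flag value: positive flags minus negative flags."""
    return sum(weighted_flag_value(m, f) for m, f in flags.items())

# A hypothetical provider: double-positive continuation, single-positive
# NSS teaching flag (half-weighted), single-negative employment flag.
example = {"continuation": "++", "teaching": "+", "employment": "-",
           "assessment_feedback": "", "academic_support": "",
           "highly_skilled": ""}
print(flag_score(example))          # 1.0 + 0.25 - 0.5 = 0.75
print(initial_hypothesis(example))  # neither threshold met: Silver
```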
Those with long memories will remember that we also did something similar for TEF2, but – much like TEF2 and TEF3 themselves – the two sets of scores are not comparable, because of the addition of the weighting.
The results
What it might mean
I’d say that the differences between initial hypotheses and awards show a much more active panel this time round. There are now a lot more supplementary data sources built into the process; there’s the ability to take account of metric values that are high or low in absolute terms; and – frankly – with fewer institutions involved, the panel would have had more time to discuss each in detail.
There is, however, an unfortunate side effect. Where the panel effect works on traditional HEIs, they move up through the award levels – bronze to silver, silver to gold. For FE colleges and alternative providers the movement is overwhelmingly in the other direction: not a single HEI moved down.
It’s possible that contextual or split metrics are dragging less traditional providers down; it’s also a fair hypothesis that there is a mutual failure of communication between what these institutions want to say and what the panel members want to hear.
Last year – with a similar effect – we could dismiss this as teething troubles with a new and complex initiative. This year, with the scheme embedded and expectations established, we need to start asking the difficult questions.
The “flag scores” give you the detailed ranking that journalists wish the TEF was. It’s good to see a non-traditional provider – London Studio Centre – sit atop the pile, and they should be congratulated on the hard work that got them there.
And what can we say about Harper Adams? One of only two institutions to hold two gold TEF awards simultaneously (the other is the Royal Academy of Music). Not bad.
The other end of the table shows institutions that probably already knew, when the metrics were first provided, that they were looking at a bronze. Their decision to enter TEF3 at all was, I think, a brave one – demonstrating that they meet all the quality requirements to be a UK HE provider, in a way that also highlights the further work they need to do.
Such bravery will perhaps not be seen again with TEF becoming compulsory for institutions wishing to join the register. Unless the statutory review scheduled for next year, or the post-18 review that will report during the process, shakes things up yet again…
Impressive work