Whatever we imagine the Teaching Excellence Framework (TEF) shows us, we are now able to see it.
As a way of celebrating the hard work of engaging with the process, and as a more oblique look at the quality of teaching and the student experience in every part of the sector, the TEF as released is surprisingly limited. Seeing just the three metallic ratings – offering us the perfectly hideous “triple-gold” terminology – leaves a lot of questions unanswered.
In the spirit of public service, I’ve had a crack at answering them.
If 23 per cent of students at a provider appealed their degree awards, the OfS would be straight in. What gives with the high number of appeals against TEF outcomes?
So the mood at OfS very much seems to be that a large number of appeals is testament to the seriousness with which providers take the TEF process. The impression is very much not one of a full-on toys-out-of-the-pram meltdown over a poor rating, more a specific and limited appeal on a single aspect or part of a judgement. Indeed, we understand that the number of appeals based on the scary “needs improvement” rating is lower than the number based on a bronze or silver.
A lot of this stuff may result in changes to the full panel judgement (which will be published in November) rather than the ratings themselves. There was basically only the opportunity to challenge the judgement based on the initial evidence submitted, or the factual accuracy of the judgement. Providers are clearly exercised about ensuring that everything published about them is accurate (for provider-specific values of accuracy).
That’s not to say a high rate of appeal doesn’t pose awkward questions about the sector’s confidence in the exercise and indeed the regulator, the value and consistency of panel judgements, or the independence of the published judgement. But OfS seems content to style these out for the moment. There are, indeed, a bunch of providers who appealed and did not see any change reflected in the published results.
Why such a link between established “prestige” and awards (no bronze in the Russell Group, all the “needs improvement” ratings in further education and alternative providers)?
Not one single Russell Group university, at the time of writing, has received a bronze award. This could mean our selective friends are very good at TEFfing, it could be a strange statistical anomaly, it could even be that the conceptions of excellence that underpin the TEF are unconsciously founded on a set of quite traditional expectations. Maybe.
There are really two factors at play here, and neither requires a tin-foil hat. The first is that TEF is now largely a competitive bidding process (in that you write a submission to request a gold rating). Larger providers enter things like this all the time, and have professionalised bid-writing teams, ad hoc committees, and multiple levels of sign-off and proofreading. In comparison, a smaller provider may present a TEF submission written the evening before the deadline by a director of administration who is also the sole member of administrative staff.
The second is statistical – some parts of the TEF do refer to metrics, and if you have a large pool of students or graduates within those metrics, random variation tends to smooth out. For a smaller provider, one single unhappy course will have a much greater impact on the overall indicator. The now-forgotten subject-level TEF was an attempt to go some way towards addressing this issue.
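To see how much difference cohort size makes, here’s a minimal simulation. The 85 per cent “true” rate and the cohort sizes are invented for illustration, and every student is treated as an independent coin flip, which real students are not:

```python
# Toy illustration of why small cohorts produce noisier TEF-style metrics.
# Assumes every student is independently "positive" (satisfied, continuing,
# employed...) at the same true underlying rate -- a deliberate simplification.
import random

random.seed(42)
TRUE_RATE = 0.85     # the "real" quality, identical for both providers
SIMULATIONS = 2_000

def observed_rates(cohort_size: int) -> list[float]:
    """Simulate the indicator a provider of this size would report."""
    return [
        sum(random.random() < TRUE_RATE for _ in range(cohort_size)) / cohort_size
        for _ in range(SIMULATIONS)
    ]

for cohort in (50, 5_000):  # small specialist provider vs large university
    rates = sorted(observed_rates(cohort))
    cut = len(rates) // 40  # trim 2.5% from each tail for a ~95% range
    print(f"cohort {cohort:>5}: observed rate typically "
          f"{rates[cut]:.1%} to {rates[-cut]:.1%}")
```

Run it and the small provider’s observed rate swings around ten points either side of the truth, while the large provider barely moves – same underlying quality, very different-looking metrics.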
It’s also possible that students have a better experience in, and get better outcomes from, larger and better-known providers. The TEF explicitly does not demonstrate this and neither, frankly, do most other indicators unless tortured.
What, if anything, does this tell applicants about the quality of teaching they should expect at a provider?
A good TEF award for your provider has a number of meanings that we can be certain about:
- Your provider is good at writing submissions and bids and evidencing claims of excellence
- Your provider understands the currently fashionable language of teaching quality enhancement in a way familiar to assessors
- Historically, your provider has done well in the National Student Survey
- Historically, your provider has good continuation, completion, and progression metrics (based on the specific definitions of these three lifecycle metrics baked into the way the data is interpreted and analysed – a simplified sketch follows this list)
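For the avoidance of doubt about what those lifecycle metrics are at heart, here’s a deliberately oversimplified sketch – each one is just the proportion of a tracked cohort hitting a milestone. The field names and toy cohort are hypothetical, and the real OfS indicator definitions involve tracking windows, population exclusions, and benchmarking that this ignores:

```python
# Deliberately simplified sketch of the three lifecycle proportions.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    continued_after_year_one: bool   # still studying ~1 year after entry
    completed_qualification: bool    # gained the qualification aimed for
    professional_outcome: bool       # professional work/study after graduating

def rate(cohort: list[StudentRecord], field: str) -> float:
    """Proportion of the cohort meeting the given milestone."""
    return sum(getattr(s, field) for s in cohort) / len(cohort)

cohort = [
    StudentRecord(True, True, True),
    StudentRecord(True, True, False),
    StudentRecord(True, False, False),
    StudentRecord(False, False, False),
]
print("continuation:", rate(cohort, "continued_after_year_one"))  # 0.75
print("completion:  ", rate(cohort, "completed_qualification"))   # 0.5
print("progression: ", rate(cohort, "professional_outcome"))      # 0.25
```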
A good TEF award may indicate that a provider’s approach to teaching is organised and well-considered, and that it has a historic record of success.
Whether any of these indicate for certain that an applicant could expect to have an experience that meets their own individual needs is open for debate. Certainly nobody would expect anyone to decide on a course or provider based solely on TEF (or Discover Uni, or the prospectus, or what their friends say…).
Are this year’s TEF results comparable to the previous rounds (up to 2019)?
The methodology has changed to the extent that the judgements are not comparable. This was also true after 2017, although nobody seemed to care about this at the time. The current TEF places a lot more emphasis on the judgements of the panel and the cases made for excellence by providers and students than the earlier iterations – and that’s the position you will get from OfS.
However, it could fairly be argued that, as both old and new TEF purport to measure the same thing (“teaching excellence”), there could be a degree of read-across. If you feel like this would make sense in your context, I’ve built you a simple dashboard.
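If you’d rather eyeball the read-across yourself, a few lines of pandas would do it – the CSV and its column names here are hypothetical stand-ins for a ratings table you’d have to assemble first:

```python
# Hypothetical sketch of a DIY "read-across" between TEF rounds.
# Assumes a CSV you have assembled yourself, one row per provider;
# the file and column names are illustrative, not a real OfS dataset.
import pandas as pd

ratings = pd.read_csv("tef_ratings_by_provider.csv")  # hypothetical file
crosstab = pd.crosstab(
    ratings["rating_2017_19"],  # Gold / Silver / Bronze (old methodology)
    ratings["rating_2023"],     # Gold / Silver / Bronze / Needs improvement
    margins=True,
)
print(crosstab)  # how many providers moved between bands across rounds
```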
TEF metrics are compared to a benchmark based on student characteristics – if this is the best way to find teaching excellence, why isn’t it the best way to identify teaching quality issues?
This is a huge issue, and one without a satisfactory answer. It could be argued that excellent teaching will differ for different groups of students (different backgrounds, starting points, experiences, expectations), but that all students should expect a baseline level of quality in teaching without exception. This is fair as far as it goes, but there are some provider-level benchmarks for some metrics that sit below the numeric threshold used in B3.
As an example – the benchmark for progression (full time first degree) at University College Birmingham is 55.1, while the numeric threshold for progression (full time first degree) is 60. Despite undershooting this benchmark (43.4 – so technically a regulatory concern under condition B3), University College Birmingham (a genuinely great university that happens to specialise in catering – a field where many likely roles are not considered “skilled”) received a silver TEF award overall and a silver for student outcomes. And if you’ve ever eaten there, you’ll know.
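The mechanics behind that sentence are worth spelling out. Roughly speaking, a benchmark is a sector average reweighted to match a provider’s own student mix, and nothing stops that reweighted figure landing below the sector-wide numeric threshold. A toy sketch, using the three numbers quoted above (the student-mix weights and group rates are invented to make the arithmetic come out):

```python
# Toy sketch of how a provider-specific benchmark can sit below a
# sector-wide numeric threshold. The student-mix weights and group rates
# are invented; the headline numbers (43.4, 55.1, 60) are quoted above.

# Benchmark ~ sector average progression rate, reweighted to match the
# provider's own student characteristics.
student_mix = {"group_a": 0.7, "group_b": 0.3}     # provider's mix (invented)
sector_rates = {"group_a": 50.0, "group_b": 67.0}  # sector rates (invented)
benchmark = sum(student_mix[g] * sector_rates[g] for g in student_mix)
print(f"benchmark: {benchmark:.1f}")               # 55.1, as quoted above

indicator, threshold = 43.4, 60.0
print("below own benchmark (TEF lens):", indicator < benchmark)    # True
print("below numeric threshold (B3 lens):", indicator < threshold) # True
print("benchmark itself below threshold:", benchmark < threshold)  # True
```

The benchmark and the threshold are answering different questions, and only one of them is tailored to who the provider actually teaches.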
What we learn here is that TEF (and the idea of quality) is more nuanced than a purely numbers-based exercise could ever be, and we always need to look at context. TEF and B3 are measuring different things – but it does still look odd.
What’s “educational gain” and when will it turn up in regulation?
Providers were asked to submit evidence of educational gain (sometimes known as “learning gain”) – an assessment of how much value students realised by studying at a provider. Because we don’t have any reliable metrics for this stuff, it really is every university for itself – providers were asked to define what educational gain might mean in their context, explain how they measured it, and then present the appropriate evidence with a commentary.
OfS will be looking at these parts of the submissions very closely – it is hoped that over future iterations of TEF (remember, only once every four years) a usable national metric (or basket of metrics) will emerge that can be used more widely. However, this is a very long way off – having burned its fingers on previous attempts to measure learning gain, OfS is understandably keen not to dive in too quickly.
What might be the overall cost of TEF to the sector – both centrally and within providers – and is it worth it?
This isn’t information that is currently available in the public domain (OfS will be evaluating in due course) – but a 44-member expert panel is expensive by default, and preparing and submitting submissions takes time and money (as does appealing). While it may be nice to have the excuse to paint stuff gold, there is a measurable financial cost to having a TEF – given the less than clear utility of the results, this is something we need to keep in mind.
Why not publish the panel comments and submissions now? The results on their own really tell us very little.
This iteration of TEF is much more consensual than previously, with providers having a lot more say about what is published and when. The published summaries of panel statements will have less technical and procedural detail than the full statements providers have seen, and these need to be shared with providers for comment before publication.
Some submissions may contain information a provider is not happy to see made public (and so needs redacting), and some may have been found to be factually incorrect in one of the OfS’s random checks. Allowing the submissions to appear alongside the panel comments (which providers also have input into the wording of) makes sense on a procedural level. And clearly somebody needs to decant all of the submissions into PDF templates (so that anyone undertaking serious textual analysis will need to immediately convert and clean the data).
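For what it’s worth, that first chore of converting and cleaning looks something like this – a sketch assuming the pypdf library and a hypothetical folder of downloaded submissions:

```python
# First chore for anyone analysing the submissions: getting the text
# back out of the PDFs. Uses the pypdf library (pip install pypdf);
# the directory name here is hypothetical.
from pathlib import Path
from pypdf import PdfReader

texts = {}
for pdf_path in Path("tef_submissions").glob("*.pdf"):  # hypothetical folder
    reader = PdfReader(pdf_path)
    raw = "\n".join(page.extract_text() or "" for page in reader.pages)
    # Minimal cleaning: collapse the ragged whitespace PDF extraction leaves.
    texts[pdf_path.stem] = " ".join(raw.split())

print(f"extracted {len(texts)} submissions")
```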
The panel comments are likely to be formulaic and will read like they were written by generative AI (to be clear, they were not!). Previous iterations of TEF used a controlled vocabulary of phrases and terms that had to be calibrated across all panel comments – I would expect a similar approach. The providers’ submissions, meanwhile, will read like promotional literature but will offer insight into what constitutes cutting-edge teaching practice – in another world Advance HE would do a great deep dive into this and spot all kinds of interesting stuff. The student submissions are likely to be the most varied of the three and the most interesting for the general reader.