With the calls for views to feed into Dame Shirley Pearce’s independent review closing soon, voices across the sector are lining up to present their priorities for the future development of TEF.
A range of views have been expressed about the purpose of TEF, the process for how it should be delivered, and its value (or lack thereof) for prospective students, institutions and the wider public. But one issue on which there is agreement is that the Government needs to seriously rethink its plans on extending the exercise to subject-level.
Statistical stumbling blocks
One of the main issues is that assessing individual courses or subjects is not viable as student numbers are too small to enable any meaningful analysis. But by aggregating students on different courses into large enough groups to enable comparison, the validity of the whole exercise is undermined. This means treating disparate groups of students as though they are the same even though they are studying courses in different departments and even different faculties, many of which do not share similar approaches to teaching and learning. This will lead to false comparisons being made, rendering a number of awards meaningless and misleading prospective applicants.
A number of other concerns covering the benchmarking methodology, metrics, and approach to inter-disciplinary provision are picked up in our submission and in a previous Wonkhe blog post, as well as by a range of other bodies (including UUK and the RSS).
Despite best efforts during the pilot process, the statistical and other weaknesses inherent in the subject-level model have not been resolved. For this reason, the Russell Group’s submission to the review panel is calling for subject-level TEF to be dropped.
What next?
In our conversations with Dame Shirley so far, she has made one thing very clear: the review panel is looking for constructive feedback, not just criticism. So, if subject-level TEF is not fit for purpose, what other solutions are there?
Supporting prospective applicants to navigate existing information
One of the Government’s primary motivations for rolling TEF out at subject level was to provide applicants with information about the quality of provision on different courses. But it’s important to remember that there is already a wealth of information out there for prospective applicants to access. Indeed, previous evaluations of the Unistats website found that users expressed concerns about being overwhelmed by data.
So rather than producing another subject-level rating to sit alongside existing league tables, official datasets and information on universities’ own websites, a more effective mechanism is needed for filtering and making sense of all this data. A new interface could help users navigate through a range of sources and identify information relevant for them based on their own priorities.
Revamping provider-level TEF
Alongside this, we are urging the review panel to recommend a “revamp” of provider-level TEF. Whilst the provider-level process is subject to some of the same flaws as the subject-level assessments, the statistical issues are less acute and we believe the provider-level exercise has the potential to become a genuinely useful tool. But key reforms are needed to make this a reality.
Firstly, the way the initial hypothesis is currently generated creates a perverse incentive for institutions to focus their efforts on areas where they are just below or slightly above their benchmark (but not by enough to earn a positive flag), rather than on making improvements across the board and where they are weakest.
Replacing the overly simplistic medal system with a “profile approach” would provide more granular information about performance (both benchmarked and absolute) and better incentivise enhancement. It would also empower prospective students to identify areas of provision which are of most importance to them.
Secondly, one of the unintended consequences of TEF has been the damage it has done to the NSS. Student disengagement from the survey as a result of the initial link to tuition fees has meant some institutions now have no NSS metrics at all for the TEF4 exercise. It is important that institutions where there has been mass disengagement from the NSS are not penalised in the TEF assessment process, but further consideration is also needed about how student feedback can be effectively captured in future so universities can continue to use this to enhance their provision.
The UK has a world-leading higher education system, and the independent review provides us with a genuine opportunity to reform the TEF so that it is truly reflective of this. We think efforts would be best spent making provider-level TEF fit for purpose rather than pursuing a fatally flawed subject-level exercise.
The whole TEF shebang is not worth the candle anyway. You might recall that when first envisaged it was going to radically re-order the notion of what makes a “good university” by rewarding teaching excellence, and it was going to feed fee market differentiation by creating a lower fee cap for those institutions that only achieved Silver or Bronze. But neither of those things applies anymore: appeals against institution-level ratings, the removal of the fee differential, and the idea of subject-level ratings have all served to obfuscate the original intentions (alongside the obvious difficulties of defining teaching excellence). Basically, it was a good idea that got Russelled.