What a timely and valuable idea it was to bring in the Teaching Excellence Framework (TEF). Put forward to redress the sector's long-standing imbalance of attention towards research rather than teaching quality, it recognised that students care far more about the latter. It is students whose interests deserve far higher priority across the sector.
The departure three weeks ago of the creator and champion of TEF, universities minister Jo Johnson, is a moment for reflection, and an opportunity for his successor, Sam Gyimah, to reset the dials. Because let us make no mistake, TEF has lost its way. Right idea, flawed execution.
It went wrong at the outset. Jo Johnson was in a hurry to get on with it, commendable in a way because the assessment of teaching quality was long overdue. But its birth was overly hasty. Rather than taking time to learn from higher education abroad, where several other countries have sophisticated devices to assess teaching, it fell back on what the English sector already did. Which was not a lot.
Learning lessons
Universities overall, with some glowing exceptions which were not given enough attention, do too little systematic thinking about how to assess the quality of their teaching. Disappointingly, even though there were decades of learning to draw on from teaching quality in schools, there was little interest in doing so. Teaching sixth formers is not that different from teaching first or second year undergraduates. Why are universities, which should be championing learning from others, so reluctant to learn from schools? Too little was learned, too, from the often excellent work of the Higher Education Academy. The decision was made to rely on the National Student Survey (NSS) as the only metric in town.
The failure to build in peer review of teaching quality, as happens in universities in Norway and in Germany, was one of many lost opportunities. Embedding teaching observation and giving feedback is one of the best ways of learning how to lecture well, give effective supervisions, run seminars and deliver practical sessions. The goal could – and should – have been for universities to become self-improving institutions, encouraged to develop their own distinctive teaching styles within schools and across the whole institution. In the UK, this enhancement agenda is strong in Scotland but neglected elsewhere. Students should have been given frameworks for assessing the teaching they were receiving. The focus should have been on learning, rather than just teaching, and so much more, as I argued in Teaching and Learning in British Universities.
Losing the way
Then, in autumn last year, an already flawed system was further weakened when the NSS component was significantly watered down, supposedly under pressure from the Russell Group. There was a case for doing this – the NSS is far from perfect – but it needed to be replaced with much more effective measures that would drive up the quality of teaching and learning. Do the designers understand that TEF needed to be about encouraging systemic improvement, not just measurement? Instead, we have a welter of flawed measurements which leave it unclear what TEF (or TEaSOF, the Teaching Excellence and Student Outcomes Framework as it is now rendered) is trying to achieve.
The new grade inflation metric fails to distinguish genuine rises in attainment from inflated degree classifications. The use of Longitudinal Education Outcomes is significantly undermined by problems with the accuracy and currency of the data. There is so much more that is wrong with the changes.
This critique is not special pleading from the head of a spurned institution. My university has come top in the country on certain measures of teaching quality for the last three years. But I’m pained by the risk that we might lose the opportunity to improve the quality of our teaching and learning, and I’m worried whether all vice chancellors grasp sufficiently well the importance of quality teaching and their own roles in driving improvement.
There may be hope ahead
All is not lost. Subject-level TEF – while a welcome development – currently has the same problems with metrics as institution-level TEF, and needs to be refined before its full implementation.
The minister needs to ask himself two fundamental questions. What were the objectives of creating TEF in the first place? If REF were the dog’s breakfast that TEF has become, would anyone have respect for it? Peer review can still be added as a vital ingredient of TEF. The student voice should be increased and grounded in a deeper understanding of what students are assessing. Learning gain needs to be factored in; it is all about learning, stupid.
Vice chancellors are moral as well as pedagogical leaders, and our personal leadership is vital. As President Bok showed at Harvard, that is how we can drive up the quality of teaching and learning at our institutions.
Good luck Sam. Jo did half of the job. He needs you to finish it off.
I think that, inadvertently, you’ve made a strong case for scrapping the whole exercise.
I still think we miss a trick when we only get current student views – long term alumni feedback should be an integral aspect.
There is an implication that you can actually measure teaching excellence, and I am not sure that is really the case. The various metrics do not do that, nor can they, and by introducing them all that happens is that institutional behaviour is warped, hence grade inflation. Even something like learning gain will have that impact.
The measures in TEF capture student satisfaction and financial outcomes, not anything else.
Here are some assertions. I don’t claim they’re self-evident, but I do believe they are well attested, and credible.
• It’s important to measure things you want to improve, so that if you think something has changed, you have a reason for believing it really has.
• Feedback is the most effective teaching intervention. The natural way to learn is to try doing and see what works.
• Assessment for learning is a powerful teaching intervention.
• Testing for marks is pretty well the antithesis of assessment for learning. The more you do, the more you are likely to kill learning of anything except how to pass the test.
TEF is set up to measure for ranking. It has nothing to do with measuring to detect change in teaching quality. It is the opposite of assessment that would help anyone learn how to make teaching better. As long as competitive scoring is at its core, it will do the same harm to higher education that the relentless testing of students has been doing to education more widely.