During TEF’s long journey from manifesto promise to the publication of the first results over the summer and the imminent subject-level pilots, there has been intensive debate about what issues it should focus on, what metrics it should include, how achievement in teaching excellence should be recognised, and how this should play into the fee system.
But when it comes down to the detail of how teaching excellence should be assessed, we don’t have a clear understanding of the range of student opinion. This doesn’t mean we have an absence of research on student views about their experience. We have data from the National Student Survey and complementary findings from studies such as the HEPI/HEA Student Academic Experience survey. But it does mean that we have had no clear insight into wider student views on the policy choices within the design of TEF – because these surveys don’t tend to ask about that.
But we do now. Over the summer, a consortium of over twenty students’ unions jointly commissioned and funded the largest research project to date on students’ views on how teaching excellence should be assessed, measured and recognised.
The research is based on a survey of 8,994 current students across 123 institutions, weighted for institution and gender, conducted by trendence UK. As the commissioning consortium isn’t a formal organisation, those coordinating the research have passed a summary to Wonkhe for publication. You can download the report in full here.
We have, of course, looked at the research ourselves. It is an effort to influence debates over the future development of the framework, especially in the context of the new Office for Students taking control of running the exercise and the impending independent TEF review. But some of the results are fascinating and raise significant challenges for many parts of the sector.
The results
The headline results suggest firm support among students for the notion that government should run an exercise to assess teaching excellence (84% agree), but much less support for the Gold/Silver/Bronze medals system of awards, and strong opposition to any fee link.
This is a challenge to Jo Johnson, who is personally highly committed to both the medal-based awards scheme and to making the fee link happen. The research shows how the first of these may be problematic in students’ eyes, with fears that it may impact on the perceived quality of degrees and on employment prospects, or simply that having Gold courses in Bronze institutions is very confusing.
The link to fees, meanwhile, has already been made harder by the newly announced freeze on the fee cap, and this combination of political and evidence-based interventions surely makes it more and more likely to be dropped.
It’s apparent that there is no support among students for ideological opposition to the TEF as a whole. Indeed, there appears to be a real appetite for institutional performance to be measured, and for greater accountability.
The findings show that while 68% of students agree that universities should be held to account for teaching that is “not good enough to enable them to succeed”, only 34% agree they should be held to account if graduate employment ratings are poor, and just 18% agree they should be held to account if students drop out.
So the research does challenge the current choice and balance of metrics in TEF. In particular, it seems that students think institutions should be judged far more heavily on measures of student satisfaction than on retention or graduate employment outcomes.
When students were asked which factors most demonstrate that a university has excellent teaching, the quality of the teaching and teachers themselves was the number one factor, while graduate employment came bottom of the list (7th).
This is a sharp challenge to DfE, which reduced the weighting given to the NSS in the most recent TEF specification at the urging of some in the sector: this may have been a mistake. But it’s also an important challenge to the commentariat and its branding of students as ‘generation snowflake’, because it hints that students may well regard choices about whether to leave their course, and what they do when they finish it, as largely a matter of personal responsibility.
The Bronze effect
Perhaps the most immediate challenge for institutions is for those which have a Bronze award. The research indicates that half of all students would not have applied, or would have reconsidered applying, to an institution with a Bronze award. Let that sink in for a moment.
The finding may serve to reinforce other analysis that we have recently undertaken here at Wonkhe, for example this piece by David Kernohan, which assessed early live data from prospective students using Hotcourses sites.
These messages might justifiably cause real consternation in parts of the sector if it seems that students are not buying into the official idea that Bronze represents “high quality, but not the highest quality”, and are instead taking it to mean “probably not very good”. If students do turn away from institutions awarded Bronze in TEF in any serious numbers, it could be devastating for many universities, severely compounding an existing problem for those that have struggled to recruit over the last five years.
Not for the likes of us?
On the other end of the awards spectrum, institutions rated Gold may have trouble for different reasons. There is a specific finding that 6% of students (and it’s higher for BME students at 10%) would have at least reconsidered applying to their institution if it was rated Gold – the potential implication being that for some, the idea of a Gold university is “not for the likes of us”.
This possibility needs to be carefully and urgently interrogated to ensure there is no hidden threat here to widening access efforts.
Political challenges
The findings set up an interesting strategic question for NUS and its approach to TEF. It recently supported the move to halve the NSS’s influence over the exercise. But the research published today could be read as a challenge to that position – the findings show that students are wary of using outcomes metrics to measure teaching quality. And with the NSS weighting halved, such metrics have only grown in relative importance within TEF scores.
But the biggest challenge is to Michael Barber, Nicola Dandridge and the OfS board. They must already know that their headline student engagement initiative – a small student panel – lacks both the claim to democratic legitimacy that NUS can make and the claim to research-based credibility that this consortium of students’ unions, on the strength of this effort, can now make.
It’s clear that OfS must develop a comprehensive student engagement strategy, underpinned by a high-quality research programme of its own, going wider and deeper into the full range of student issues. For the time being, the challenges raised by this work cannot go unanswered if we are going to have the kind of Teaching Excellence Framework that students actually want.
You can read the report in full here.
Comments

This is really interesting research and a great analysis. However, it doesn’t follow from the finding that “the quality of the teaching and teachers themselves was the number one factor” that students would necessarily want the weight of the NSS in TEF to be maintained or increased.
NSS may be a useful measure of student satisfaction, but it is not a measure of teaching quality or teacher quality. If it were, we probably wouldn’t have felt the need to create any TEF in the first place. Satisfaction is a function of what students expect to get and what they feel they have in fact received. If they expect less, but feel they get more than that, they will be satisfied, and vice versa.
A more direct proxy for the quality of teaching/teachers would be whether they are qualified to teach (which isn’t quite the same as ‘having a qualification to teach’). It’s still a proxy, but at least it’s clearly linked to teaching professionalism, it’s something that institutions can genuinely influence in a constructive way and it’s a set of data that exists already.
The Government’s refusal to use this in TEF, preferring instead to look only at outcomes, and mostly indirect ones at that, exposes the exercise as less to do with teaching excellence and more to do with fabricating a spurious argument about value for money – defined in ludicrously narrow terms – as the sole goal of HE.
Did the government refuse to use it, or was it that the data wasn’t comprehensive enough?
Not entirely surprised by the finding that some students would reconsider a university that obtained a Gold rating. At my low-participation comprehensive school, there was an obsession amongst most of the small number of students who did want to go to university with avoiding what were characterised as “boring” universities – those where students prioritised academic work over fun. I can see that such applicants would regard a “gold” TEF rating as a government-approved stamp marking out which universities were “boring”.
Does anyone have a link to the original report please? Or at least the name? The link in the article is broken. Thank you.