Yesterday the Department for Education and the Office for Students published a slew of documents on the development of the subject-level Teaching Excellence Framework (TEF).
We all share the ambition to provide meaningful information for applicants and promote the enhancement of teaching and learning. But given the complexity and sheer scale of a subject-level exercise, can these ambitions be realised in practice?
Learning lessons
The government is making a number of revisions to the model for delivering subject-level TEF, which we welcome as both sensible and pragmatic.
Recognition that both models A and B are flawed is helpful. But a comprehensive model of assessment (with all subjects undergoing the same assessment process, and without subjects being aggregated together for the purposes of peer review) will be more burdensome for providers.
To avoid spiralling costs, which will need to be met at least partially from student fees, it would be useful to consider ways of reducing the burden of the exercise. For example, aligning TEF assessments with accreditation by professional, statutory and regulatory bodies (rather than simply including accreditation as additional evidence) could be appropriate in some circumstances.
Measures of teaching intensity do not provide insight into the quality of the contact hours students receive, or the type of interaction a student has with an academic. It is therefore helpful that this measure is being dropped.
The decision by HESA to review the classification system used for subject-level TEF (the Common Aggregation Hierarchy, CAH2), and potentially to make the subject groupings more granular, should help ensure the groupings are easier for prospective students and institutions to use.
Finally, the emphasis on student engagement is also welcome.
Ongoing concerns
There are, however, a number of areas where further work is needed to deliver a robust and credible subject-level TEF.
One challenge is ensuring the metrics data, which forms an important component of the assessment process, is strong enough to bear the weight placed on it.
Given the small cohorts involved in assessments at subject level, there is a risk that outcomes could be determined by random year-on-year fluctuations rather than genuine variations in quality. Non-reportable data will also mean that, in some cases, assessors will not have the evidence available to judge provision. In other cases, providers with some non-reportable metrics could receive the same award as those with the full suite of metrics.
There are no obvious means of fixing these issues. Introducing a minimum cohort threshold of 20 students is a welcome way of addressing the problem of small numbers. But at that threshold each student still represents five percentage points of any given metric, so one or two students dropping out, or making career decisions which do not reflect well in earnings data, could still skew the results.
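To give a sense of the scale of this noise, the minimal sketch below simulates a simple binary outcome metric (a hypothetical "true" continuation rate of 90%, chosen purely for illustration) and compares how far the reported rate can swing by chance alone for cohorts of 20 and 200 students:

```python
import random

def simulate_metric(true_rate: float, cohort: int, years: int = 10_000) -> list[float]:
    """Simulate the reported value of a binary outcome metric
    (e.g. continuation) for repeated cohorts of a given size."""
    return [
        sum(random.random() < true_rate for _ in range(cohort)) / cohort
        for _ in range(years)
    ]

random.seed(1)  # reproducible illustration

# Hypothetical 'true' continuation rate of 90%
for cohort in (20, 200):
    rates = simulate_metric(true_rate=0.90, cohort=cohort)
    mean = sum(rates) / len(rates)
    sd = (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5
    print(f"cohort {cohort:>3}: typical chance swing of about ±{sd:.1%}, "
          f"simulated range {min(rates):.0%} to {max(rates):.0%}")
```

At a cohort of 20, purely random variation of several percentage points either way is routine, which is comparable to the gaps that separate providers on benchmarked metrics; at 200, the same metric is far more stable.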
Another way to address the flaws in the data methodology would be to place greater weight on the subject submission and the absolute performance of institutions to ensure a holistic assessment. This approach could strengthen the expert judgement element of the process and improve the quality of the assessments. It would also provide institutions with more scope to demonstrate how they deliver quality according to their own mission, rather than needing to focus primarily on mitigating issues with the benchmarked metrics performance.
Understanding the drivers of grade inflation and taking appropriate action to safeguard the quality and value of UK degrees is extremely important. Indeed, the government's consultation response quotes evidence from the Russell Group regarding the factors which affect trends in degree attainment, including prior attainment at school, subject mix, student characteristics, and improvements in teaching practice and student engagement (a point somewhat lost in yesterday's tough-talking headlines). But it is difficult to see how simply providing assessors with information on trends in prior attainment will empower them to judge whether or not grades have been artificially inflated. More explanation is needed of how this will work in practice.
Back to the future
In order to remain relevant over time, subject-level TEF will need to recognise and respond to the changing nature of HE provision. One way in which TEF should be future-proofed is by appropriately rewarding interdisciplinary provision; that is, programmes delivered across disciplines, such as natural sciences and liberal arts degrees.
This is an area of growing student demand and many universities are planning to ramp up their provision to prepare their graduates for a rapidly changing labour market.
This is a tricky issue to address for an assessment framework entirely focused on providing information about quality in individual disciplines. Under the proposal being tested, students would be counted pro rata in the subject-level metrics against each subject to which their course is mapped: a natural sciences student on a course mapped equally to, say, physics and biology would count half towards each. Applicants would then need to look at two or more subject-level TEF awards to try to gain an understanding of the experience they could expect on an interdisciplinary course. This is likely to present a confusing picture and may affect demand for such courses in the long run.
This issue must be explored further to ensure the TEF does not inadvertently end up stymieing the growth of innovative interdisciplinary programmes.
The right timing
Given the range of issues outlined above, it will be crucial that enough time is built in to test and refine any final subject-level TEF model with providers to ensure it is credible and fit for purpose.
Many institutions will also have an eye on how the timeframes for the implementation of subject-level TEF could align with the next REF cycle (with a deadline for research submissions at the end of 2020). We hope the government will avoid a potential clash between the TEF and REF cycles, which could otherwise place a huge burden on academic and administrative teams within universities and present challenges for the resourcing of the REF and TEF panels.