While the headlines in higher education have moved to industrial action, vice chancellors’ expenses and the post-18 funding review, one theme – a source of great material for wonks over the last couple of years – rumbles on in the background.
The initial sting of the TEF results in June 2017 may now have worn off, and the results of TEF3 later this year will have less impact as fewer providers enter, but we now know that TEF isn’t going away. The Office for Students is clear about this: participation in the Teaching Excellence and Student Outcomes Framework will be a mandatory component of the registration conditions for English higher education providers. And just this week subject-level TEF is being (re)launched.
Any provider with more than 500 students will have to enter TEF to stay registered with OfS from 2019-20, and smaller providers can enter if they want. OfS sees TEF as “a sector-level intervention to promote excellence in teaching and outcomes beyond the minimum baseline.” In one of the more glib passages of the regulatory framework, OfS makes it clear that while TEF is seen as a tool for quality enhancement, “It is for an individual provider to decide whether or not it wishes to perform beyond its regulated minimum quality baseline in order to affect its TEF outcome.” What are providers doing to exceed this baseline?
Green shoots
There’s an increasing interest in institutional research, as Liz Austen recently wrote on Wonkhe, and there are great examples out there. At the planners’ annual jamboree, colleagues from the University of Greenwich presented a case study of using data to link students’ module evaluation surveys with learning analytics. Supported by HEFCE’s Catalyst Fund and conducted with Jisc and Achievability, the project drew on experience from Sheffield Hallam University, where questions from the UK Engagement Survey had shown a small but statistically significant correlation with academic outcomes.
The Greenwich study considers the use of technology and constraints on students, such as the time taken to travel to campus. It found that students benefited from independent study, coursework assessment, and smaller class sizes. All of this might be expected, but the combination of data sources allows the effects to be interrogated and compared. And, let’s not forget, correlation doesn’t equal causation, but a data-led interrogation is a great start for building systematic enhancements to students’ academic experiences.
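To make the idea concrete, a linkage analysis of this sort can be sketched in a few lines of Python. The filenames and column names below are hypothetical stand-ins, not the Greenwich project’s actual data model; the point is the shape of the analysis rather than the specifics.

```python
# A minimal sketch of linking module evaluation responses to academic
# outcomes and testing for association. All filenames and columns are
# hypothetical.
import pandas as pd
from scipy import stats

# One row per student per module in each hypothetical extract.
surveys = pd.read_csv("module_evaluations.csv")   # student_id, module_code, engagement_score
outcomes = pd.read_csv("module_results.csv")      # student_id, module_code, final_mark

# Link the two sources on student and module, as the Greenwich project
# did with module evaluations and learning analytics.
linked = surveys.merge(outcomes, on=["student_id", "module_code"])

# Pearson correlation between self-reported engagement and final marks.
r, p = stats.pearsonr(linked["engagement_score"], linked["final_mark"])
print(f"r = {r:.3f}, p = {p:.4f}")

# A small r with p < 0.05 would mirror the Sheffield Hallam finding:
# a weak but statistically significant association - suggestive, not causal.
```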
What’s this got to do with TEF?
The principle behind TEF is that – through benchmarking performance – individual providers will see their performance judged against others’, identify ways to improve, and through everyone taking a similar approach the overall quality will rise. Even though performance is judged in relative – rather than absolute – terms, the collective effect should be for the rising tide to lift all boats. The redoubled focus on metrics in assessment of performance has provided the imperative for innovation in data collection and interpretation.
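To illustrate what a relative judgment looks like in practice: TEF flags a core metric only when its distance from a provider-specific benchmark is both material and statistically significant. The sketch below is a simplified version of that logic; the two percentage point and 1.96 z-score thresholds reflect the published TEF approach for a single flag, but the example figures are invented.

```python
# Simplified TEF-style flagging: a metric is judged relative to a
# provider-specific benchmark, not in absolute terms. Thresholds are
# the single-flag values from the published methodology; the data is
# invented for illustration.

def flag(indicator: float, benchmark: float, std_error: float) -> str:
    """Return '+', '-' or '=' for one metric judged against its benchmark."""
    diff = indicator - benchmark
    z = diff / std_error
    if abs(diff) >= 2.0 and abs(z) >= 1.96:
        return "+" if diff > 0 else "-"
    return "="  # within the range the benchmark would predict

# e.g. NSS satisfaction of 88% against a benchmark of 84%, standard error 1.5pp
print(flag(88.0, 84.0, 1.5))  # '+' : materially and significantly above benchmark
```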
Achievability, which provides module evaluation services (EvaSys), has developed module-level benchmarking to support this drive for data. It currently holds more than eight million individual responses, comparing fifty thousand modules across more than thirty institutions; modules are benchmarked individually, both institutionally and against the sector, and the data is also aggregated at subject level. With subject-level TEF in pilot, it’s easy to see the use of this kind of dataset, where associations can be identified. And those associations can lead to interventions. Given that the National Student Survey (on which much of TEF’s assessment is based) is an exit survey, those interventions need to be made before students leave to pick up their certificates. With a postgraduate NSS in development, command of the data and knowledge of benchmarked performance will become ever more useful.
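At heart, a module-level benchmarking exercise like this is a series of grouped aggregations. The sketch below assumes a single hypothetical flat file of responses (not EvaSys’s actual schema) and compares each module’s mean score against its own institution and against the sector within the same subject.

```python
# A hedged sketch of module-level benchmarking: compare each module's
# mean score to an institutional mean and to a sector mean for the same
# subject. Column names are hypothetical.
import pandas as pd

responses = pd.read_csv("responses.csv")  # institution, subject, module_code, score

# Mean score per module.
module_means = (responses
                .groupby(["institution", "subject", "module_code"])["score"]
                .mean()
                .rename("module_mean")
                .reset_index())

# Benchmarks: the institution's own mean, and the sector mean by subject.
module_means["institution_mean"] = (module_means
    .groupby(["institution", "subject"])["module_mean"].transform("mean"))
module_means["sector_mean"] = (module_means
    .groupby("subject")["module_mean"].transform("mean"))

module_means["vs_institution"] = module_means["module_mean"] - module_means["institution_mean"]
module_means["vs_sector"] = module_means["module_mean"] - module_means["sector_mean"]

# The modules furthest below the sector benchmark are candidates for
# early intervention - before students reach the exit survey.
print(module_means.sort_values("vs_sector").head())
```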
What’s next?
The many projects questioning what works (not least HEFCE’s extensive learning gain activities and Jisc’s learning analytics) mean that it’s an exciting time in the world of student data. With OfS’s data-led approach, the debates about data – gathering, combining, manipulating and acting upon it – will only get more interesting.
This article draws on material presented at the annual conference of the Higher Education Strategic Planners Association (HESPA) at the University of Strathclyde, February 2018. Achievability/EvaSys and HESPA are Wonkhe partners.
Interesting article, Ant. I do wonder, however, whether there is a tendency to research situations where you already expect a relationship to exist, and whether this sort of data confirms existing suspicions or really reveals new insight. Perhaps we need to do more to encourage people to conduct research without a hypothesis, to really embrace the message within?
But I was particularly intrigued by the paragraph below in the TEF section, and why you feel there is an expectation that everyone will take a similar approach.
Doesn’t this statement of conformity go against the market-diversity ambition behind the transition to OfS? Or is it a tacit acceptance of the limited array of options open to providers having to operate within the restrictions of the quality code and the revised registration conditions?
“The principle behind TEF is that – through benchmarking performance – individual providers will see their performance judged against others’, identify ways to improve, and through everyone taking a similar approach the overall quality will rise.”