
Don’t students need a TEF of their own?

Jim Dickinson asks whether the TEF and its subject-level pilots match OfS research on student demand for personalisation

Jim is an Associate Editor at Wonkhe

Timing is everything. Just last Friday, the Office for Students published details of its information, advice and guidance strategy – with both the research and the resultant approach stressing that information for students should be as personalised as possible.

And now we have the Department for Education’s response to the consultation on the TEF at subject level. The question is – do the two hang together?

Sources of information

In the OfS work – which references the multitude of information sources available to students – TEF is posited as just one of many sources of rating information about providers and courses. There is naturally no direct criticism of the TEF (although we do get a retread of the Students’ Union Research Group/Trendence research demonstrating that the metrics in the TEF are not necessarily those that students regard as measures of teaching quality).

But there’s also a recognition that aggregation is an issue and a sense that big data and clever tools ought to be able to produce something more meaningful for individuals. So it is worth indulging in a thought experiment to see whether a TEF with metrics decided by DfE and then dumped on OfS matches OfS’s own vision for personalised information.

Right now, the metrics in TEF fall into three categories. Student satisfaction looks at how positive students are about their course, as measured by the teaching quality and assessment and feedback responses to the NSS. Continuation covers the proportion of students who continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA). And employment outcomes measures what students do (and then earn) after they graduate, as measured by responses to the Destinations of Leavers from Higher Education (DLHE) survey – which will soon morph into Graduate Outcomes.
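For concreteness, that structure might be sketched like this (a minimal illustration; the category and field names are mine, not an official schema):

```python
# A minimal sketch of the current TEF metric categories and their data
# sources, as described above. Structure and field names are
# illustrative, not an official schema.
TEF_METRICS = {
    "student_satisfaction": {
        "metrics": ["teaching_on_my_course", "assessment_and_feedback"],
        "source": "National Student Survey (NSS)",
    },
    "continuation": {
        "metrics": ["year_to_year_continuation"],
        "source": "Higher Education Statistics Agency (HESA)",
    },
    "employment_outcomes": {
        "metrics": ["graduate_level_employment", "earnings"],
        "source": "DLHE survey, soon Graduate Outcomes",
    },
}
```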

Imagination

Let’s imagine for a minute that one accepts that putting metrics into a mixer, adding benchmarking and contextual narrative, and then boiling it all down into a medal is OK. Let’s also imagine that doing so at institutional level involves so many different courses and student experiences that making these gongs available at course level is at least more meaningful in the student information space. And let’s ignore – just for a minute – the extra metric changes being put into the next round of subject-level pilots.

You may be a student for whom assessment and feedback and the views of your peers are bang on when assessing teaching quality. And you may care about employment outcomes more (twice as much, in fact), although you care about salary and graduate-level jobs equally. In this scenario, the algorithm built into the design of TEF is perfect for you as an individual student. As long as you also like benchmarking.

It would then be even more perfect if the metrics were to morph to suit your desire to choose a university on the basis of other students’ perceptions of feedback responsiveness.

But imagine that graduate outcomes only matter half as much to you as assessment and feedback and the views of your peers on teaching. Or imagine that you care much more about a career with a long ladder to salary prosperity than about getting onto the graduate-level rung in the first place. Or imagine that you’re pretty self-sufficient when it comes to assessment and feedback, and that it’s the views of peers on teaching quality that matter more to you. In these scenarios, the weighting involved in the creation of the algorithm inside the TEF is inherently faulty. What matters to you might produce different results.
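To make the weighting point concrete, here is a toy sketch. It uses a plain weighted average over invented, 0–1 scaled numbers (not the TEF’s actual benchmarked-flag methodology), but it shows how the same two hypothetical providers swap places when the weights change:

```python
# A toy illustration of how weighting changes rankings. Providers and
# numbers are invented, and a plain weighted average stands in for the
# TEF's actual benchmarked-flag methodology.

def weighted_score(metrics: dict, weights: dict) -> float:
    """Weighted average of the metric values a weighting mentions."""
    return sum(metrics[m] * w for m, w in weights.items()) / sum(weights.values())

providers = {
    "Provider A": {"assessment_and_feedback": 0.90, "peer_teaching_views": 0.85,
                   "graduate_level_jobs": 0.60, "salary": 0.60},
    "Provider B": {"assessment_and_feedback": 0.70, "peer_teaching_views": 0.70,
                   "graduate_level_jobs": 0.90, "salary": 0.90},
}

# The fixed weighting imagined above: employment counts double overall,
# split equally between salary and graduate-level jobs.
tef_style = {"assessment_and_feedback": 1.0, "peer_teaching_views": 1.0,
             "graduate_level_jobs": 1.0, "salary": 1.0}

# A student for whom employment matters half as much as teaching.
teaching_first = {"assessment_and_feedback": 2.0, "peer_teaching_views": 2.0,
                  "graduate_level_jobs": 0.5, "salary": 0.5}

for label, w in [("TEF-style", tef_style), ("teaching-first", teaching_first)]:
    ranking = sorted(providers, key=lambda p: weighted_score(providers[p], w),
                     reverse=True)
    print(label, ranking)
# TEF-style ['Provider B', 'Provider A']
# teaching-first ['Provider A', 'Provider B']
```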

Let your mind wander

Then let your mind wander further. Imagine that you’re a pretty self-sufficient learner and that resources in the library matter more to you than teaching quality. Imagine that you’re the type of learner who needs extra academic support, and that matters much more than the performance at the front of the lecture theatre. Or imagine that other types of outcome – wellbeing, or broader satisfaction with the higher education experience – matter more to you. In these scenarios, the aggregated metrics selected for the creation of the algorithm inside the TEF are faulty.

You can imagine further, of course. If you accept that data on outputs matters (not least because outcomes normally involve a shared responsibility), you could be an applicant who wants to judge providers only on their side of the bargain. You might be an applicant who wants to see outcomes “raw”, not benchmarked against a particular set of groupings. You might be an applicant who wants to judge quality on all sorts of factors (some of which are measured now and some of which aren’t). On the subject you’re interested in, some factors in the NSS might make much more sense than others in judging teaching excellence. In fact, your personal preferences, situation and subject choice might mean that no pre-designed set of metrics and weightings works for you. And if your course is being judged on a set of metrics, you’re certainly not going to want your provider overall to be judged on the same metrics. That would just confuse you, wouldn’t it?

Listening to students

On one level, adding metrics on learning resources and student voice to the TEF subject-level pilots is good news, and demonstrates that the department is listening to students and reading the research into what they want. But that still involves tipping averages into an algorithm.

OfS is right. If done right, big data and personalisation – and a new Unistats – offer students the ability to create their own versions of the TEF, with their own algorithms, their own weightings and their own medals, if they want them; all personalised to the sort of learner they actually are, rather than the sort of student imagined by a team of DfE officials off the back of aggregated survey research data and press coverage.
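In code terms, that vision is simple enough. Here is a hypothetical sketch (nothing in it is a real Unistats or OfS API, and the metric names, numbers and medal cut-offs are all invented) of a student supplying their own metric selection and weighting:

```python
# A hypothetical "build your own TEF": the student chooses which metrics
# count and by how much. Nothing here is a real Unistats or OfS API;
# metric names, numbers and medal cut-offs are all invented.

def personal_score(provider_metrics: dict, my_weights: dict) -> float:
    """Weighted average over only the metrics the student cares about."""
    chosen = {m: w for m, w in my_weights.items() if w > 0}
    return (sum(provider_metrics[m] * w for m, w in chosen.items())
            / sum(chosen.values()))

def personal_medal(score: float) -> str:
    """Invented cut-offs standing in for gold/silver/bronze."""
    return "gold" if score >= 0.8 else "silver" if score >= 0.65 else "bronze"

# A self-sufficient learner who prizes library resources and peers' views
# of teaching, and ignores assessment and feedback entirely.
my_weights = {"library_resources": 3, "peer_teaching_views": 2,
              "academic_support": 1, "graduate_level_jobs": 1,
              "assessment_and_feedback": 0}   # weight 0 = not counted

provider = {"library_resources": 0.90, "peer_teaching_views": 0.85,
            "academic_support": 0.70, "graduate_level_jobs": 0.60,
            "assessment_and_feedback": 0.40}

score = personal_score(provider, my_weights)
print(round(score, 2), personal_medal(score))  # 0.81 gold
```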

Isn’t it therefore odd that the regulator’s sponsor department seems determined to press ahead with an approach that offers precisely the opposite?

5 responses to “Don’t students need a TEF of their own?”

  1. This is a really useful and important piece. I would like to share, in this context, some indirect feedback from a new first year student and family member about their university experience, which I hope also illustrates the case that this article is making.

    * The library is really difficult to navigate – book ordering is inconsistent between rooms, and looking for books evidently disrupts those who are trying to work. Retrieval of off-shelf books is slow, and notification sometimes happens and sometimes doesn’t. The book I wanted for my assignment still isn’t available, and I have to submit next week.

    * The timetable is designed to make most use of the room resource at a cost to learning. In one subject, a lecture happens at 5-6pm, and then the relevant seminar at 9am the following day, leaving no time to reflect on the lecture or follow up with personal study before the seminar.

    * Student residence facilities have been switched to app operation as opposed to cash. While this might give some surveillance nerd a hard-on and save cash-management costs, the actual facilities – e.g. printers and clothes dryers in the laundry – don’t actually work a lot of the time now, and learning time is wasted lugging wet washing, or seeking a working printer in another residence. Only you don’t have the building access permissions on the system, so you walk for 15 minutes to a place you can’t get into.

    * An attraction of the university was the ability to choose module options from across the institution. However, there is simply a single list of hundreds of online titles, which is not searchable by prerequisites, timetable slot, department, and similar. So in fact it appears that the institution never really intended to fulfil this offer – my informant says they were ‘mis-sold’.

    * Access to online virtual learning materials is switched on and off randomly, and academics are the most reliable route to getting this sorted, not the administrators whose role is to enable it.

    * They have realised pretty quickly that the best learning happens in the smallest lectures, student-numbers-wise; so module selection is a consistent trade-off between what you would like to do for its content and what has the smallest numbers.

    These are exemplars. Now, I am an academic, so I would say this, wouldn’t I? But as the column suggests, how is this learning (anti-learning?) experience acknowledged, and who takes responsibility for it? If I am to grind an axe more explicitly, it would be to ask where the angst over all this goes. At the moment, it goes into module evaluation and NSS-type surveys. And it feels to me that it is consequently academic lecturers who are consistently found at fault for problems and anxiety which are often not of their making.

  2. Sorry, I missed a concluding paragraph.

    So, in terms of this article, the question becomes not only whether metrics can be disaggregated and extended, but whether they can be designed to be adapted and collected in the moment, according to the specifics of students’ and institutions’ learning processes, and the drivers and restrainers thereof. And how can we ensure that the validity of existing metrics is not cross-contaminated by student unhappiness over matters where their voice is ignored?

  3. [tongue/cheek]How about a Learning Excellence Framework which institutions can refer to to inform their choice of students?[/tongue/cheek]

  4. The Learning Excellence Framework is called “A Levels”. It’s not quite so useless as the TEF.
