
Nothing either good or bad, but TEF can make it so

Are learning gain and student engagement the future of TEF? David Morris looks at HEFCE's work on learning gain, and the second year of HEA's UK Engagement Survey.

David Morris is the Vice Chancellor's policy adviser at the University of Greenwich and former Deputy Editor of Wonkhe. He writes in a personal capacity.

To those more pessimistic about the direction that higher education policy is taking, metrics are to universities what Denmark is to Hamlet: a prison. Yet as Hamlet admits, Denmark is only a prison because he has thought his way into it, and perhaps so it is with universities and their metrics.

The first iteration of the TEF has fallen back on the ‘old unreliables’ of UK higher education performance management: student satisfaction and graduate employment. These metrics’ shortcomings have been well rehearsed, but they are used because there is little alternative. Yet as BIS has implicitly admitted on successive occasions in policy papers concerning the TEF, that may not continue to be the case. Two recent policy developments imported from America may be the future of the TEF infrastructure, or at the very least they may influence what does.

Learning gain

HEFCE’s £4 million investment in projects investigating learning gain has the potential to overturn much of the conventional wisdom about learning and teaching in higher education. The vast majority of studies into learning gain and teaching effectiveness have been conducted in an American context, and there have been few of comparable size or scope in the UK.

The centrepiece of HEFCE’s learning gain work is a mixed-methodology project involving some 27,000 undergraduate students at around ten institutions. Students will be tested at various points throughout their undergraduate careers using three different measures: a problem solving and critical thinking test; a survey on attitudes and non-cognitive skills; and a survey of students’ engagement with their studies.

If the findings are at all similar to those in the US, then there may be some embarrassment ahead for UK universities. American studies into learning gain have found evidence of only limited learning and progress over the course of a four-year degree.

Yet there may be obstacles to drawing firm conclusions from HEFCE’s projects, and the methods needed to assess learning gain might be blocked by a sector determined to protect ‘autonomy’ when it comes to evaluating standards. A report by RAND Europe, commissioned by HEFCE, found that there were five broad methods used to measure learning gain: student grades, student surveys, standardised tests, mixed methods, and qualitative methods.

For many, standardised tests are a complete non-starter for UK higher education: the thin end of the wedge. If qualitative evaluations of learning gain are unworkable for an exercise such as TEF, and grades are unworkable as long as the UK continues to use the current degree classification system, then it might fall to engagement surveys and self-reporting of skills to serve as the most efficient and valid proxy for learning gain available.

UK Engagement Survey

The UK Engagement Survey (UKES), run by the HEA, is now in its second full year, and borrows heavily from the well-established and widely respected National Survey of Student Engagement (NSSE) in North America. Its results give us a clue as to what universities might expect from the new NSS this year, whose new engagement questions are inspired by the NSSE.

The survey creates a new proxy for learning gain by asking students to self-report their skills development in writing, speaking, creative thinking, ‘exploring complex real-world problems’ and more.

This year’s findings show that UK higher education institutions are relatively successful at ensuring students are challenged on their courses and engaged in critical thinking, but less successful at ensuring students engage and interact with their teachers or peers. Students spend little time discussing their academic performance with staff or engaging with them outside the classroom, even though US studies have shown these to be important ingredients for higher learning.

The report also shows that there is no shortage of independent learning across all disciplines and types of institution. The ability to learn independently scored highest on the list of skills students report having gained, while career skills and becoming an active and informed citizen scored lowest.

Engagement measures and other proxies: contact hours vs. independent study

The survey also finds that time spent in ‘contact hours’ does not appear to lead to higher levels of engagement. The findings from this year’s UKES have been seized upon to head off any suggestion that contact hours might themselves be used as a metric for TEF. Yet if ever there were a misleading dichotomy in the world of higher education policy, it might be that of “independent study” versus “contact hours”. The dichotomy shows just what a fine line the designers of the TEF may be treading as they consider future iterations of the exercise.

This way of thinking leads us down a dangerous path: maximising either independent study or contact hours and hoping that students learn more and feel that they have got good value for money. But well-established findings into effective pedagogy show us that there are good kinds and bad kinds of contact hours, and good kinds and bad kinds of independent study.

Independent study is effective when it is encouraged by a regular and aligned pattern of formative assessment, with frequent and timely feedback that feeds forward into the next task. Independent study is less effective when it does not give students a chance to reflect upon and evaluate their work, and when it encourages students to “play it safe” by relying purely on summative assessments with limited or slow feedback.

Contact hours are effective when they are the forum for student feedback, deliberation and debate. Contact hours should provoke students to reflect on their learning through interaction with teaching staff and with each other. Contact hours are much less effective when they are passive; when teachers merely convey meaning rather than allow students to create it themselves.

Or to put it more simply, regular formative assessment is much more effective than irregular summative assessment. And well-planned and engaging tutorials and seminars in small classes are much more effective than lectures. But the debate over independent study and contact hours is too often based on the less effective practices, primarily because it is much cheaper and less labour-intensive to deliver ‘independent study’ through irregular summative assessments and to deliver ‘contact hours’ through large lectures.

Make it so

All this goes to show what a fine line there is between a good TEF and a bad TEF. A good TEF, like any good accountability exercise based upon metrics, will create the right incentives for the right behaviours: effective contact hours and effective independent study. A bad TEF would incentivise the very opposite, leaving university students perennially bored by hours of passive lecturing and learning little from frequent but unstructured summative assessments. In either case, the metrics and the method by which they are used will obviously matter, even with measures of learning gain or student engagement.

Both learning gain and UKES matter because both have a strong claim to form part of the TEF in future years, or at the very least to inform the further proxy measures that might be introduced into the exercise. Great care will have to be taken in doing so, to avoid the future TEF and its metrics becoming Hamlet’s “confines, wards and dungeons” for higher education institutions and their teaching staff.
