
Learning gain still has potential

Corony Edwards, Carol Evans and Alex Forsythe ask what can be salvaged from the learning gain pilots

Corony Edwards is a freelance higher education consultant


Carol Evans is Professor in Higher Education within Southampton Education School at the University of Southampton. She is co-director of the Centre for Higher Education at Southampton (CHES).


Alex Forsythe is a Senior Lecturer at the University of Liverpool

In mid-September, the Office for Students quietly announced that the National Mixed Methods Learning Gain (NMMLG) project had been ditched.

It turns out that while the project team may have developed a good test for measuring changes in students’ knowledge, skills and values over time, the vast majority of the 27,000 students at the nine universities earmarked to trial the approach opted not to take part. The supposedly manageable 20-minute online test turned out to be too onerous, or perhaps just got lost in the noise of endless requests to fill in feedback surveys. Not only does that render the trial results meaningless, it also tells us that any attempt to measure students’ learning gain through the administration of a supplementary test or survey is probably doomed to failure.

It seems that the OfS is instead pinning its hopes on the original suite of HEFCE/OfS-funded longitudinal pilot projects, launched in 2015 and now getting on with their work as the completion deadline looms. So, at the risk of generating another premature headline, can we see any signs that the £4m+ cost of the 13 projects (involving 70 providers) has been a worthwhile investment, or just a waste of resources?

Two key UK publications out this year are revealing on this point. First, Camille Kandiko Howson’s April 2018 interim Evaluation of HEFCE’s Learning Gain Pilot Projects, and second, the article Making Sense of Learning Gain in Higher Education, in the forthcoming special issue of Higher Education Pedagogies.

On the question of a single, standardised measure of learning gain, Kandiko Howson is clear:

“From the projects, there is no simple, ‘silver bullet’ metric that accurately and effectively measures student learning gain comparatively across all subjects of study and institutional types.”

But she continues:

“the pilot projects are developing tools and approaches that have the potential to offer valid and robust accounts of learning gain, at least within specific institutional, subject and pedagogical circumstances, that are contextualised for use at appropriate levels.”

The need for better indicators to demonstrate excellence in teaching has arguably been the main driver behind government-funded learning gain developments, with excellence often linked with notions of student satisfaction and value for money. What we consider as valuable could encompass all manner of things: the development of engaged and responsible citizens; the nurturing and preservation of our cultural and creative heritage; the ability of our graduates to achieve personal fulfilment and happiness. But the political and funding climate inevitably directs us towards narrow economic conceptions of student outcomes. In practice this means full-time employment (preferably at graduate level) and high-level earnings (as the current TEF specification reminds us).

Some of the projects do explore other aspects of learning, but not on the basis of any consensus about what these aspects should be. Catch-all definitions of learning gain, such as those set out by the initial HEFCE/OfS initiatives, have not brought us any closer to deciding what to measure, but the projects have at least stimulated a sector-wide cross-pollination of ideas.

Among the projects, the development of new metrics covering cognitive, affective, behavioural and metacognitive dimensions of learning is being examined, with emphasis on both generic and discipline-specific approaches to learning. These approaches are multifaceted, leading to many conceptions of learning gain. This suggests that cross-disciplinary working, and combining useful approaches across projects, could lead to an enhanced understanding of how students develop as subject specialists, as well as how they accrue general knowledge, skills and work-readiness. To achieve this we need to adopt a much more integrated approach from the outset, with experienced and expert researchers, practitioners, policymakers and professional services colleagues working collaboratively – something not necessarily evident in some of the work currently in progress.

The complex nature of learning gain as a construct is confounded by both terminology and operationalisation, with overlapping notions such as distance travelled, value-added, and specific and overall attainment. If a broad, top-level ‘definition’ of learning gain exists, encompassing a potentially growing range of attributes, this only serves to confirm that any widespread attempt at measurement will be complicated, challenging and (presumably) costly. This is almost certainly not what the HEFCE/OfS funders were hoping for.

While we are unable to agree on what to measure, it is also questionable whether students themselves would equate overall learning gain with value, whatever that might mean. A 2018 report commissioned by the OfS and led by a consortium of Students’ Unions shows that measures of teaching quality such as fair assessment and feedback, and learning resources, are the ones that, in the opinion of students, offer demonstrable value for money. Students’ views on value for money are an important reminder that a focus on what The Establishment deems worthy of measurement may not provide the information that is felt to be relevant and valuable to either current students or prospective students seeking to make an informed choice about their future course and HEI.

Arguably, learning gain should not just consider student outcomes: it can far more usefully be deployed to elucidate the teaching and learning process itself – to drive pedagogical enhancement and contribute to improving the teaching quality that students tell us they value. We could refocus towards exploring how curriculum design and high-impact pedagogies may affect students differentially, and which approaches to learning are most appropriate in specific contexts. In other words, we should focus primarily on providing optimum learning opportunities through adaptive pedagogies for every student, rather than simply seeking to measure learning outcomes.

In theory at least, these aims are not incompatible. Measuring learning gain has the potential to assess a whole range of aspects of teaching and learning quality, if designed from the outset to do so. In her HEFCE/OfS projects report, Camille Kandiko Howson identifies nine levels of use for learning gain metrics – ranging from individual, prospective and current students, to teaching enhancement, institutional management and strategic enhancement, to employers and government. In each case, the purpose for which the measure would be used implies a different focus or weighting for the range of data to be included, but that does not mean entirely different data sets or measurement methods must be employed. What we can’t do is design a measure with one purpose in mind and then retrospectively expect it to be suitable for another, or we’ll just end up with another set of bad proxies.

So while it seems that the projects will turn out not to be quite all they were cracked up to be, we do at least have a clearer understanding of the complexities of the challenge.  And although the nature of the gains being investigated and the reasons for undertaking the measurement vary widely, at the most general level we seem also to have a degree of consensus that measuring learning gain(s) is not an entirely crackpot idea. We’re still a long way from the end of the learning gain rainbow, but at least we’re clearer about the pitfalls we need to avoid as we continue our quest.
