Early September is normally prime higher education conference season.
When the joint DfE and BEIS paper on ‘Reducing bureaucratic burden in research, innovation and higher education’ landed, it felt as though I had woken up after a late night spent ranting about the NSS at the conference bar, only to find that someone had secretly written it all down and released it as government policy.
The mandated “radical, root and branch review of the NSS” is the most eyebrow-raising measure in a bonfire of red tape and bureaucracy across higher education research and teaching. The tone and message of the NSS review mark a massive departure from recent policy, which had focused on expanding the survey to undergraduate students every year and developing a version for postgraduate students too.
The OfS is in charge of the review, and there is a question about how thorough it will be. Over time the NSS grew well beyond its original remit – informing prospective student choice – to take in enhancing the student academic experience within institutions and ensuring public accountability. Pruning will be no easy task.
Previous reviews
The NSS was launched in 2005, based on the Australian Course Experience Questionnaire (CEQ), developed by Paul Ramsden. The first review in 2010, led by Ramsden, was largely positive. Changes in the sector fast-tracked the next major review in 2014, again featuring Ramsden. This involved the collection of feedback from student and stakeholder surveys, interviews and focus groups, a thorough literature review, and subsequent cognitive testing and piloting of new questions for a revised survey launched in 2017.
It had a rocky landing, facing boycotts over its inclusion in the TEF and a (mooted) associated rise in tuition fees. The last review of the NSS showed it had varying effectiveness, but there was little appetite across the sector for radical change. Despite having few die-hard fans, the tenacity of the NSS was visible mainly in its stability and longevity – qualities hard to square with being “adapted and refined periodically to prevent gaming”. The sector had grown and developed around the NSS; like the uncle no one really likes but who still comes over for the holidays, you cannot imagine life without it.
Answering the essay question
The government publication sets out the parameters for the review. We know the NSS costs a lot to administer, particularly because of the extensive harassment and follow-up phone calls needed to reach sufficient response rates, but its budget has never been clearly reported by HEFCE or the OfS. Many institutions have several staff roles devoted solely to the NSS, as well as hiring consultancy services to boost scores, further adding to the costs.
Incentives are rife, and are used to a greater or lesser extent across courses and institutions, including cash vouchers for completion, expensive prize draws, and luring students into survey completion parties with pizza, beer and branded cupcakes. Many lecturers remind students that if they rate their course poorly they will devalue their degree. Some institutions focus efforts on promoting the survey to students in certain (highly satisfied) subjects. The last review highlighted a high degree of yea-saying, indicating questionable student engagement.
The bane of red tape is that it hinders innovation – a fair criticism of the NSS. Efforts such as the scaling up of the UK Engagement Survey, or the development of learning gain measures, could not overcome the positioning and power of the NSS. Within institutions, yes, you can design a challenging, rigorous and engaging course and achieve both high-quality learning and high satisfaction, but if you are short of time or resources it is much easier to settle for being entertaining and giving high grades. The NSS has also held back pedagogical innovation, because satisfaction measures experience against expectations: students often expect rote learning through lectures and standardised exams, and departures from this are usually punished with poorer NSS scores.
Widely used
Like an invasion of knotweed, the survey’s 15-year tenure has left it enmeshed in every aspect of the UK’s higher education accountability, quality assurance and performance landscape. It runs across the whole of the UK, although of course the OfS review is only relevant for England. It is a core metric in the TEF, on the Discover Uni website (yeah, me neither), in regulatory processes, and in institutional marketing. Within institutions, raising NSS scores is often a KPI for senior management, part of course review, or used in annual monitoring. Most institutions design internal course and module evaluation systems to provide an “early warning system” for the NSS.
The NSS is also a key feature of domestic league table rankings. The policy paper is critical of rankings, but it is debatable how much they impact student decision making, especially in the ways the DfE is concerned about. Nevertheless, the OfS is funding the data collection on which the commercial rankings industry relies.
The student voice
A few commentators have warned “be careful what you wish for”. The NSS has led to improvements in the sector, albeit with increasingly marginal gains. Outcomes data, on salary and skilled employment, are the favourites to fill the gap left by the NSS, though we know these are strongly associated with demographics and socio-economic status prior to entry rather than with the quality of the course. Without the NSS, the TEF may become another bog-standard outcomes-based funding exercise, akin to state-level efforts in the US and Canada.
The review, and the post-Covid world, may just be the forest fire the sector needs to give new ideas space to grow and allow the quality landscape to be reimagined. Losing the NSS as we know it will have a huge impact across the sector – but that does not mean it cannot be for the better.
The best part of the NSS has been giving students a voice in policy. But does the NSS represent what matters to students today? Or is there a need for more and better data about the value of the education students are receiving, the quality of what students are learning and what they have gained from higher education?
Surely the main obstacle to developing learning gain measures is methodological.
The problem with the NSS is (was…) that those filling it in had completed their studies and were aware that marking down the brand on their CV was not a rational thing to do.
By the end of the course expectations have been moderated by experiences, both good and bad.
Without also accounting for non-completion rates alongside NSS scores, the full picture was not visible.
But let’s face it. Measuring quality in HE is a challenge when even defining it is highly contested and all primary stakeholders have vested interests.
I’d describe myself as healthily sceptical of the NSS and its limitations but it has certainly served to raise the importance of teaching and learning (and student voice) within institutional settings. Some good has come of it; assessment and feedback practices are generally much improved from a decade ago.
There will inevitably be a replacement tool, probably a survey. The challenge for educational researchers will be to find a methodology that is credible and measures something meaningful. The developers of the CEQ and NSS claimed a relationship between ‘deep learning’ and positive student perceptions of their experience. Engagement surveys like NSSE look at the link between self-reported engagement and learning outcomes. Designing a new method will take considerable time, but regulators will almost certainly want to fill the data vacuum if NSS 2020 turns out to have been the last.
The learning gain pilot projects identified numerous ways to measure learning gain (by breaking the concept down and using multiple measures), but politically and practically the NSS was prioritised by institutions and the government.