Phase Two of OfS’ National Student Survey review has now been published, with a chance for the sector to feed back.
Elsewhere on the site DK looks in detail at what’s in the review and what it all means. Here I’ve scratched up thirteen positive ideas for the future of the NSS, and would welcome more in the comments below. Some could even be taken up by the SFC in Scotland or the new CTER in Wales if OfS isn’t interested.
- Every year hundreds of thousands of free-text comments are collected that are never analysed nationally. This was noticed a couple of years ago, but despite promises in the OfS business plan we never got to see the big themes that emerged from all those typed words. Wouldn’t that help us understand why some of the scores are the way they are? Shouldn’t this become a standard annual publication?
- Last time OfS’ Director of External Relations Conor Ryan wrote for Wonkhe, he set out a powerful case for the inclusion of Postgraduate Taught students in the annual exercise. A pilot followed, only to disappear down the back of the sofa in Nicholson House; PGRs have never been mentioned, and this phase is silent on postgraduates generally. Surely it’s time for the National Student Survey to actually survey all students?
- You can also make a case for a national version of the survey aimed at incoming students, and another aimed at students who drop out. Why is there no discussion of these things?
- Despite the determination to have Q26 on students’ unions rewritten so that it makes more sense, it still wouldn’t make much sense to ask about something that doesn’t exist in a lot of providers. Given the diversity in “who does what”, isn’t it time we just had general questions on collective student representation and its effectiveness at both course and institutional level?
- On a related theme, the NSS has a narrow focus (outside of the supplementary bank) on academic aspects. But wider aspects of provision and the student experience, like sport, careers support or facilities, are also crucial: they are educational if not “academic”, they are within providers’ control, and they are all “sold” to students. It would be great to see the NSS widen its focus.
- This fascinating national analysis shows sector-level NSS results split by six student and course characteristics, alongside calculated benchmark values. Why, for example, isn’t it straightforward to work out the local version of the national gap on “staff are good at explaining things” for Black students?
- For some reason the analysis above does not include gaps between home and international students, despite (for example) international students facing a 10 per cent attainment gap. Don’t we need that produced? Wouldn’t it be useful to see it nationally, and surely it would be helpful to have it institutionally too in OfS’ proposed whizzy new tools?
- This phase offers no justification whatsoever for dropping a question on learning community (I feel part of a community of staff and students) despite its clear links to belonging and mental health. This should be kept, surely? And on mental health, if there is to be a focus on the academic experience, why on earth wouldn’t there be a question on whether the way the course is delivered makes mental health better or worse? Surely OfS understands the link between teaching and learning and wellbeing – or is it that it thinks mental health is somehow extracurricular?
- One of the opportunities a national survey offers, and one that is currently being lost, is establishing the prevalence of an issue both locally and nationally. For example, surely we need to know about the prevalence of harassment and sexual misconduct: it would be very useful locally, there may be important equality gaps to consider, and we’d know whether OfS strategies were working.
- If we’re dropping “neither agree nor disagree”, can we please get some easy and whizzy tools that show us disagree rates when the results dissemination site is rebuilt?
- Another major missed opportunity right now (and so a source of extra polling costs) is the potential use of the survey to test contemporary attitudes or opinions on a range of issues that we might not yet need to know about at provider level. Reading across from Gravity Assist, for example, we could ask a quarter of all respondents about student access to the right kit and space, with the other quarters asked about other important things.
- And why aren’t we asking students about value for money, given it’s an OfS aim? Why will OfS’ look at VFM continue to be a poll of a sample rather than something we can examine by provider and programme?
- One way of looking at the NSS is that it’s basically a sector-wide consensus statement on what makes a good (academic) student experience, something like a bill of academic rights. If that’s the case, shouldn’t we publish it to students at the start of their course as a set of expectations they should have, rather than just ask them about it at the end? Wouldn’t that help them understand what they can raise during their course? Wouldn’t it be a great international recruitment tool for UK HE? Shouldn’t it link to OfS’ “B” Quality and Standards definitions and the UK Quality Code? And shouldn’t we also take steps to ask students which of the elements are more, less or not at all important to them, so we can capture and understand the diversity of students and providers?
- The underpinning theory of the survey is that it can and should be used for enhancement. But don’t we need to know whether that’s true, and if so, how? The smart thing would be for NSS results to be published, and then for a period in which providers are invited to reflect on the results, consult with students on why the scores are the way they are, and produce a report and action plan based on them that includes an independent element from the SU, as in the TEF. OfS wouldn’t be checking the conclusions and actions themselves, just that the process was happening. And a national summary of those reports would tell a great story about how the sector responds to student input.
- In Norway the NSS equivalent asks whether students are attending the study programme of their first choice, which seems sensible. It also manages to categorise and separate the “student environment” from the course itself, which feels like the right way to do things. It asks about relationships with staff and other students, its questions on students’ own motivation are fascinating, its questions on workload have been working well for a few years now, and asking about skills is par for the course. In fact, can’t we just steal all of these questions?
Why don’t we have a measure that looks at attendance and feedback? We value student responses equally, regardless of how well engaged a student has been in their own learning. So, for example, I would argue that feedback from a student who has attended 90 per cent of their course is more valuable than that from a student who only managed 50 per cent.
This is an interesting suggestion… I may be a bit disconnected from the minutiae of the NSS (because I’m teaching overseas), but there seem to be so many variables in this exercise that might disqualify the published results. How about the volume of students who responded (from, say, the Open University, with the biggest student registration, or the smallest, such as the University of Buckingham)? Both are reputable, by any standard, and both have taken first place in the years since the Survey began.
What makes student responses authentic as well as truthful? Such a survey can easily become subjective, manipulated or simply unreliable. It might be a more valuable NSS if it focused entirely on teaching and learning. That is not to say other attributes and provisions, such as sport, student support or career progression, are unimportant, but universities exist to do teaching and research, and to develop independent learners. Those are the simple areas that should be assessed.