Ever since the introduction of the National Student Survey in 2005, the sector has been on the hunt for explanations, justifications, reasons or just plain excuses for poor performance.
In 2005, Nancy Rothwell, then acting VC at Manchester University, blamed how bright her students were – “students who go to universities like ours are more likely to be grade-A students with very high expectations”, she said. Rodney Eastwood, then director of policy and planning at Imperial, thought his students were bound to be critical – “our students are a questioning and demanding bunch, we would not expect uncritical feedback”. And Brian Roper, then VC of London Metropolitan University, thought it was about the Tube. “Things are different in London… building team spirit is very difficult”.
In particular, the hunt has been most intense over poor performance on “assessment and feedback” – especially now it’s a metric in TEF.
In 2007, Michael Arthur, then chair of the NSS steering group and vice-chancellor of Leeds University, thought it was about schools – “there is quite a significant difference between the type of feedback and assessment that occurs through early life and that at university”. David Eastwood, then chief executive of HEFCE, placed the blame on students – “the onus should be on [students] to go and get the feedback they need”. The message for fourteen years has been consistent – either the survey’s faulty, or student expectations are too high. Am I so out of touch? No, it’s the children who are wrong.
The frame game
Over the years though, the framing of this student blame game has shifted. Pre-NSS, student gripes about their assessment and feedback were dismissed as “nuisance” or “naïve” – students as juniors, academics as masters. Then they were framed as “consumerism” – students “demanding” “easy” assessment in exchange for their fees when they ought to be “partners” in their education. And now – when two of the top five national “active dissatisfaction” scores in NSS are still “criteria used in marking have been clear in advance” and “feedback on my work has been timely” – students are framed as snowflakes. They’re too “needy and demanding”, as the Times put it recently.
Us “Mickey Mouse” media studies grads know all too well that outside of the big stories, most media coverage isn’t really about “news” at all – it’s about finding frames that generate the clicks, and then finding (often scraps of) evidence to fit those frames. That Times article (and follow-ups like this) lifted a section of a compendium of comments from academics on student snowflakery, quoting a former chair of politics at Edinburgh University, who said that students in Mexico were “happier and demanded less feedback”.
“Take feedback”, he says. “Students want more of it, are unhappy with what they get, and seem to want to know – as if they’re baking a cake – exactly what steps they need to take to get a great result. When I was … at the University of Edinburgh, we worked hard to provide better guidance. I wrote blogs on the how and why of feedback and we met, with and without students, to try to improve its design. We tried to emulate best practice from elsewhere. Yet, each year, we sunk further in the feedback rankings”. So why might his students in Mexico be happier? “Perhaps they are simply a generation behind the UK. Perhaps they are culturally more reluctant to question authority. Or maybe they’re just happy”.
Complicated questions
One thing the sector’s not been short of over the years is explanations. I’ve read endless reports, papers and theories on NSS – and in particular on “assessment and feedback” – and he might be right.
Some studies suggest that written feedback as a “product” works less well than more personalised and human exchange/discussion – but who has the time to scale up those intensive pilots? Some studies show that inconsistency in assessment design, criteria presentation, feedback delivery and timeliness generates dissatisfaction – but how do you deliver that kind of consistency across disciplines? Some trace everything back to assessment design and variations in assessment element weighting – but what are universities supposed to do, moderate the setting of assessment?
Some have looked at feedback delivery, some at subject choice, and some at academic background – “rote learners” are bound to find university assessment tricky. And some studies have looked at trade-offs within module choice and degree algorithms once students become aware of them – but what one student finds “easy” can’t be standardised. Can it?
The clue is in the question
All of those things are probably true. But there might be simpler explanations. Maybe even now the criteria used in marking aren’t clear in advance to students. Maybe students don’t have confidence – for all sorts of good reasons – that their marking and assessment has been fair. It could be that despite the setting of targets for turnaround, feedback on their work hasn’t been timely. And maybe the comments they get do fall outside any reasonable definition of “helpful”.
Conversations with SU officers all summer suggest that the answers might be right under our noses. Perhaps universities could take deliberate steps to ensure – and I don’t mean burying a PDF on Blackboard – that marking criteria are explicit. I’m sure universities are strenuous in their efforts to ensure that marking and assessment has been fair, but they could actually explain to students how that’s done. They could – like the train operating companies do – publish performance on turnaround time, and process-map where that’s going wrong. And just as universities do with students’ work, they could moderate academics’ feedback for its helpfulness, rather than just defining that helpfulness in blogs that no-one reads and policies that no-one’s seen.
I’m not saying any of this is easy. But if it’s true that academics don’t really have the time to mark work and produce helpful feedback, SUs could pressure uni leaders to work extra hard on workload modelling to fix the issue. If it’s the case that certain sorts of students are used to merely recalling facts, SUs could pressure providers to put in place proper support for the students they’ve been keen to recruit. And SUs should keep an eye on class sizes, and on departments that have ballooned in size without the facilities or academics to cope.
But most importantly, universities could start by assuming that UK students are indeed becoming more assertive, be proud of it, and work with it. However convenient it used to be, passive deference to academic authority is probably not a graduate attribute anyone’s keen to instil in 2019.