Assessment and feedback is one of the toughest NSS nuts to crack.
Providers have tried all kinds of approaches to address this lingering issue – everything from assessment design to feedback policies.
I wondered whether expert, experienced academics might be part of the answer. My thinking was that a higher level of subject knowledge, and greater experience of assessing undergraduates, would help students get the best possible feedback on their assessed work.
What I found was the opposite.
Counterintuitive
At a provider level there is a small, but notable, positive correlation between the proportion of academic staff on contracts linked to spine point 40 or less, and student satisfaction with the quality of assessment and feedback.
To put that more plainly, the more of your teaching academic staff who earn less than £48,841, the happier your students are with assessment and feedback.
To be clear, I would emphatically not recommend that vice chancellors and senior leaders determine their human resources strategies on the basis of this finding. This is a very mild correlation (R² = 0.27, p < 0.0001) and there are a lot of other things going on under the hood.
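For anyone who wants to see the basic arithmetic, here is a minimal sketch of the provider-level correlation described above. It assumes a hypothetical CSV with one row per provider; the file and column names are illustrative stand-ins, not the actual dataset behind the visualisation.

```python
# Minimal sketch of a provider-level correlation between the spine point
# measure and NSS assessment and feedback positivity.
# The CSV file and column names below are hypothetical.
import pandas as pd
from scipy.stats import linregress

providers = pd.read_csv("providers.csv")  # one (hypothetical) row per provider

# x: proportion of teaching academic staff on contracts at spine point 40 or below
# y: provider-level positivity on the assessment and feedback questions
fit = linregress(
    providers["prop_staff_at_or_below_sp40"],
    providers["nss_assessment_feedback"],
)

print(f"r = {fit.rvalue:.2f}, R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.2g}")
```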
For instance, being a large Russell Group provider is no guarantee of good assessment and feedback – a long-standing finding that spans many areas of the NSS. Larger, more established providers are more likely to have staff on higher-value contracts, as are providers in London (although, here, there is no impact on student perceptions of assessment and feedback).
It is also more than likely that the mix of subjects within a provider, and the backgrounds of students at a provider, would have an impact on perceptions of assessment and feedback.
However
Across all 27 main questions there are no instances where we see even a mildly significant (p < 0.0001) detrimental impact on the student experience from a high proportion of staff on contracts under spine point 40. The closest we come is very weak evidence of a trend for:
- Q3: How often is the course intellectually stimulating?
- Q20: How well have library resources supported your learning?
Neither of these meets even a very loosely drawn significance threshold.
In contrast, we see weak but statistically significant (p < 0.0001) correlations in the other direction in a few areas outside of assessment and feedback (a sketch of this question-by-question screening follows the list):
- Q2: How often do teaching staff make the subject engaging?
- Q23: To what extent are students’ opinions about the course valued by staff?
- Q24: How clear is it that student feedback about the course is acted on?
- Q27: During your studies, how free did you feel to express your ideas, opinions, and beliefs?
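For concreteness, this is one way that screening step could look in Python, assuming a hypothetical provider-level dataframe with one column per question (q1 to q27). None of these names come from the actual dataset; it is a sketch of the approach, not the analysis itself.

```python
# Correlate the spine point measure with each of the 27 main NSS questions
# and flag which relationships pass a strict threshold.
# The dataframe and q1...q27 column names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

providers = pd.read_csv("providers.csv")  # hypothetical provider-level file
ALPHA = 0.0001  # the strict threshold used informally in the text

for i in range(1, 28):
    r, p = pearsonr(providers["prop_staff_at_or_below_sp40"], providers[f"q{i}"])
    direction = "positive" if r > 0 else "negative"
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"Q{i}: r = {r:.2f} ({direction}), p = {p:.2g} -> {verdict}")
```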
Sex
When I mentioned these findings to a now retired academic, she suggested an aspect I hadn't initially considered – female academics are generally considered better at assessment and feedback than their male colleagues. And because there is (still!) a correlation between sex and salary, this could be distorting the findings.
First up, here’s the correlation between sex and salary (by default this is set to look at the whole sector, but you can see any individual provider).
It generally holds that women dominate the lower salary bands while men are more likely to be paid more – and the imbalance becomes very pronounced once we move past spine point 40, into the territory where readers and professors are found.
But when we do a similar plot to the one above (proportion of female academics – all salary bands, teaching-related roles) we do not see the same level of significance for assessment and feedback. The only very mild, and unspectacular, correlation is with Q20 (on library resources).
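If you want to probe this kind of confounding directly, one simple approach is to put both provider-level measures into the same regression and see whether the spine point effect survives once sex mix is accounted for. A minimal sketch, again with hypothetical column names:

```python
# Provider-level regression with both the spine point measure and the
# proportion of female academics as predictors.
# Column names are assumptions, not the real dataset.
import pandas as pd
import statsmodels.formula.api as smf

providers = pd.read_csv("providers.csv")  # hypothetical provider-level file

model = smf.ols(
    "nss_assessment_feedback ~ prop_staff_at_or_below_sp40 + prop_female_academics",
    data=providers,
).fit()

print(model.summary())  # inspect the coefficient and p-value on each term
```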
Efficiency
Spine point 40, as I say, was £48,841 for this year of data. It’s comfortably above the national median salary (which was about £39,000), but for someone who has done a PhD and has a great deal of teaching and research experience it isn’t really very much. We are looking here at people who are unlikely to be a professor or a research superstar – at course leaders, assistant heads of department, principal lecturers.
If you are paid more than this, it is because you are quite good at parts of the academic role. University promotion criteria are notoriously stringent: academics are under a lot of pressure to publish, generate income, take on leadership roles, and perform well in the lecture theatre.
Given how central assessment and feedback is to higher education, it is fair to say that my finding is surprising. I was genuinely expecting more experienced and more talented (and thus, you might think, better paid) staff to make a difference to this part of the student experience. Clearly they do, but not in the direction I expected.
In any kind of settlement that addresses the crippling financial position of many higher education providers, there is likely to be a quid pro quo on efficiencies. The last thing I want to see is senior staff numbers falling as collateral damage. But if we can't demonstrate the benefit of better paid staff in student experience measures like the NSS, we have a problem.
Likely to be correlation but not causation? Feels like this is just showing what we already know, which is that teaching-focused universities are better at assessment and feedback (and often at teaching generally). Research-intensive universities perform worse on the A&F measure but have proportionately more staff on more expensive contracts.
It would be interesting to know to what extent the correlation maps onto contract type. E.g. at some (most?) universities a lot of the UG teaching is delivered by PhDs and those on precarious contracts, likely under the £48k spine point identified here.
There is a contract type filter in the top visualisation (“academic employment function”). By default I use teaching only plus teaching&research contracts, but you can use any combination of contracts.
Pretty much confirms my hypothesis from my student days 40 years ago: the higher academics climb up the greasy pole, the narrower and more constrained they become in their role and in their mindset, though there are exceptions. Above a certain point, concerns about being honest with students take second place to not offending them. Especially now, as older people (which those on such high pay grades usually, but not exclusively, are) are considered by many activists to be part of the pale, stale, male patriarchy, and are thus more vulnerable to pile-on campaigns should they utter an unpalatable truth to a student ‘customer’ who takes offence.
Any member of School-based Professional Services could have offered this hypothesis. New academics are easy to mould, and can be eager to please. They are used to a culture of needing to perform in their career to achieve promotion, and better understand the needs of today’s student since it’s not been so long since they were one themselves. If you provide solid, templated guidance on best practice assessment and feedback, it will be followed. The concern is with those who refuse to engage because ‘they’ve always done it that way’ and don’t see the impetus to change, as their focus is on their research and/or retirement. This lends itself to those on higher pay grades with oft-competing priorities. When I used to chase marking, it was always our Dean who was last remaining on the list…
The question needs a multilevel statistical analysis to get to an answer; like most surveys, the NSS has far too many sociological and cultural covariates that need to be factored into any analysis.
It does. I’d love to see someone with the education research chops do it properly.
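For illustration only, a multilevel model of roughly this shape is one way it could be done, if subject-level NSS results were available with provider as the grouping level. Every column name below is a hypothetical stand-in, not the real data.

```python
# Sketch of a multilevel (mixed effects) model: subject-level NSS scores
# nested within providers, with staff and subject covariates.
# All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

subjects = pd.read_csv("provider_subject_nss.csv")  # hypothetical subject-level file

model = smf.mixedlm(
    "assessment_feedback ~ prop_staff_at_or_below_sp40 + prop_female_academics + subject_area",
    data=subjects,
    groups=subjects["provider"],  # random intercept per provider
).fit()

print(model.summary())
```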