Ok, it was the quiet news period between Christmas and New Year, but who had money on subject benchmark statements hitting the national headlines again?
This time it was the Telegraph rather than the Daily Mail, but again it was the draft benchmark statement for maths that led to the flurry of controversy.
According to the Telegraph, QAA “advises universities on course standards”, which is closer to reality than much of the earlier coverage about benchmarks. So the Telegraph article is slightly more nuanced than those from earlier in the year (I’m sure taking its prompt from Vicki Stott’s and David Kernohan’s Wonkhe articles on what the status and standing of QAA benchmarks actually are).
Still, it did set me wondering about the continuing currency of the mythology about benchmarks. And what interested me wasn’t that the national press doesn’t understand the nuances of OfS regulation, institutional autonomy and QAA guidance. It was that the press is writing up the views of colleagues from within the sector about the benchmarks.
How benchmarks are really used
I completely agree with Vicki’s and David’s descriptions of what the benchmark statements are and aren’t, and how they should be used. And about the significant value of embedding inclusive education in the new generation of benchmark statements, which Ailsa Crum has highlighted on Wonkhe. I’ve also seen time and again constructive engagement with the benchmarks as part of programme design processes, to the benefit of programmes and their students.
But I don’t think the criticisms from colleagues in universities (seen in the articles linked above, and in the comments on them) are all being made in bad faith. So where do they come from?
Sometimes we underestimate the half-life of things within higher education. Of course subject benchmark statements are reference points to support effective programme design. That isn’t how they started life though. When they were being developed in the late 1990s they were being badged (to quote Paul Greatrix’s book on the First Quality War, Dangerous Medicine) as “broadly prescriptive”.
Of course QAA quickly (and rightly) moved away from this, clearly stating from around 2001 that benchmarks were reference points. However, some colleagues perhaps have long memories. Others don’t, but it’s still possible that the idea of benchmarks as requirements has become embedded in some institutional and/or departmental cultures. So while it seems a long time ago, there’s perhaps a bit of this still in play.
And there’s also how universities have treated benchmarks. In the time I’ve worked in higher education I’ve not been aware of any universities that treated subject benchmarks as setting out requirements that must be complied with. But I have seen instances where the use of benchmarks in programme design has almost shaded into a “comply or explain why you don’t” approach. Of course some colleagues might interpret this as an implicit requirement to comply. More likely though is that academics under huge workload pressures see a comply or explain approach (or something they feel looks like this), and think that just complying is the most effective use of incredibly scarce time.
We also need to think about how subject benchmarks get used within academic departments. I’ve seen an instance where a head of department blatantly distorted the status of the benchmarks to try to impose an approach to programmes on a department, against legitimate questions and concerns from colleagues.
Political documents
This highlights something that we don’t always acknowledge enough. Whatever the formal status of benchmarks in practice they can be political documents, used in ways that were not intended. And this isn’t always the “bad” institution misbehaving towards academic departments and colleagues. I’ve seen two examples where departments have used subject benchmarks as a specious justification for curriculum overhauls, which in reality were primarily about freeing up time for staff research by reducing contact hours and student module choice.
Now, I know that this doesn’t reflect the reality of what subject benchmarks are intended to be, or what they are, in many institutions. And I agree with other writers that, if used properly, subject benchmarks are potentially even more valuable now to support meeting the new OfS B conditions. But it might suggest where at least some of the current criticism of benchmarks, from some colleagues in the sector, may be coming from.
And the misunderstandings might be more widespread than we think. In seven years delivering development sessions for new programme directors, I always emphasised the status (as well as the value) of benchmark statements as reference points, not requirements. It frequently struck me how many eyebrows were raised when I said this. All of which reinforces the importance of those of us supporting the development, review and improvement of programmes continuing to emphasise what benchmarks are and what they aren’t, and the value they can bring when properly used. I also wonder, though, if it sheds a little light on one of the more perplexing elements of the new OfS B conditions.
Many of us in the sector have been surprised at the strength of criticism from OfS towards the UK Quality Code and other established quality and standards reference points (the injunctions to abandon many aspects of established QA approaches have at times felt like a re-run of the parable of the scorpion and the frog). Many also scratched their heads when OfS’s response to the consultation on the new B conditions included the claim that “providers should note that there are likely to be some parts of the Code which would lead to practices that we would consider non-compliant with our regulatory requirements”.
Perhaps at least part of the answer lies in the multiple and imperfect ways in which the existing, established quality and standards requirements (still in place of course in three of the four constituent jurisdictions of the UK) such as subject benchmarks have been understood.
Apparently, compliance with QAA benchmarks is written into some (and possibly most) university policies, promises to students, etc. So any time they are changed, the university has to scramble to make sure the benchmarks are still complied with in order to avoid liability elsewhere in the system (e.g., the liability of not providing students with the level of service already promised to them).
That’s interesting, Erin. For most benchmark statements there isn’t a compliance requirement (there’s a small number of exceptions where a benchmark statement is *also* a professional body’s requirements for accreditation, so that the benchmark is a requirement for an accredited programme). So at the universities I’ve worked at there was no expectation or requirement of compliance with a benchmark statement. The expectation was that a programme team would have used them as a reference point (i.e. be aware of them, reflect on them, take account of them as they judged appropriate), but not as a compliance tool.
Agree Richard – and as I always say to colleagues in universities, we make the policies – so if they don’t work, we can change them!
I think what’s at the bottom of this can be explained by my favourite sociology quotation, which has really stood the test of time: ‘If men [sic] define situations as real, they are real in their consequences’ (Thomas & Thomas, 1928).
So put simply, the interpretation of a situation leads to perceptions, actions and new realities. We can intend for things (including our policies) to lead to desired outcomes, but situational interpretation will always impact on this. Culture eats strategy for breakfast, as they say!
I just got this information from administrators on the ground in universities and thought I should pass it on, because it seemed like a very plausible (and reasonable) explanation for why some people are getting so upset by the changes. (I haven’t actually read any policies that referenced the QAA benchmarks, myself, however.)
More generally, if this is indeed an issue, it might make sense in future for QAA to release finalised benchmark statements 2–3 years before they are officially adopted, so that universities have more time to adjust to them? (If they don’t already do that, that is.)
But there is no external requirement to comply with them. If there are universities which have made compliance a mandatory part of their student contract (which would be … odd, from my perspective, as someone who has written large parts of one HEI’s current student contract), then the only solution would be for the universities which have chosen to do that, and who wish to continue to do that, to write a 2-3 year grace period into their contract, thereby at least partially addressing the problem which they have created.
Alternatively, I’d agree with the article and Liz. I provide advice on education regulations; 90% of that part of my job when explaining the regulations is dispelling institutional myths – the answer to the question of “Why can’t I do X” is normally “But you can” (though sometimes it isn’t a great idea, or will have consequences, or most frequently there are likely better ways of achieving the desired end point).
The question, Richard, is: what are we benchmarking against? And what dimensions and contexts influence the experiences and outcomes? While the QAA expects benchmarking against its published subject statements, developed with experts in the fields, as a starting point, the code also expects benchmarking against external expertise – external examiners and programmes in comparator institutions: ‘who are we comparing ourselves to’, apple to apple. Still, there is also our vision of ‘who do we aim to be like?’ When talking to institutions or exploring their offer, some seem to lend themselves to knowledge exchange and some don’t. And not to be overlooked: what are we benchmarking against? There is a growing tendency to use industry as the trailblazer for total quality management and innovation, while at some points in history it has been the opposite.
I think this is a really important point, Richard. There’s an awful lot of myth surrounding quality regulation, some of which I think mischievous quality professionals may have encouraged back in the day in an attempt to corral academic staff. It was never a good idea, and this is the consequence! Things change, and quality professionals change, but the myth lives on. I wonder how we can frame this debate better in future… sounds like a smashing topic for QSN…
I was once asked by a colleague how often QAA required us to undertake annual review. After a pause, my response was that what anyone external thought wasn’t the issue. It was just effective academic/professional practice for a programme team/department to pause, reflect on what had gone well, what had gone less well and what they wanted to do better in respect of their programme(s), and that was what our monitoring process (which was annual) was and should be about. It’s harder, but ultimately more productive in the long run, to make the case for these kinds of activities from first principles rather than rely on an external requirement.