
End of the peer show? Examining external examiners

Sue Rivers looks at where the external examiner system could be improved to maintain public confidence in higher education standards without compromising institutional autonomy.

Sue Rivers is an independent higher education consultant and a Principal Fellow of the Higher Education Academy.

Some years ago, I caused a stir when I first sought to play my part in the external examiner system. I advertised my availability, qualifications and motivation to take on the role on the Jisc external examiners’ email list.

According to ‘Disgusted of Poppleton’, who did not hesitate to select the ‘reply to the entire Universe’ option, I had besmirched the list by using it as if it were a type of Tinder for academics! However, as even one critic admitted, there is a difference between individual external examiners (EEs), who want to do their bit to enhance the student learning experience, and the system itself.

The external examiner system was once hailed as a “guardian of the reputation of UK higher education” by the DfES, but it was also criticised by a House of Commons Committee in 2009. There are two main areas of criticism: professionalism and guarding standards. Given HEFCE’s recent proposals for reforming the EE system, I also want to give an update on the Higher Education Academy’s project on the EE system and its pilot training for EEs.

Professionalism

If the role of EE ought to be professionalised, then the starting point must be admission to the profession. The advent of the Jisc email list (as it operates today) is a positive step towards more open recruitment. Despite this, the selection of EEs has been criticised for lack of transparency, such as appointing ‘people you know’ (recently dubbed a ‘chumocracy’), and for a strong tendency for those in charge of appointing EEs to do so from institutions like their own, in similar parts of the sector.

An Indicator in the UK Quality Code creates a person specification for EEs. An induction for new EEs is now compulsory in many higher education institutions. However, some inductions concentrate on the institution’s quality procedures and exam board systems rather than on the substance of the role itself. In terms of professional standards, individual EEs have varying experience of comparator institutions, which may affect (read: probably does affect) the accuracy and consistency of their judgements. There is an expectation in some institutions that staff will take on EE roles, but there is a lack of appropriate professional recognition and reward.

The Quality Code indicates that EEs should normally hold no more than two EE appointments for taught programmes/modules. This may prevent EEs from accessing a wider comparator group and inadvertently discriminates against highly experienced people willing and able to take on more than two EE roles. For the individual, the role provides exposure to diverse practices and is generally seen as positive for personal development and career prospects. In some institutions, being an EE is considered mandatory for performance review and promotion, yet it is not always easy to obtain an EE post, so some people are disadvantaged as a result. A recent advertisement for an EE post on the Jisc list in which prior EE experience was mandatory led some to ask: how are those without EE experience ever to get it?

Guarding standards

Each institution with degree-awarding powers is responsible for setting the standards for its awards, and ensuring that its graduates achieve those standards. According to the Quality Code, external examining is one of the principal means for maintaining academic standards within autonomous higher education providers. When I eventually did become an EE (by replying to an advert on the Jisc list), I was presented with a heap of cardboard boxes containing exam scripts to look at. One of the ‘clear fail’ scripts caught my attention immediately. It consisted entirely of one short paragraph of supremely spidery writing accompanied by three very large stains. Each stain was scrupulously circled in a different colour. The circles were labelled, respectively: ‘blood’, ‘sweat’ and ‘tears’! I was pleased to confirm that this had indeed failed to address the question at hand.

One of the advantages of the EE system is that, as an external quality process, it is uniquely concerned with academic standards (measured by the output of student achievement) as well as quality standards (measured by input and focussed on other aspects of the assessment cycle). This is important because it is possible to have high quality inputs without this necessarily leading to good academic standards. However, one of the questions about the EE system is whether it is fit to do the job of maintaining academic standards. Students may not understand the EE role and, outside the sector, there may be an assumption in some quarters that EEs are the sole guardians of higher education standards. Any perceived weaknesses in the EE system could, therefore, impact strongly on public confidence in universities and higher education more broadly.

The Quality Code refers to ‘threshold standards’ (the minimum acceptable level of achievement that a student has to demonstrate to be eligible for an academic award), but there is a public (and even a sector) expectation that EEs should judge the quality and standards of a programme at one institution compared with ‘national’ standards across the sector. This is despite the fact that institutions often expect EEs to make comparative judgements based on their own experience of similar programmes, rather than on a wider basis.

An EE’s capacity to act as the ‘guardian’ of standards, for example in preventing grade inflation, is affected by the limits of their remit; they may have little power to safeguard programme-level award standards where institutional algorithms are applied to determine students’ degree classifications. EEs themselves have commented that their role appears to be changing from ‘additional marker’ to ‘moderator’: they now look at samples of student work rather than all of the student assessment, and generally do not change individual marks.
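To make concrete what such an institutional algorithm can look like, here is a minimal, purely illustrative sketch in Python. The 30:70 year weighting, the class boundaries and the borderline uplift rule are all assumptions invented for this example, not any institution’s actual regulations; the point is simply that small changes to such rules can shift a student’s degree class without any individual mark changing, which is precisely the territory that often lies outside an EE’s remit.

```python
# Purely illustrative sketch of the kind of classification algorithm referred to above.
# The 30:70 year weighting, the class boundaries and the borderline uplift rule are
# hypothetical assumptions for this example, not any institution's actual regulations.

def band(mark: float) -> str:
    """Map an overall mark (out of 100) to a degree class."""
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "Upper second"
    if mark >= 50:
        return "Lower second"
    if mark >= 40:
        return "Third"
    return "Fail"

def classify(year2_avg: float, year3_avg: float) -> str:
    """Combine weighted year averages and apply a hypothetical borderline rule."""
    overall = 0.3 * year2_avg + 0.7 * year3_avg  # assumed 30:70 weighting
    result = band(overall)
    # Assumed borderline rule: within 2 marks of a boundary, a final-year average
    # at or above that boundary lifts the classification.
    for boundary in (70, 60, 50, 40):
        if boundary - 2 <= overall < boundary and year3_avg >= boundary:
            result = band(boundary)
            break
    return result

print(classify(year2_avg=62.0, year3_avg=71.0))  # "First" under these assumed rules
```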

The EE system should, in theory, provide many rich examples of good practice which EEs can take back to their home institutions to enhance students’ learning. However, a lack of clear systems and institutional imperatives for reporting and implementing this probably acts as an impediment to bridging the gap between theory and practice.

The HEFCE Review and the HEA Pilot

A HEFCE Review in 2015 concluded that the EE system should be retained but strengthened, and the role professionalised, so that EEs would be better able to provide reliable judgements about the standards set by institutions and measure student achievement against them. HEFCE favoured establishing a mechanism by which subject examiners could compare students’ work and judge student achievement against the standards set, to improve comparability and consistency. The review also called for research into classification algorithms, to determine a sensible range of ‘approved’ algorithms according to the desired outcomes.

Consequently, the HEA was awarded a contract by HEFCE (on behalf of the devolved administrations) to lead a project on degree standards which has two linked aspects:

  • Working with higher education providers to design and pilot generic professional development for external examiners, and
  • Exploring different calibration exercises with subject associations and Professional, Statutory and Regulatory Bodies (PSRBs).

The pilot EE Training Programme aims to ensure that EEs understand their guardianship role in national standards, to increase their understanding of calibration, and to enhance their knowledge of assessment using key reference points. Participants (aspiring, new and experienced EEs) assess samples of anonymised student work in workshops and share their experiences of this process, together with their views on other authentic issues and scenarios, although there is no formal assessment. It will be interesting to see how consistent their marking is and what can be learned from the pilot about institutional approaches to marking, for example, differences in applying deductions for poor writing, spelling, and grammar.

A way forward

If the EE role is to be professionalised, there must be a single, transparent recruitment process, so that entry to the profession is based on merit. There must be appropriate and consistent reward and recognition for the EE role, including published national fee scales which take into account the volume, level and complexity of the work undertaken.

The advent of an EE training programme is to be welcomed if it ensures that key principles of the role are understood by all EEs across the sector. If the HEA programme is to be adopted beyond the pilot phase, it must be owned by the sector as a whole, regardless of whether the onus remains on the home institution to train its staff to serve as EEs elsewhere.

A key output of the project should therefore be a sector-owned process of external examining staffed by professionals. The idea of training in subject-based calibration of standards is critical, and it raises important questions about the reliability of calibration and about how long such training remains valid, paving the way for EE professional development. At the moment we have institution-level regulations, and subject calibration may well reveal a need for subject-level regulations. On the other hand, this raises the risk that discipline-based calibration will emphasise disparities between disciplines.

If the external examiner system were to end, one possible replacement would be a Grade Point Average scheme in which individual deans or professors determine final marks. That might result in grade inflation, as it has in the USA, which would surely threaten confidence in the credibility of UK higher education. Alternatively, if a national body were established to provide independent verification of standards, this would not square well with the notion of institutional autonomy.

So while the external examiner system may not be perfect, it may well be better to mend the peer show than to end it.

7 responses to “End of the peer show? Examining external examiners”

  1. Thank you for this, Sue, always interested in anything on this topic! Some thoughts…

    “…and for a strong tendency for those in charge of appointing EEs to do so from institutions like their own, in similar parts of the sector.” Is it that way around? Or do potential examiners apply to institutions ‘like their own’?

    “However, one of the questions about the EE system is whether it is fit to do the job of maintaining academic standards.” Or, for HEFCE/HEA, is the question around the job of maintaining comparability of standards?

    “…to determine a sensible range of ‘approved’ algorithms according to the desired outcomes.” I think the work UUK/GuildHE is undertaking is more about description rather than prescription? I don’t think HEFCE requested algorithms be approved? That’d conflict with the idea of institutional autonomy, I think.

    “If the EE role is to be professionalised, there must be a single, transparent recruitment process, so that entry to the profession is based on merit.” This sounds something like the Register idea, Silver et al proposed this in 1995, in The External Examiner System: Possible Futures. Then Dearing, in 1997. Then the 2009 Select Committee. And then the 2011 Finch Review. And probably some I have missed – but, is this really what the sector wants? Or needs?

    “…The pilot EE Training Programme aims to ensure that EEs understand their guardianship role in national standards…” – I thought the HEA Pilot Course for External Examiners was excellent but it does need combining with institution-specific induction – Session 2 on variability in academic standards required participants to state if we agreed with the ‘fail’ or not. I wasn’t the only participant to point out that it might have been a fail on a 50% pass-mark but wasn’t for those of us with a 40% pass-mark on L7 provision!

    In 1996, Barnett noted that “… we have to doubt that the external examining system ever fulfilled the responsibilities placed on it. It appears likely that the idea was always a fiction; we just did not recognise it as such.” I think it’s more about amending the system (rather than mending)…

  2. Harvard, Yale and Stanford do just fine without external examiners. A largely redundant system.
