Surveys are, by their very nature, samples.
A survey is limited in time and coverage – for me it offers a path into understanding that needs to proceed via further investigation (more detailed qualitative work, or analysis of administrative data) before it can inform policy.
There are a lot of surveys in higher education policy and these vary in coverage and utility. Some are used directly in regulation, others find their way into political debates that inform regulatory action. Still others drive (and reflect) various popular narratives about universities and how we understand them.
When we see survey results there is a tendency to see them as data, not as a limited set of responses. Surveys in regulation, and indeed surveys in the wider policy debate, need to be caveated heavily – often we don’t get that in anything other than technical documents read by few and understood by fewer.
Surveys in regulation
If you asked a room full of vice chancellors to name the surveys that kept them awake, you’d quickly get a consensus on the National Student Survey (NSS) and Graduate Outcomes as the big two.
The recently revamped NSS features in the teaching excellence framework (TEF), while Graduate Outcomes data also crops up in everything from B3 dashboards to the Proceed metric. These are population scale instruments used in regulation – but is either of them any good?
The NSS and Graduate Outcomes
Both of these are population scale surveys – in the sense that they sample a significant proportion of the eligible population (NSS response rates have dipped from the c75 per cent we saw pre-Covid but still hover around a very healthy 69 per cent, whereas Graduate Outcomes responses cover just over 50 per cent of UK domiciled graduates in each cohort). There is always the question of whether these large samples are representative – HESA has developed a very good data quality report on the latter, suggesting that the characteristics of the sample are taken very seriously, and noting that “there is no evidence of measurable non-response bias in the data.”
In contrast, we don’t really know anything about NSS response bias – save for the suppression of results for groups with response rates below 50 per cent, which improves group analyses but doesn’t really do anything for top level results – and we are still left with the fact that distance learners and part-time students appear to be under-represented.
The recent review of the NSS was sparked by a Department for Education suggestion that manipulation (“treating”, basically) and the dumbing down of course content were rife – though we never saw any evidence of this. Indeed, in 2022, OfS reported that it “did not identify any cases in which inappropriate influence was likely to have a material impact on students’ responses”, and the review of the NSS found that “the data does not provide evidence that the NSS causes grade inflation.” Though you don’t have to look very hard to find NSS critiques, quite a lot of them dissolve on examination.
With Graduate Outcomes the primary regulatory use is of destination data – and in a heavily abbreviated format. It’s worth going over how this works (there’s a rough sketch after the list):
- A graduate returns their current job title in a free-text field, around 18 months after graduation
- This is matched, by hand, to a standard occupational code (SOC) by a consultancy employed by HESA for this purpose.
- The SOCs themselves are coded at a very broad resolution into “graduate” and “non-graduate” jobs (a practice that is dubious for a number of reasons familiar to Wonkhe readers)
- OfS uses this as a simple yes/no response to “does this person have a graduate job” for B3 and TEF purposes – with many other destinations (travelling, caring, retirement) coded positively.
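To make the chain of abstraction concrete, here’s a minimal sketch of the shape of that pipeline. Everything in it is illustrative: the job titles, SOC codes and lookup table are invented for the example (the real coding is done by hand from free text), and the “major groups 1 to 3 count as graduate jobs” rule is a stand-in for the broad-brush classification described above, not OfS’ actual specification.

```python
# Illustrative sketch only - invented job titles, placeholder SOC codes, and a toy
# "graduate job" rule standing in for the much messier reality described above.

# Steps 1-2: a free-text job title is (in reality, manually) matched to a SOC code.
JOB_TITLE_TO_SOC = {
    "secondary school teacher": "2314",  # hypothetical code, for illustration
    "bar staff": "9263",                 # hypothetical code, for illustration
}

# Step 3: SOC codes are collapsed into a binary, here crudely treating
# major groups 1-3 as "graduate" occupations.
def is_graduate_occupation(soc_code: str) -> bool:
    return soc_code[0] in {"1", "2", "3"}

# Step 4: regulation sees only the yes/no answer.
def b3_style_outcome(free_text_job_title: str) -> bool:
    soc_code = JOB_TITLE_TO_SOC.get(free_text_job_title.strip().lower())
    return is_graduate_occupation(soc_code) if soc_code else False

print(b3_style_outcome("Secondary school teacher"))  # True
print(b3_style_outcome("Bar staff"))                 # False
```

The point of the sketch is how much detail disappears at each step – a rich free-text answer ends its life as a single yes or no.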
Graduate Outcomes also asks a number of what could be seen as more pertinent questions about the experience of graduates – on well-being, self-worth, and career satisfaction – and it is fair to argue that such information may be of value in this context. It’s also important to note that this is a very early career snapshot.
We can reasonably question whether we should be regulating based on outcomes or reports of the student experience – certainly the conception of outcomes currently employed in England is limited, and at odds with the way students themselves see the outcomes (or indeed the purpose) of higher education. Using a survey means we are dealing at best with averages built on qualitative data – a fair snapshot, but different in character from a more direct and specific student voice. Surveys are no substitute, but we should be glad that there is at least some (aggregate) student input.
HESA HE-BCI
The acknowledged masters of quantitative sector data did a survey? Yes – the higher education business and community interaction data collection included some survey elements – even free-text questions – until 2019-20. All of this is under review (Part A, the survey bit, is suspended in the interim), but it’s worth mentioning here because HE-BCI contributes to the Knowledge Exchange Framework (KEF) methodology, so – technically – it is a survey used in regulation.
Census data
I can’t close this section without thinking about the actual 2021 Census. The sector and governments often use Census data to understand where students live – but this iteration posed an issue. Many students were not living at their “term time address” during the Covid-19 restrictions, although they were asked to respond as if they were.
It’s not clear how much of an impact this has had – there were numerous mitigations in place, including direct communication (via providers and SUs), a specific student-focused media campaign, work with local authorities, surveys of individual halls, and comparisons with HESA and Home Office data.
This is an extreme example of a wider census issue – any survey is a snapshot of a single moment (as anyone who has ever attended a party on census night will know) and cannot be seen as representing “usual” situations or activity. There are some mitigations for this, but it’s a reason why census findings should only be a part of developing an understanding of the lives of individuals or groups.
Surveys in government
There are a couple of important surveys conducted by the government that turn up in policymaking and are worth noting.
Graduate Labour Market Survey
Graduate respondents to the ONS Labour Market Survey (LMS) see their responses contribute to the Graduate Labour Market Survey (GLMS). This offers aggregate data on the experiences and salaries of graduates – if you see broad figures about a salary premium for graduates bandied about, they may come from here. The LMS, and thus the GLMS, includes graduates (usually excluding those with postgraduate qualifications) aged between 16 and 64, though this is occasionally split to provide a “young graduate” sample aged 16-30. The LMS has a sample size of around 1,300 households – we are never told how many are used in the graduate variant.
Student Income and Expenditure Survey
Have you heard the one about the survey of 4,000 English students conducted in 2014-15 that still shapes policymaking about maintenance and student support? That’d be the Student Income and Expenditure Survey (SIES) – published in 2018 and still not updated (although presumably a version covering the golden years of the Theresa May administration will emerge at some point). So, for DfE at least, full-time students are spending £512 a year on direct course costs (books, computers, equipment) and £404 a year on travel. On average. Based on the 2,672 students who actually completed the expenditure diary. You know, we can probably do better than this.
Surveys in parliament
Though it isn’t a government survey, there is one annual publication that gets a lot of parliamentary attention.
HEPI/Advance HE Student Academic Experience Survey
Since 2006 a sample-based survey has been providing insight into student attitudes on the issues of the day. The Student Academic Experience Survey (SAES) is the only sector-level source of information on contact hours, and until very recently was the only way to understand student attitudes towards freedom of expression issues.
The sample is large (around 10,000), and though coverage is limited to full-time undergraduate students, a decent job is done of weighting the sample by gender, ethnicity, year of study, domicile, and type of school attended. We don’t know anything about non-response bias. The strength of the survey is its topicality – HEPI and Advance HE are able to add questions in response to new political issues. A weakness of this approach is that question design is hard to do well at speed, and I’m not sure to what extent cognitive testing (a process in which questions are shared with possible respondents to check they are understood as intended) is carried out each year.
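For readers unfamiliar with what “weighting the sample” involves, here is a minimal sketch of the simplest version – cell weighting on a single characteristic. The groups and percentages are invented for illustration; a survey like SAES weights on several characteristics at once, which in practice means a more involved procedure such as raking.

```python
# Illustrative cell weighting: invented sample counts and population shares.
from collections import Counter

# Hypothetical sample of 1,000 respondents, keyed by a single characteristic.
sample = ["female"] * 620 + ["male"] * 360 + ["other/unknown"] * 20

# Hypothetical population shares (in reality taken from something like HESA records).
population_share = {"female": 0.57, "male": 0.42, "other/unknown": 0.01}

n = len(sample)
sample_share = {group: count / n for group, count in Counter(sample).items()}

# Weight = population share / sample share: over-represented groups count for less,
# under-represented groups count for more.
weights = {group: population_share[group] / sample_share[group] for group in population_share}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")
```

Weighting rebalances the published aggregates, but it does nothing about the things we are not told – such as whether non-respondents within a group differ from respondents.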
The question design issue is best illustrated by the famous “value for money” question. OfS would love to be able to ask directly whether students think their courses are value for money, but it doesn’t, because there has never been a satisfactory single definition of what constitutes value for money. Questions that include definitions are generally considered clearer – without one, we can only speculate that students answer based on their own conception of value for money, which may mean that a student happy with their course would answer “no” because they feel fees of £9,250 a year are too high.
Specialist surveys
There are a number of very interesting agency-led surveys – Jisc’s Digital Experience Insights survey, Advance HE’s surveys of postgraduate taught and postgraduate research students, and the Advance HE student engagement survey.
Though these are all excellent instruments generating fascinating data, only students of providers that choose (indeed, pay) to participate in these annual surveys are included. This means we can’t be sure the data is representative of students across the whole sector, and we can’t make year-on-year comparisons. But these surveys (and similar standard surveys) can be very valuable for individual providers as a way of understanding – and ideally benchmarking (comparing their performance with similar providers) – their own activity in these areas.
There are other external instruments – and internally designed surveys – used in this way across the sector. They can be excellent (you often see them in research) but they lack a wider applicability when thinking about the sector.
Polling commissioned from professional pollsters
High quality commercial polling – carried out by reputable polling organisations and paid for by (and designed with the support of) people who understand the sector – is a great way of taking the temperature of higher education (and the public’s opinion of it) in a timely and robust way. These polls play a very important role in understanding public attitudes to sector issues, for instance on international students or freedom of speech. Good polling is expensive – rightly, it is a skilled job – but because of that you can be sure that the basics (sample composition, weighting, question design) are done well. That’s not to say these are always perfect – once in a while you do spot some howlers.
But surveys like this are a great way to get media coverage, especially where the findings either dovetail with or significantly deviate from one or more accepted narratives. The accepted wisdom of never asking a polling question where you don’t know the answer comes into play here – there have been a few examples of people struggling to stay in control of the narrative. We should also note that you often see smaller sample sizes (around 1,000), which means that you need to apply a larger margin of error (in this case around 3 per cent) to findings – 50 per cent of a survey could realistically mean anything between 47 and 53 per cent of a population.
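As a rough check on that figure, here is the standard back-of-the-envelope calculation for the margin of error of a simple random sample, at its widest (where the true proportion is around 50 per cent). Real polls use weighted panel samples, so the effective margin is usually a little wider than this idealised version.

```python
# 95 per cent margin of error for a proportion, assuming a simple random sample.
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval, in percentage points."""
    return 100 * z * sqrt(p * (1 - p) / n)

print(f"n = 1,000:  +/- {margin_of_error(1_000):.1f} points")   # roughly 3.1
print(f"n = 10,000: +/- {margin_of_error(10_000):.1f} points")  # roughly 1.0
```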
In the old days these were telephone surveys – these days the majority use online panels, where people have signed up to answer polls and are chosen semi-randomly to complete particular surveys (in return for a very small fee). If you are thinking that signing up for something like that is slightly weird, you have spotted the problem: the kind of people who do this are unlikely to be completely representative of the wider population – it is fair to suspect, for instance, that people signing up to online panels are more interested in current affairs than the majority.
“Secret” surveys
There are some polls that we never get to see in full but that have a huge impact on the sector.
OfS value for money
One example is OfS’ polling on value for money – though it features in KPM 9A, we never see the full instrument, the weighting and sample composition, or the data tables. OfS notes that the polling covers a subset of all undergraduate students (1,063), but we don’t know what measures have been taken to mitigate sample selection effects (for example weighting or targeting). We know the question asked was “Considering the costs and benefits of university, do you think it offers good value for money?”, but we don’t know whether any definitions or examples were provided, or whether the question was cognitively tested. We don’t even know the identity of the polling company involved (though we know this changed last year, meaning a possible change in methodology).
UCAS applicant survey
UCAS conducts an applicant survey alongside each cycle, and though we often get to see elements of this in UCAS publications there is never a full release. This may be for good reasons – it is sensitive personal data after all – but it also means that a potentially valuable dataset is not available to the sector. Many providers, and academics, carry out applicant surveys and pre-enrolment surveys (Michelle Morgan’s work is well known here) and this information can be used to meet student needs and expectations from day one. Though there may be issues with releasing the UCAS data, it represents a great lost treasure-trove of insight that could benefit students and providers.
Private surveys
There’s a whole range of surveys and survey-like objects that appear in Wonkhe’s inbox every day. Some are hugely interesting and offer valuable insights (to give a handful of examples: the WhatUni Student Choice stuff, polling by various organisations that work directly with students, and surveys of professional staff by their professional society), others are very poorly designed click-bait and can safely be assumed to tell us almost nothing (that horrible one about “sugar daddies” that comes round every year).
When we get these in we look for sensible sample design precautions, a good-sized sample, a well-designed survey instrument, and appropriate analysis. The bar is high, but justifiably so.
An honourable mention – Wonkhe Belong
Our marvellous Wonkhe SUs team has been working with Cibyl and a group of “pioneer” SUs on a new kind of monthly student survey that we formally launched at the Secret Life of Students – designed to help SUs and their universities not just know students’ opinions, but to understand and learn from their lives too. We hope to combine the rigour of Cibyl’s research and the analysis capacity of Wonkhe to provide a way for SUs and the sector to support and drive interventions in the student experience.
You may have spotted a few articles from my colleague Jim Dickinson drawing on this data already – expect more to come as the survey matures.
Survey questions
As we said at the top – surveys capture a point in time (repeated surveys – trackers – can present a series of these captures). Larger and more complex surveys take time to collect results, analyse, and present – this means we will often see data significantly after it is collected. We saw in the pandemic that survey results can quickly become out of date, and that contexts can shift rapidly. We often look at the date (or range of dates) fieldwork was carried out to understand what moment(s) in history we are seeing played out.
As a sector we under-use qualitative data. I’ve never seen, for instance, a national analysis of free text responses to the NSS, though it would be possible to publish something, with appropriate controls, on an aggregate basis. I don’t know whether this is because people who deal with these surveys are more comfortable with statistical methods than with corpus analysis, but there is an underused richness here that could offer a great deal of insight. I know the QAA have done work on qualitative methods in quality assurance in the past.
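To illustrate the sort of thing that is possible, here is a minimal sketch of an aggregate term-frequency pass over free-text comments. The comments, stop word list, and method are all invented for the example – a real national analysis would need proper anonymisation, disclosure control, and far more sophisticated corpus methods.

```python
# Illustrative aggregate analysis of free-text comments: invented data, crude method.
import re
from collections import Counter

comments = [
    "Feedback on assessments was slow but the teaching was excellent",
    "More contact hours would have helped, feedback was often late",
    "Excellent teaching, though timetabling was a mess",
]

# A tiny, hand-picked stop word list for the example.
STOPWORDS = {"the", "was", "but", "on", "a", "have", "would", "though", "often", "more"}

tokens = [
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in STOPWORDS
]

# The most frequent remaining terms give a (very crude) picture of the themes
# respondents raise unprompted.
print(Counter(tokens).most_common(5))
```

Even something this crude surfaces what respondents raise unprompted – which is exactly the insight that gets lost when free text is never analysed at sector level.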
But fundamentally – the survey (although a well understood and widely used methodology) is not the only way to understand the student voice – representation and student involvement are key, and are arguably less prevalent at a national policy level now than ever before.