Polling and opinion research play a large role in policy decision-making, and for good reason.
Policy-makers and organisations – including universities – would be flying blind if they had no insight into the views of the people who stand to benefit (or suffer) as a consequence of their decisions.
Most polling is well intentioned. Some polls are accidentally of poor quality. Others are deliberately skewed to present information that suits the intentions of the survey writer.
Being able to tell good polls from bad is particularly important when it comes to student polling.
Reading polls
In a national poll you’ll find efforts to quantify society-level attitudes, balancing the sample by various demographics to ensure that the 1,000 people who are surveyed look like the 60 million-ish adults in the country. In the education policy world, this national sample often shrinks to a more select group: students, parents, under-18s, teachers or recent graduates across the whole UK (or often, given education is devolved, just England).
Any sample constructed at the national level will contain very few students – in our nationally representative polls we typically find fewer than 50 students in a sample of 1,000, far too small for specific analysis. And the smaller or more specialised the sample, the greater the risks to the quality of the polling.
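To get a feel for why a subsample of 50 is too small to say much, it helps to look at the margin of error. Here is a rough back-of-the-envelope sketch in Python, using the standard formula for a proportion at 95 per cent confidence and assuming simple random sampling (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 per cent margin of error for a proportion, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare the full nationally representative sample with the student subsample inside it
for label, n in [("Full sample (n=1,000)", 1_000), ("Students only (n=50)", 50)]:
    print(f"{label}: +/- {margin_of_error(n) * 100:.0f} percentage points")

# Full sample (n=1,000): +/- 3 percentage points
# Students only (n=50): +/- 14 percentage points
```

A finding from 50 students could move by well over ten percentage points just through sampling noise – which is why student-specific polling needs a student-specific sample.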
With all this in mind, there are three main pitfalls to be mindful of if you want to be able to spot particularly poor polling in the wild.
Poor recruitment
All polling samples, to an extent, self-select. You can’t force someone to fill out a survey (much as some companies might like to), so every survey represents a group of people who wanted to answer a survey. This is particularly relevant for student polling – which has already cut the sample to a very small portion of UK/England society. Recruiting students therefore often requires an alternative approach to traditional national-level polling, which can make these self-selection biases more pronounced. When surveys are carried out through student networks, for example, you might end up with a sample that reflects those more engaged with the issues you are polling (maybe those who are more supportive, or those who are more opposed).
When assessing the results of any polling, it’s therefore important to check the demographic make-up of the whole sample. Does it lean heavily towards the South East and London? Or towards female students? Does it have appropriate representation of students from minority ethnic groups, or those from low-income backgrounds, or those with disabilities? If the sample is weighted – so that respondents from underrepresented groups are counted more than once to bring the representation in line with expectations – it is worth checking how many times people had to be counted to get a representative sample. Where available, it is worth reading up on the recruitment approach, to work out how close the sampling procedure comes to selecting students completely at random (which is what all representative polling should aim for).
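As a rough illustration of what weighting does, here is a minimal post-stratification sketch in Python. The figures are invented for the example rather than drawn from any real poll:

```python
# Minimal post-stratification sketch with invented numbers: each group is weighted
# so that its share of the weighted sample matches its share of the population.
population_share = {"female": 0.57, "male": 0.43}  # hypothetical student population
sample_share = {"female": 0.70, "male": 0.30}      # hypothetical self-selected sample

weights = {group: population_share[group] / sample_share[group] for group in population_share}
print(weights)  # {'female': 0.81..., 'male': 1.43...}

# The thing to check is how large the biggest weight is: if one group's responses
# have to be counted many times over, the unweighted sample was a long way from
# being representative.
```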
Poor question design
It’s not too challenging to “push-poll” – using the questions in a survey to get the opinions you’re after. A particularly egregious example might be:
If the government was to cut tuition fees it would cost the UK taxpayer £2 billion a year, which could otherwise be spent on funding the NHS. Would you support or oppose cutting tuition fees?
Luckily, instances of such clearly leading question design tend to be rare.
More common is polling which neglects to put the key topic of interest among other priorities, or accidentally biases a participant’s response. Let’s say I was trying to make the case for greater university investment in student mental health services. If I were a dishonest pollster, I could ask:
Do you agree or disagree with the following: “Universities should do more to tackle mental ill-health among university students”
This is the sort of question that would be pretty likely to receive a 75 per cent agreement rate or more. But it’s not a particularly useful question for a few reasons:
- I’ve provided no downside at all to saying yes. Even where your policy proposal has an unquestionable moral basis, it’s good to at least make a show of having a counter-argument. In this instance, you might mention how university budgets could be spent on other things, like learning resources or university facilities. It’s a useful test of a polling question to ask yourself why someone would disagree with it, or hold the contrary position. If it’s hard to imagine why anyone would disagree, then there’s probably no point in asking it.
- I’ve provided no details on how universities would “tackle mental ill-health”. A participant can readily fill in the blanks with their own view on how universities should tackle mental health. I’d likely get much less consistent agreement as soon as I brought in specifics – for example: universities should provide mandatory mental health courses for all students; universities should put bars over every student’s windows; every university should have an on-site counsellor. Some of these would poll well, some wouldn’t, but my original question formulation is perfectly consistent with all of them.
- Respondents generally lean towards agreeing with statements they are presented with – people tend to agree rather than disagree if given the choice – so it’s best to blend in some forced-choice questions (“which of the following do you agree with more”), or some reverse-coded questions (“universities already do enough to tackle mental ill-health”).
- There is no surrounding context. Even if I play around with the wording and make this question a bit better, I still have nothing to compare it to. For all I know, a far greater proportion could agree that universities should do more to tackle the cost of student accommodation.
Poor reporting and analysis
A mark of good polling is that you can easily track down the full polling tables for the opinion research that was conducted. Public First, where I work, is a member of the British Polling Council (as are many other polling organisations) and is therefore required to publish the polling tables for any polling statistics which are made public. If you’re looking at a poll by any BPC member, you should be able to trace it back to a full, detailed polling table, which explains how the sample was recruited and provides the context surrounding the question being reported.
Not only does this let you check the sample – see point one – it also lets you consider the full context of the polling, rather than just what’s reported in the press release. Often the most interesting findings are not the ones that make the headlines; more nuanced questions just don’t lend themselves to punchy summaries in the same way. It’s also very possible to report statistics in a way which is misleading. If I were to find that “40 per cent of final-year university students say that getting a degree is a waste of time”, it could well be a correct conclusion, but it certainly loses a bit of media zing if the proportion of first-year students who say the same is also 40 per cent.
I don’t want to destroy all faith in polling; far from it. Public opinion matters more than ever – and plays a crucial role in ensuring that our representatives are representative. Hopefully, though, the above gives you the tools to cast a slightly more sceptical eye over each polling stat you see – and an understanding that not all polling is created equal.