The thing that surprises me most about the findings of a new Research England-funded report into academic research culture is how consistent they are.
For the report, researchers at the University of Bristol surveyed 406 researchers around the world, and interviewed ten in depth. Across different career stages and disciplines, the same overall patterns emerge.
They’re also patterns I recognise from my own move into academia from documentary-making in 2016, and they are what spurred me to start working for systemic change. Problems with research culture don’t just affect researchers (important though that is): they affect the quality of research itself – the robustness of the whole evidence base on which our societies make their most vital decisions. Health, economics, education, conservation, energy and sustainability all rely on published research. But can we be confident in what’s out there?
Publish, or don’t publish
Research publications – peer-reviewed summaries of research findings in academic journals – are supposed to form the evidence base on which the rest of society can rely. While understanding that one publication does not a firm finding make, policymakers, journalists and researchers should be able to read the academic literature to find out how well a policy or treatment works, how strong that evidence is, whether there is counter-evidence, and what hasn’t yet been tried and tested.
But for researchers, publications have another purpose. More than 60 per cent of those surveyed for the report rated what they have published as having a “strong influence” on how they or their research are assessed for promotion and funding. “Publish or perish” has long been the slogan of the research community. And this affects what gets published.
Just over half of the researchers in the survey said that they had carried out work that they then hadn’t published. This is work that has been paid for, and carried out, but which no one will know about. If researchers are under such pressure to publish, why aren’t they publishing every piece of work they do?
The commonest reason for not writing up work was lack of time – getting a paper through a good journal’s review process now takes many months, sometimes years. Given that their careers rest mainly on their publication record, you can’t blame researchers for being highly selective about which projects they take on and take through to publication. Anything that won’t efficiently add to that record uses up time for little or no benefit to them.
Telling stories
What’s going on with the publication process, then? Why is it such a time-sink, and hence a bottleneck for research? Of those who said they hadn’t published a piece of research, about a third said it was because the work didn’t make a nice, neat story; a third because it didn’t have sufficient impact; and a third because they didn’t have enough data.
Journals have their own pressures too – they compete to attract the papers that are likely to be read and cited the most, which in turn attracts researchers who want to be read and cited more. This means they tend to guide authors towards concise, easy-to-read, impactful writing.
One of the interviewees for the report referred to themselves as a “novel writer” rather than a researcher. Another described how those who assess the quality of research are “heavily persuaded by writing quality, particularly in the idea of storytelling quality, and then making a sound-bitey type point.”
Researchers, then, to stay in their jobs, have to allocate their time and resources where they think they are most likely to pay dividends. They do research that they can easily get published and funded (more than 45 per cent said the “trendiness” and “novelty” of a topic were important), and they write it up in a way that journals will like (a nice story, with positive results). Bottom of the list of things researchers thought were important in assessments of their work was open research practices – ensuring that they make the full details of their work available for others to check or build on.
The net result is that the published evidence base, which we all rely on as a society, tends not to include “things that didn’t work”. Research into rare diseases, specific and detailed work in areas that won’t attract a broad readership, and research into unsexy topics are all under-represented too. The pressure on researchers to publish has also fuelled unscrupulous journals that will publish pretty much anything (for a fee).
Can things get better?
It’s an untenable situation, and one that can be improved. We need to make publication an easier process (whilst still allowing critique and peer review). We need to decide what “good research” looks like – and how it can be judged.
How can we do that? There are plenty of suggestions. For example, publication needs to be less resource-intensive, meaning there is less pressure for researchers to be highly selective over what they choose to write up. Preprint servers are a step towards that, allowing researchers to publish their research freely and quickly, as are other publication platforms such as my own Octopus.ac, which aims to allow shorter (and hence quicker) publications. Then we need better criteria for assessing work. An easy win here would be moving open research practices – currently bottom of the list of perceived importance to researchers – to the top. Because you can’t judge the quality of work that you can’t actually see.
The incentives that researchers are under are not immutable laws of the universe – they are put in place by those who assess research and researchers: funders and employing institutions. They have the power to change what researchers are assessed on. They can change where researchers put their resources, and through that, the quality of the research we all rely on. For the sake of all of us, I think they must.
If you’re funding research, ask for a report and publish a public version of it.