Academic journals need to incentivize sharing null results

Imagine you’re a researcher trying to prove that wealth is positively correlated with happiness. You conduct your study, but you don’t find enough evidence to support your hypothesis (a null result) and ultimately decide your research isn’t worth publishing. You wouldn’t be alone. Only around 20% of null results in the social sciences are published, compared to around 60% of studies with statistically significant results. This bias might seem harmless, even reasonable, at first glance, but it undermines the validity of scientific research and must be corrected. 

If positive results are disproportionately published, some hypotheses can seem more robust than they are. Imagine there is one published study with a strong result in favour of a hypothesis, and two more studies are then conducted on the subject. One corroborates the positive result; the other doesn’t find enough evidence to support it, perhaps turning up only a statistically insignificant correlation. In reality, two thirds of the studies found strong evidence for the hypothesis. But because the corroborating positive result is far more likely to be published than the null result, scientists surveying the published work could conclude that 100% of studies supported the hypothesis. As a result, they will probably be more confident that the hypothesis is true than they should be. 
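To make the arithmetic concrete, here is a minimal simulation sketch in Python of how differential publication rates distort the published record. The 60% and 20% publication rates are the figures cited above; the two-thirds share of positive results mirrors the hypothetical example and is otherwise just an assumption for illustration.

```python
import random

# Illustrative simulation of the file drawer effect. The publication
# rates come from the figures cited earlier (~60% of significant
# results published vs ~20% of nulls); the true share of positive
# results is an assumption chosen for this example.

random.seed(42)

N_STUDIES = 10_000
P_POSITIVE = 2 / 3          # assumed true share of studies finding a positive result
PUBLISH_IF_POSITIVE = 0.6   # publication rate for significant results (from the article)
PUBLISH_IF_NULL = 0.2       # publication rate for null results (from the article)

published_positive = 0
published_total = 0
for _ in range(N_STUDIES):
    positive = random.random() < P_POSITIVE
    publish_rate = PUBLISH_IF_POSITIVE if positive else PUBLISH_IF_NULL
    if random.random() < publish_rate:
        published_total += 1
        published_positive += positive

print(f"Share of positive results among all studies:   {P_POSITIVE:.0%}")
print(f"Share of positive results among published work: {published_positive / published_total:.0%}")
# Expected: roughly 67% positive in reality vs roughly 86% in the
# published record, since (2/3 * 0.6) / (2/3 * 0.6 + 1/3 * 0.2) ≈ 0.86.
```

Even a modest gap in publication rates is enough to make the published record look far more unanimous than the underlying evidence actually is.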

The bias towards statistically significant results (also known as the file drawer problem, because null results are filed away instead of published) affects how researchers behave as well. Landing a job as a professor depends partly on the quantity and quality of papers authored, particularly at research-focused universities. When career advancement depends on being published and journals are biased towards positive results, researchers have an incentive to find affirmative results. This can encourage questionable research practices, with data skewed either intentionally or subconsciously. 

Combined, these effects have led to a replication crisis. A 2015 study conducted 100 replication attempts on 98 papers in the field of psychology. Although 97% of the original papers had found statistically significant results, only 39% of the replication attempts corroborated them. This suggests that nearly two thirds of published psychology research could be unreliable. The issue extends beyond psychology into fields where accuracy is even more important. One study tried to replicate 53 pre-clinical cancer research papers and found that only 11% of the replications corroborated the positive results of the originals. Clinical trials rely heavily on pre-clinical research to decide where to invest limited time and resources, so it is crucial that this research be as reliable as possible. 

Finally, it’s worth noting that not publishing null results wastes other researchers’ time. Imagine you have a hypothesis you’re very excited about. You conduct an expensive study to try to prove it but find no supporting evidence, completely unaware that dozens of other researchers have had the same idea and also obtained null results. Had those results been published, you probably wouldn’t have wasted time and money on your elaborate research. 

It’s clear that not publishing null results is causing inefficiency and lowering reliability. Luckily, there are ways to fix this problem. Perhaps the most obvious is to create journals dedicated to publishing studies that didn’t find conclusive evidence, such as the Journal of Negative Results in Biomedicine. Another approach is to award prizes for high-quality null results. Examples include the Negative Results Prize awarded by the Journal of Negative Results in Biomedicine, as well as the ECNP Preclinical Network Data Prize, which focuses on neuroscience research and provides a $10,000 grant for the winner. More journals and prizes dedicated to null results are needed to incentivize researchers to publish their findings.

Another way of increasing the number of negative findings published is to change the way research is assessed and submitted to journals. Normally, a study is conducted and results are obtained before peer review and publication. The Registered Reports format changes this paradigm completely: studies are accepted by journals before they’re even conducted. This allows them to be evaluated on the rigour of their methodology rather than on how significant or interesting their results are. As long as a study follows the methodology it outlined, the journal commits to publishing it regardless of the findings. Furthermore, all materials and data have to be made available so that the study can be replicated. 

The Registered Reports format eliminates publication bias against null results entirely: since the results aren’t known when a study is accepted, journals can’t discriminate against them. Furthermore, its methodology-focused approach incentivizes scientists to be as rigorous and impartial as possible if they want to be published. Adopting Registered Reports as the standard format across most journals would go a long way towards improving the validity of research and solving the replication crisis. Although the issue is extremely widespread, there are solutions that can be applied across many fields of research. By adopting them, academia can rid itself of the publication bias against null results once and for all. 

Featured image: retrieved from https://pixabay.com/photos/files-paper-office-paperwork-stack-1614223/

Sabina Narvaez
