In 2005, Kosfeld et al. published a paper in Nature titled 'Oxytocin increases trust in humans'. The study quickly became a centerpiece for media outlets and pop-science narratives about human trust. Citations skyrocketed, funders rallied behind the finding, and researchers were eager to build on what seemed like groundbreaking results.
A few years later, Gideon Nave and his colleagues conducted a critical review, unearthing multiple studies that failed to replicate the original oxytocin findings. One of the original authors even attempted a replication, published 15 years after the original paper, and found no effect of oxytocin on trust.
Today, the original publication remains unaltered. With an impressive 5,169 citations and growing, it's still published in Nature, with no indication that it has failed to replicate.
Consider this: the scientific community invested 17 years of resources, and continues to invest more each year, into a big claim that doesn't replicate. This raises two questions: a) why did the scientific community take so long to verify a paper so central to the field, and b) why does the paper still stand after repeated studies found no association?
The heart of the issue lies in how hard a systematic replication has to fight for resources, and the core of that fight is the misaligned incentives of the journal system. Journals are simultaneously driven by the need to win readers with attention-grabbing headlines while functioning as gatekeepers who dictate what counts as 'good' science.
But why do journals matter? Researchers are hungry (sometimes literally) for publications because they help secure funding and promotions. But replications aren't new or attention-grabbing. By definition, they're repeating a previous study. Researchers thus find it challenging to publish replications in high-impact journals. Replicability - a fundamental requirement of trustworthy science - is left up to goodwill.
Two questions emerge from this story:
What does it take to incentivise good science instead of clickbait? Journals like PLOS ONE that focus on publishing solid science, independent of novelty claims or statistical significance levels, are important experiments towards this goal.
When we have tried to replicate a paper unsuccessfully and published the results, how do we ensure that not only the original, false-positive results get cited? Scite is helping with that by putting citations into context.