To catch you up - in our past season, we spoke to 10 leading scientists and entrepreneurs dedicated to improving science.
We started our series with Gustav Nilsonne (Karolinska Institutet), who introduced the triple crisis in modern science: functionality, accessibility, and reproducibility. Throughout the season, we explored each crisis in depth and heard about concrete solutions.
For the functionality crisis, Erik Schultes from the GO FAIR Foundation explained how Findable, Accessible, Interoperable, and Reusable (FAIR) data is a prerequisite for improving empirical research. The FAIR approach can not only stop the loss of valuable scientific data; it can also improve the robustness of science, increase scientific productivity, and save taxpayers more than 10 billion EUR annually in Europe alone!
With Juan Benet, we delved deeper into the issue of link rot and content drift on the current Internet. Juan founded Protocol Labs and invented the IPFS protocol, which solves these problems with cryptographic content identifiers (CIDs). He also explained how essential these identifiers are to enable truly FAIR metadata that returns research instead of 404 errors.
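The core idea behind content identifiers is that a link is derived from the content itself rather than from a server location. Here is a minimal sketch of that idea in Python; note that real IPFS CIDs additionally encode multihash, multicodec, and multibase prefixes, which this toy version omits.

```python
import hashlib

def content_id(data: bytes) -> str:
    # Toy content identifier: the SHA-256 digest of the bytes themselves.
    # Real IPFS CIDs wrap the digest in multihash/multibase layers; this
    # sketch only illustrates the core principle of content addressing.
    return hashlib.sha256(data).hexdigest()

paper = b"Results: the effect replicated in the larger sample."
cid = content_id(paper)

# The identifier depends on the content, not on where it is hosted:
# any node holding the same bytes resolves to the same ID, so a link
# can never silently drift to point at different content.
assert content_id(paper) == cid
assert content_id(b"a tampered copy") != cid
```

Because the identifier changes whenever the content changes, link rot and content drift become detectable by construction: either you get exactly the bytes you asked for, or you get nothing.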
James Boyd from Wolfram Institute shared with us his vision of how automation will change science and how it can free humans to allocate their cognitive capital to creative and higher-level tasks.
Regarding the crisis of accessibility, Daniel Hook (CEO of Digital Science) traced the research publication back to its origins in 17th-century scholarly letters. He broke down how this venerable medium now fails to meet the needs of modern scientific discourse and explained the necessary building blocks for a new, better form of scholarly communication.
Big data is becoming the “new normal” in many fields of science. But with big data come big problems: storing and sharing this data is expensive, and downloading it can take weeks, sometimes even requiring the shipment of many hard drives. In response, David Aronchick presented Bacalhau as a powerful and versatile solution: a tool that allows researchers to run compute jobs on the servers where the data is already stored, without ever needing to transfer it.
Regarding the crisis of replicability, Gideon Nave (University of Pennsylvania) told us the story of oxytocin research and how a widely cited study in Nature, based on a small-sample experiment, led to a widespread belief that oxytocin increases trust in humans. Several much larger follow-up studies failed to replicate the original findings, but the scientific literature failed to self-correct. Instead, studies taking the original result for granted kept proliferating, and the original paper continues to garner more citations.
Josh Nicholson expanded on how simple citation counts can be misleading and proposed a new metric - the R-factor - that can highlight when research is contentious. He built Scite, a tool that shows citations in context and helps clarify how trustworthy a heavily cited study might be.
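To make the idea concrete, here is an illustrative sketch of a citation-context metric in the spirit of the R-factor. The formula used here - supporting citations divided by all evaluative (supporting plus contradicting) citations - is our simplifying assumption for illustration, not necessarily the exact published definition.

```python
def r_factor(supporting: int, contradicting: int) -> float:
    """Fraction of evaluative citations that support a claim.

    Illustrative assumption: neutral mentions are ignored, and a claim
    with no evaluative citations scores 0.0.
    """
    evaluative = supporting + contradicting
    if evaluative == 0:
        return 0.0
    return supporting / evaluative

# A heavily cited but contested result still scores low:
print(r_factor(supporting=3, contradicting=9))   # 0.25
# A well-replicated result scores high:
print(r_factor(supporting=9, contradicting=1))   # 0.9
```

The point of such a metric is exactly the oxytocin scenario above: a raw citation count cannot distinguish a paper cited 500 times approvingly from one cited 500 times in refutation, whereas a context-aware ratio can.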
Marcus Munafo (University of Bristol) explained how the UK Reproducibility Network is beginning to address the social side of the socio-technical academic system. In the UKRN, researchers, funders, and academic and government institutions collaborate to improve the replicability of science. This peer-led consortium approach has since spread throughout Europe and Canada and has begun to bridge the gap between the current incentives of researchers and the needs of science.
Finally, Davide Grossi (Universities of Groningen and Amsterdam) showed us new mechanisms in peer review that eliminate problematic strategic incentives and help to reduce the workload of referees, leading to better outcomes.