Scientists, Please Describe Your Failures


We don’t ask people in other professions to put their failures on display, but for scientists, doing so is vital to speeding up progress in crucial areas of research, from climate change to medicine and public health.

By Ijad Madisch

Ask any budding director if they would like to see the first iterations of Francis Ford Coppola’s The Godfather. I don’t think many would pass up the opportunity to see Coppola’s process from filming to editing to deciding what makes the final cut.

Indeed, people in nearly any occupation, from painters to journalists to architects, could learn from the failed iterations of the masters of their respective crafts. Yet in all these fields, we neither expect nor get any of this. We generally see only the final, perfected product.

In the sciences, however, I want to shift this thinking. I want researchers to share everything from start to finish. Why? Because we need them to. Their failures, if seen, could stop another researcher from making the same mistakes. What’s more, knowing what doesn’t work will help researchers—or computers, in the future—deduce what might work, and in turn, speed up scientific progress.

This scientific progress is critical if we are going to tackle global challenges such as preventing pandemics and finding sustainable energy sources that will fuel growing societies. However, if other fields are any indication, getting to a point where sharing failed scientific results is commonplace will be hard and will take time. It will be worth it, though, because the benefits are immense.

From my experience, 99 percent of work in research never makes it into the final, published article. Yet, in the past, that article was all we’d see. Not only does this give the public a distorted view of the scientific process, but it also slows progress for other researchers.


Take my own research, for example. I started my medical research in 2002 and ended it in 2010. What I have to show for those eight years are my thesis, 18 articles, 17 conference papers, and 69 datasets. What isn’t seen are the thousands of hours I spent on work that yielded unexpected results or simply didn’t pan out.

For people like me who have left academia, the hard drives full of negative results may already be lost. To prevent this from happening to others, ResearchGate, the professional network for scientists that I founded with two friends nine years ago, encourages researchers to document their entire research process step by step and publish everything. Along the way, we hope that they will also share the things that didn’t work out.

However, I understand the barriers to achieving this. Perhaps the biggest is simply putting your hand up and saying, “Hey, I thought this would work, but it didn’t.” That, in itself, is just another finding, but you might fear that someone else will interpret it as failure. What’s more, writing up and publishing a negative result mostly benefits others: you already know it didn’t work and have learned from it. Most people wouldn’t blame you for wanting to move on and get started on the next thing. But despite this, or maybe because of it, ResearchGate members have started sharing their negative results.

Take Wiebke Kämper. She wanted a faster way to work out which flowers bumblebees were visiting. Rather than using a traditional, time-consuming observational method, she decided to try using the chemical footprints that bumblebees leave behind when they visit a flower. Early experiments were promising, but tests in the field were not successful. By publishing her negative results, she ensured that others can save time and work on other methods to uncover bumblebees’ floral preferences.

Or consider surgeon Anees Chagpar of Yale University, who takes the business-school mantra “fail early, fail often” to heart in her research. She hypothesized that surgeons performing breast-conserving surgery for breast cancer patients could benefit from a three-dimensional model. However, when she conducted a study, she found the model made no difference. Publishing these results means other researchers can invest their time in other options, increasing the chance that they will find approaches that improve outcomes for patients.

I deeply respect researchers like Kämper and Chagpar who have the courage to share these valuable findings, advancing both their own work and that of their peers. Science is inherently collaborative. Reporting negative results is scary, but it means our colleagues won’t waste their time and resources repeating our mistakes. In this spirit, feel free to check out my ResearchGate profile for the failed iterations of this article.

This article originally appeared on Scientific American and ResearchGate.
