In 2010, two well-known economists, Carmen Reinhart and Kenneth Rogoff, published a paper confirming what many fiscally conservative politicians had long suspected: that a country's economic growth stalls once public debt rises above a certain percentage of GDP. The article fell on the receptive ears of Britain's future chancellor, George Osborne, who quoted it repeatedly in a speech outlining what would become the political playbook of the era of austerity: cut public services to pay down the national debt.
Reinhart and Rogoff's paper had just one problem. They had inadvertently omitted five countries from their analysis, selecting only 15 rows in their spreadsheet rather than the 20 they intended. When lesser-known economists corrected this error and several other irregularities, the most attention-grabbing part of the results disappeared. The link between debt and GDP remained, but the effects of high debt were far more subtle than the dramatic drop-off cited in Osborne's speech.
Scientists, like all of us, are not immune to mistakes. "It's clear that errors are everywhere, and a small fraction of those errors will change the conclusions of the articles," says Malte Elson, a professor at the University of Bern in Switzerland who studies, among other things, research methods. The problem is that few people look for these errors. Reinhart and Rogoff's errors were discovered only in 2013, by an economics student whose professors had asked his class to try to replicate the findings of leading economics papers.
Together with fellow metascience researchers Ruben Arslan and Ian Hussey, Elson developed a way to systematically find flaws in scientific research. The project, called ERROR, is modeled on bug bounties in the software industry, where hackers are rewarded for finding bugs in code. In Elson's project, researchers are paid to check papers for possible bugs and receive a bonus for each verified bug they detect.
The idea arose from a discussion between Elson and Arslan, who encourages scientists to find errors in his own work by offering a beer for each typo they identify (maximum three per paper) and 400 euros ($430) for an error that changes a paper's main conclusion. "We both knew of papers in our fields that were completely wrong due to provable errors, but getting the record corrected was extremely difficult," Elson says. All these published errors add up to a huge problem, Elson argued. If a graduate student builds her degree on a result that turns out to be an error, it could mean tens of thousands of dollars wasted.
Error checking is not a standard part of publishing scientific papers, says Hussey, a metascience researcher in Elson's lab in Bern. When an article is submitted to a scientific journal, such as Nature or Science, it is sent to several experts in the field, who give their opinions on whether the article is of high quality, logically sound, and a valuable contribution to the field. However, reviewers typically do not check for errors, and in most cases they do not have access to the raw data or code they would need to root them out.
