The scientific process is broken. The tenure process, “publish or perish” mentality, and the insufficient review process of academic journals mean that researchers spend less time solving important puzzles and more time pursuing publication. But that wasn’t always the case.
In 1962, chemist and social scientist Michael Polanyi described scientific discovery as a spontaneous order, likening it to Adam Smith’s invisible hand. In “The Republic of Science: Its Political and Economic Theory,” originally printed in the journal Minerva, Polanyi used the analogy of many people working together on a jigsaw puzzle to explain the progression of scientific discovery.
Polanyi begins: “Imagine that we are given the pieces of a very large jigsaw puzzle, and … it is important that our giant puzzle be put together in the shortest possible time. We would naturally try to speed this up by engaging a number of helpers; the question is in what manner these could be best employed.”
He continues: “The only way the assistants can effectively co-operate, and surpass by far what any single one of them could do, is to let them work on putting the puzzle together in sight of the others so that every time a piece of it is fitted in by one helper, all the others will immediately watch out for the next step that becomes possible in consequence.

“Under this system, each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated. We have here in a nutshell the way in which a series of independent initiatives are organized to a joint achievement by mutually adjusting themselves at every successive stage to the situation created by all the others who are acting likewise.”
Polanyi’s faith in this process, decentralized among academics around the globe, was strong. He claimed, “The pursuit of science by independent self-co-ordinated initiatives assures the most efficient possible organization of scientific progress.”
But somewhere in the last 54 years, this decentralized, efficient system of scientific progress seems to have veered off course. The incentives created by universities and academic journals are largely to blame.
The National Academy of Sciences noted last year that scientific papers retracted because of fraud have increased tenfold since 1975. A popular scientific blog, Retraction Watch, reports daily on retractions, corrections, and fraud from all corners of the scientific world.
Some argue that such findings aren’t evidence that science is broken — just very difficult. News “explainer” Vox recently defended the process, calling science “a long and grinding process carried out by fallible humans, involving false starts, dead ends, and, along the way, incorrect and unimportant studies that only grope at the truth, slowly and incrementally.”
Of course, finding and correcting errors is a normal and expected part of the scientific process. But there is more going on.
A recent article in Proceedings of the National Academy of Sciences documented that the problem in biomedical and life sciences is more attributable to bad actors than human error. Its authors conducted a detailed review of all 2,047 retracted research articles in those fields, which revealed that only 21.3 percent of retractions were attributable to error. In contrast, 67.4 percent of retractions were attributable to misconduct, including fraud or suspected fraud (43.4 percent), duplicate publication (14.2 percent), and plagiarism (9.8 percent).
Even an article on FiveThirtyEight that attempts to defend the current scientific community from its critics admits that “bad incentives are blocking good science.”
Polanyi doesn’t take these bad incentives into account; perhaps they weren’t as pronounced in 1960s England as they are in the modern United States. In his article, he assumes that professional standards are enough to ensure that contributions to the scientific discussion are plausible, accurate, important, interesting, and original. He fails to mention the strong incentives, produced by the tenure process, to publish in journals of particular prestige and importance.
This “publish or perish” incentive means that researchers are rewarded more for frequent publication than for dogged progress toward solving scientific puzzles. It has also led to a proliferation of academic journals, many lacking the quality control we have come to expect in academic literature. An article by British pharmacologist David Colquhoun concludes, “Pressure on scientists to publish has led to a situation where any paper, however bad, can now be printed in a journal that claims to be peer-reviewed.”
Academic journals, with their own internal standards, exacerbate this problem.
Science recently reported that fewer than half of 100 studies published in 2008 in top psychology journals could be replicated successfully. The Reproducibility Project: Psychology, led by Brian Nosek of the University of Virginia, organized the effort, enlisting 270 scientists to re-run other researchers’ studies.
The rate of reproducibility was likely low because journals give preference to “new” and exciting findings, damaging the scientific process. The Economist reported in 2013 that “‘Negative results’ now account for only 14% of published papers, down from 30% in 1990” and observed, “Yet knowing what is false is as important to science as knowing what is true.”
These problems, taken together, create an environment where scientists are no longer collaborating to solve the puzzle. They are instead pursuing tenure and career advancement.
But the news is not all bad. Recent efforts by scientists to police their own fields are beginning to change researchers’ incentives. The Reproducibility Project (mentioned above) is part of a larger effort called the Open Science Framework (OSF). The OSF is a “scholarly commons” that works to improve the openness, integrity, and reproducibility of research.
Similarly, the Center for Scientific Integrity was established in 2014 to promote transparency and integrity in science. Its major project, Retraction Watch, houses a database of retractions that is freely available to scientists and scholars who want to improve science.
A new project called Heterodox Academy will help address some research problems in the social sciences. The project was created to improve the diversity of viewpoints in the academy. Its work is of great importance; psychologists have demonstrated that such diversity enhances creativity, discovery, and problem solving.
These efforts will go a long way toward restoring the professional standards that Polanyi thought were essential to ensure that research remains plausible, accurate, important, interesting, and original. But ultimately, the tenure process and peer review must change in order to save scientific integrity.
Jenna Robinson is director of outreach at the Pope Center for Higher Education Policy.