
Networks Topple Scientific Dogma by Max Borders

Science is undergoing a wrenching evolutionary change.

In fact, most of what is carried out in the name of science is dubious at best, flat wrong at worst. It appears we’re putting too much faith in science, particularly in published results that turn out not to be reproducible.

In a University of Virginia-led project that attempted to replicate 100 psychology studies, half of the results could not be reproduced.

Experts making social science prognostications turned out to be mostly wrong, according to political scientist Philip Tetlock’s decades-long review of expert forecasts.

But there is perhaps no more egregious example of bad expert advice than in the area of health and nutrition. As I wrote last year for Voice & Exit:

For most of our lives, we’ve been taught some variation on the food pyramid. The advice? Eat mostly breads and cereals, then fruits and vegetables, and very little fat and protein. Do so and you’ll be thinner and healthier. Animal fat and butter were considered unhealthy. Certain carbohydrate-rich foods were good for you as long as they were whole grain. Most of us anchored our understanding about food to that idea.

“Measures used to lower the plasma lipids in patients with hyperlipidemia will lead to reductions in new events of coronary heart disease,” said the National Institutes of Health (NIH) in 1971. (“How Networks Bring Down Experts (The Paleo Example),” March 12, 2015)

The so-called “lipid hypothesis” had the support of the US surgeon general. Doctors everywhere fell in line behind the advice. Foods rich in saturated fat, like butter and bacon, became public enemy number one. People flocked to the supermarket to buy up “heart healthy” margarines. And yet, Americans were getting fatter.

But early in the 21st century, something interesting happened: people began to go against the grain (no pun intended) and started talking about their small experiments with eating saturated fat. By 2010, the lipid hypothesis, not to mention the USDA food pyramid, was dead. Forty years of nutrition orthodoxy had been upended. Now the experts are joining the chorus from the rear.

The Problem Goes Deeper

But the problem doesn’t just affect the soft sciences, according to science writer Ron Bailey:

The Stanford statistician John Ioannidis sounded the alarm about our science crisis 10 years ago. “Most published research findings are false,” Ioannidis boldly declared in a seminal 2005 PLOS Medicine article. What’s worse, he found that in most fields of research, including biomedicine, genetics, and epidemiology, the research community has been terrible at weeding out the shoddy work largely due to perfunctory peer review and a paucity of attempts at experimental replication.

Richard Horton of the Lancet writes, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” And according to Julia Belluz and Steven Hoffman, writing in Vox:

Another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)

Contrast the progress of science in these areas with that of applied sciences such as computer science and engineering, where more market feedback mechanisms are in place. It’s the difference between Moore’s Law and Murphy’s Law.

So what’s happening?

Science’s Evolution

Three major catalysts are responsible for the current upheaval in the sciences. First, a few intrepid experts have started looking around to see whether studies in their respective fields are holding up. Second, competition among scientists to grab headlines is becoming more intense. Third, informal networks of checkers — “amateurs” — have started questioning expert opinion and talking to each other. And the real action is in this third catalyst, creating as it does a kind of evolutionary fitness landscape for scientific claims.

In other words, for the first time, the cost of checking science is going down as the price of being wrong is going up.

Now, let’s be clear. Experts don’t like having their expertise checked and rechecked, because their dogmas get called into question. When dogmas are challenged, fame, funding, and cushy jobs are at stake. Most will fight tooth and nail to stay on the gravy train, which can translate into coming under the sway of certain biases. It could mean they’re more likely to cherry-pick their data, exaggerate their results, or ignore counterexamples. Far more rarely, it can mean they’re motivated to engage in outright fraud.

Method and Madness

Not all of the fault for scientific error lies with scientists, per se. Some of it lies with methodologies and assumptions most of us have taken for granted for years. Social and research scientists have far too much faith in data aggregation, a process that can drop the important circumstances of time and place. Many researchers make inappropriate inferences and predictions based on a narrow band of observed data points that are plucked from wider phenomena in a complex system. And, of course, scientists are notoriously good at getting statistics to paint a picture that looks like their pet theories.
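To make the aggregation worry concrete, consider a toy sketch of Simpson’s paradox, where pooling data throws away exactly those circumstances. Everything below, from the dosing story to the numbers, is invented purely for illustration:

```python
# Synthetic illustration of Simpson's paradox: a trend that holds in
# every subgroup reverses once the subgroups are pooled.
import numpy as np

rng = np.random.default_rng(0)

def subgroup(baseline, n=200):
    # Within each subgroup, the response *falls* as the dose rises.
    dose = rng.uniform(0, 10, n)
    response = baseline - 2.0 * dose + rng.normal(0, 1, n)
    return dose, response

d1, r1 = subgroup(baseline=20)   # one subgroup
d2, r2 = subgroup(baseline=60)   # a second subgroup with a higher baseline
d2 = d2 + 10                     # that subgroup also receives higher doses

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

print("slope within group 1:", round(slope(d1, r1), 2))  # about -2
print("slope within group 2:", round(slope(d2, r2), 2))  # about -2

# Pool the groups and the sign flips: the aggregate "shows" that higher
# doses go with better outcomes, the opposite of both subgroups.
d, r = np.concatenate([d1, d2]), np.concatenate([r1, r2])
print("slope in pooled data: ", round(slope(d, r), 2))   # positive
```

An analyst who sees only the pooled file, the circumstances of time and place already dropped, will confidently report the opposite of what is true in every group.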

Some sciences even have their own holy scriptures, like psychology’s Diagnostic and Statistical Manual. These guidelines, when married with government funding, lobbyist influence, or insurance payouts, can protect incomes but corrupt practice.

But perhaps the most significant methodological problem with science is over-reliance on the peer-review process. Peer review can perpetuate groupthink, the cartelization of knowledge, and the compounding of biases.

The Problem with Expert Opinion

The problem with expert opinion is that it is often cloistered and restrictive. When science starts to seem like a walled system built around a small group of elites, many of whom share ideas only with one another, hubris can take hold. No amount of training or smarts can keep up with an expansive network of people who have a bigger stake in finding the truth than in shoring up the walls of a guild or cartel.

It’s true that to some degree, we have to rely on experts and scientists. It’s a perfectly natural part of specialization and division of labor that some people will know more about some things than you, and that you are likely to need their help at some point. (I try to stay away from accounting, and I am probably not very good at brain surgery, either.) But that doesn’t mean that we shouldn’t question authority, even when the authority knows more about their field than we do.

The Power of Networks

But when you get an army of networked people, sometimes amateurs, thinking, talking, tinkering, and toying with ideas, you can hasten a proverbial paradigm shift. And this is exactly what we are seeing.

It’s becoming harder for experts to count on the obscurity and density of their disciplines to keep their power. And it’s in the cross-disciplinary pollination of the network that so many different good ideas can sprout and be tested.

The best thing that can happen to science is that it opens itself up to everyone, even people who are not credentialed experts. Then, let the checkers start to talk to each other. Leaders, influencers, and force-multipliers will emerge. You might think of them as communications hubs or bigger nodes in a network. Some will be cranks and hacks. But the best will emerge, and the cranks will be worked out of the system in time.

The network might include a million amateurs willing to give a pair of eyes or a different perspective. Most in this army of experimenters get results and share their experiences with others in the network. What follows is a wisdom-of-crowds phenomenon. Millions of people not only share results, but challenge the orthodoxy.
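A minimal sketch of why this can work, with every number invented for illustration: if a million checkers each form an independent, noisy estimate of the same truth, their average is dramatically more accurate than any one of them.

```python
# Wisdom-of-crowds toy model: many independent, individually noisy
# estimates of one quantity, aggregated by simple averaging.
import numpy as np

rng = np.random.default_rng(42)

truth = 100.0            # the quantity everyone is trying to pin down
n_checkers = 1_000_000   # "a million amateurs"
noise_sd = 25.0          # each individual is quite unreliable

estimates = truth + rng.normal(0, noise_sd, n_checkers)

print("typical individual error:", np.abs(estimates - truth).mean())  # ~20
print("error of the crowd mean: ", abs(estimates.mean() - truth))     # ~0.02
```

The catch echoes the groupthink worry above: averaging only cancels error when the estimates are genuinely independent. A network in which every checker defers to the same authority is just one noisy estimate repeated a million times.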

How Networks Contribute to the Republic of Science

In his legendary 1962 essay, “The Republic of Science,” scientist and philosopher Michael Polanyi wrote the following passage. It beautifully illustrates the problems of science and of society, and it explains how they will be solved in the peer-to-peer age:

Imagine that we are given the pieces of a very large jigsaw puzzle, and suppose that for some reason it is important that our giant puzzle be put together in the shortest possible time. We would naturally try to speed this up by engaging a number of helpers; the question is in what manner these could be best employed.

Polanyi says you could progress through multiple parallel-but-individual processes. But the way to cooperate more effectively

is to let them work on putting the puzzle together in sight of the others so that every time a piece of it is fitted in by one helper, all the others will immediately watch out for the next step that becomes possible in consequence. Under this system, each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated. We have here in a nutshell the way in which a series of independent initiatives are organized to a joint achievement by mutually adjusting themselves at every successive stage to the situation created by all the others who are acting likewise.

Just imagine if Polanyi had lived to see the Internet.

This is the Republic of Science. This is how smart people with different interests and skill sets can help put together life’s great puzzles.
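Polanyi’s mechanism is concrete enough to simulate. In the toy sketch below (the grid size, the helper counts, and the fitting rule are all invented stand-ins), helpers draw random pieces and can place one only when it borders the already-assembled region; because every placement is instantly visible, each success opens new moves for everyone else:

```python
# Toy model of Polanyi's jigsaw: helpers placing pieces "in sight of the
# others," so each placement immediately enlarges everyone's options.
import random

def solve(width=30, helpers=1, seed=0):
    rng = random.Random(seed)
    pieces = {(x, y) for x in range(width) for y in range(width)}
    pieces.remove((0, 0))
    placed = {(0, 0)}                 # one seed piece starts the puzzle
    rounds = 0
    while pieces:
        rounds += 1
        for _ in range(helpers):      # each helper tries once per round
            if not pieces:
                break
            x, y = rng.choice(tuple(pieces))
            # A randomly drawn piece "fits" only if it borders the
            # region assembled so far, by anyone.
            if {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)} & placed:
                placed.add((x, y))
                pieces.remove((x, y))
    return rounds

print("one helper alone:         ", solve(helpers=1), "rounds")
print("ten helpers, shared board:", solve(helpers=10), "rounds")
```

Ten helpers on a shared board need roughly a tenth of the rounds a lone helper does, not because any of them is smarter, but because each is mutually adjusting to the situation created by all the others.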

In the Republic of Science, there is certainly room for experts. But they are hubs among nodes. And in this network, leadership is earned not by sitting atop an institutional hierarchy with the plumage of a postdoc, but by contributing, experimenting, communicating, and learning with the rest of a larger hive mind. This is science in the peer-to-peer age.

Max Borders

Max Borders is Director of Idea Accounts and Creative Development for Emergent Order. He was previously the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit.