
Why We Need to Make Mistakes: Innovation Is Better than Efficiency by Sandy Ikeda

“I think it is only because capitalism has proved so enormously more efficient than alternative methods that it has survived at all,” Milton Friedman told economist Randall E. Parker for Parker’s 2002 book, Reflections on the Great Depression.

But I think innovation, not efficiency, is capitalism’s greatest strength. I’m not saying that the free market can’t be both efficient and innovative, but it does offer people a strong incentive to abandon the pursuit of efficiency in favor of innovation.

What Is Efficiency?

In its simplest form, economic efficiency is about given ends and given means. Economic efficiency requires that you know what end, among all possible ends, is the most worthwhile for you to pursue and what means to use, among all available means, to attain that end. You’re being efficient when you’re getting the highest possible benefit from an activity at the lowest possible cost. That’s a pretty heavy requirement.

Being inefficient, then, implies that for a given end, the benefit you get from that end is less than the cost of the means you use to achieve it. Or, as my great professor, Israel Kirzner, puts it: “If you want to go uptown, don’t take the downtown train.”

What Is Innovation?

Innovation means doing something significantly novel. It could be doing an existing process in a brand new way, such as being the first to use a GPS tracking system in your fleet of taxis. Or, innovation could mean doing something that no one has ever done before, such as using smartphone technology to match car owners with spare time to carless people who need to get somewhere in a hurry, à la Uber.

Innovation, unlike efficiency, entails discovering novel means to achieve a given end, or discovering an entirely new end. And unlike efficiency, in which you already know about all possible ends and means, innovation takes place only when you lack knowledge of all means, all ends, or both.

Sometimes we mistakenly say someone is efficient when she discovers a new way to get from home to work. But that’s not efficiency; that’s innovation. And a person who copies her in order to reduce his commute time is not an innovator — but he is being efficient. The difference hinges on whether you’re creating new knowledge.

Where’s the Conflict?

Starting a business that hasn’t been tried before involves a lot of trial and error. Most of the time the trials, no matter how well thought out, turn out to contain errors. The errors may lie in the means you use or in the particular end you’re pursuing.

In most cases, it takes quite a few trials and many, many errors before you hit on an outcome that has a high enough value and low enough costs to make the enterprise profitable. Is that process of trial and error, of experimentation, an example of economic efficiency? It is not.

If you begin with an accurate idea both of the value of an end and of all the possible ways of achieving that end, then you don’t need to experiment. Spending resources on trial and error would be wasteful. It’s then a matter of execution, which isn’t easy, but the real heavy lifting in the market process, both from the suppliers’ and the consumers’ sides, is done by trying out new things — and often failing.

Experimentation is messy and apparently wasteful, whether in science or in business. You do it precisely because you’re not sure how to answer a particular question, or because you’re not even sure what the right question is. There are so many failures. But in a world where our knowledge is imperfect, which is the world we actually live in, most of what we have to do in everyday life is to innovate — to discover things we didn’t know we didn’t know — rather than trying to be efficient. Being willing to suffer failure is the only way to make discoveries and to introduce innovations into the world.

Strictly speaking, then, if you want to innovate, being messy is unavoidable, and messiness is not efficient. Yet if you want to increase efficiency, you can’t be messy. Innovation and efficiency usually trade off against each other: if you’re focused on doing the same thing better and better, you’re taking time and energy away from trying to do something new.

Dynamic Efficiency?

Some have tried to describe this process of innovation as “dynamic efficiency.” It may be quibbling over words, but I think trying to salvage the concept of efficiency in this way confuses more than it clarifies. To combine efficiency and innovation is to misunderstand the essential meanings of those words.

What would it mean to innovate efficiently? I suppose it would mean something like “innovating at least cost.” But how is it possible to know, before you’ve actually created a successful innovation, whether you’ve done it at least cost? You might look back and say, “Gee, I wouldn’t have run experiments A, B, and C if only I’d known that D would give me the answer!” But the only way to know that D is the right answer is to first discover, through experimentation and failure, that A, B, and C are the wrong answers.

Both efficiency and innovation best take place in a free market. But the greatest rewards to buyers and sellers come not from efficiency, but from innovation.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

Networks Topple Scientific Dogma by Max Borders

Science is undergoing a wrenching evolutionary change.

In fact, much of what is carried out in the name of science is dubious at best, flat wrong at worst. It appears we’re putting too much faith in science — particularly the kind of science that relies on reproducibility.

In a University of Virginia meta-study, half of 100 psychology study results could not be reproduced.

Experts making social science prognostications turned out to be mostly wrong, according to political science writer Philip Tetlock’s decades-long review of expert forecasts.

But there is perhaps no more egregious example of bad expert advice than in the area of health and nutrition. As I wrote last year for Voice & Exit:

For most of our lives, we’ve been taught some variation on the food pyramid. The advice? Eat mostly breads and cereals, then fruits and vegetables, and very little fat and protein. Do so and you’ll be thinner and healthier. Animal fat and butter were considered unhealthy. Certain carbohydrate-rich foods were good for you as long as they were whole grain. Most of us anchored our understanding about food to that idea.

“Measures used to lower the plasma lipids in patients with hyperlipidemia will lead to reductions in new events of coronary heart disease,” said the National Institutes of Health (NIH) in 1971. (“How Networks Bring Down Experts (The Paleo Example),” March 12, 2015)

The so-called “lipid theory” had the support of the US surgeon general. Doctors everywhere fell in line behind the advice. Saturated fats like butter and bacon became public enemy number one. People flocked to the supermarket to buy up “heart healthy” margarines. And yet, Americans were getting fatter.

But early in the 21st century, something interesting happened: people began to go against the grain (no pun intended) and started talking about their small experiments with eating saturated fat. By 2010, the lipid hypothesis — not to mention the USDA food pyramid — was dead. Forty years of nutrition orthodoxy had been upended. Now the experts are joining the chorus from the rear.

The Problem Goes Deeper

But the problem doesn’t just affect the soft sciences, according to science writer Ron Bailey:

The Stanford statistician John Ioannidis sounded the alarm about our science crisis 10 years ago. “Most published research findings are false,” Ioannidis boldly declared in a seminal 2005 PLOS Medicine article. What’s worse, he found that in most fields of research, including biomedicine, genetics, and epidemiology, the research community has been terrible at weeding out the shoddy work largely due to perfunctory peer review and a paucity of attempts at experimental replication.

Richard Horton of the Lancet writes, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” And according to Julia Belluz and Steven Hoffman, writing in Vox,

Another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)

Contrast the progress of science in these areas with that of applied sciences such as computer science and engineering, where more market feedback mechanisms are in place. It’s the difference between Moore’s Law and Murphy’s Law.

So what’s happening?

Science’s Evolution

Three major catalysts are responsible for the current upheaval in the sciences. First, a few intrepid experts have started looking around to see whether studies in their respective fields are holding up. Second, competition among scientists to grab headlines is becoming more intense. Third, informal networks of checkers — “amateurs” — have started questioning expert opinion and talking to each other. And the real action is in this third catalyst, creating as it does a kind of evolutionary fitness landscape for scientific claims.

In other words, for the first time, the cost of checking science is going down as the price of being wrong is going up.

Now, let’s be clear. Experts don’t like having their expertise checked and rechecked, because their dogmas get called into question. When dogmas are challenged, fame, funding, and cushy jobs are at stake. Most will fight tooth and nail to stay on the gravy train, which can translate into coming under the sway of certain biases. It could mean they’re more likely to cherry-pick their data, exaggerate their results, or ignore counterexamples. Far more rarely, it can mean they’re motivated to engage in outright fraud.

Method and Madness

Not all of the fault for scientific error lies with scientists, per se. Some of it lies with methodologies and assumptions most of us have taken for granted for years. Social and research scientists have far too much faith in data aggregation, a process that can drop the important circumstances of time and place. Many researchers make inappropriate inferences and predictions based on a narrow band of observed data points that are plucked from wider phenomena in a complex system. And, of course, scientists are notoriously good at getting statistics to paint a picture that looks like their pet theories.

Some sciences even have their own holy scriptures, like psychology’s Diagnostic and Statistical Manual. These guidelines, when married with government funding, lobbyist influence, or insurance payouts, can protect incomes but corrupt practice.

But perhaps the most significant methodological problem with science is over-reliance on the peer-review process. Peer review can perpetuate groupthink, the cartelization of knowledge, and the compounding of biases.

The Problem with Expert Opinion

The problem with expert opinion is that it is often cloistered and restrictive. When science starts to seem like a walled system built around a small group of elites, many of whom share ideas only with each other, hubris can take hold. No amount of training or smarts can keep up with an expansive network of people who have a bigger stake in finding the truth than in shoring up the walls of a guild or cartel.

It’s true that to some degree, we have to rely on experts and scientists. It’s a perfectly natural part of specialization and division of labor that some people will know more about some things than you, and that you are likely to need their help at some point. (I try to stay away from accounting, and I am probably not very good at brain surgery, either.) But that doesn’t mean that we shouldn’t question authority, even when the authority knows more about their field than we do.

The Power of Networks

But when you get an army of networked people — sometimes amateurs — thinking, talking, tinkering, and toying with ideas, you can hasten a proverbial paradigm shift. And this is exactly what we are seeing.

It’s becoming harder for experts to count on the opacity and density of their disciplines to keep their power. And it’s in the cross-disciplinary pollination of the network that so many different good ideas can sprout and be tested.

The best thing that can happen to science is that it opens itself up to everyone, even people who are not credentialed experts. Then, let the checkers start to talk to each other. Leaders, influencers, and force-multipliers will emerge. You might think of them as communications hubs or bigger nodes in a network. Some will be cranks and hacks. But the best will emerge, and the cranks will be worked out of the system in time.

The network might include a million amateurs willing to give a pair of eyes or a different perspective. Most in this army of experimenters get results and share their experiences with others in the network. What follows is a wisdom-of-crowds phenomenon. Millions of people not only share results, but challenge the orthodoxy.

How Networks Contribute to the Republic of Science

In his legendary 1962 essay, “The Republic of Science,” scientist and philosopher Michael Polanyi wrote the following passage. It beautifully illustrates the problems of science and of society, and it explains how they will be solved in the peer-to-peer age:

Imagine that we are given the pieces of a very large jigsaw puzzle, and suppose that for some reason it is important that our giant puzzle be put together in the shortest possible time. We would naturally try to speed this up by engaging a number of helpers; the question is in what manner these could be best employed.

Polanyi says you could progress through multiple parallel-but-individual processes. But the way to cooperate more effectively

is to let them work on putting the puzzle together in sight of the others so that every time a piece of it is fitted in by one helper, all the others will immediately watch out for the next step that becomes possible in consequence. Under this system, each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated. We have here in a nutshell the way in which a series of independent initiatives are organized to a joint achievement by mutually adjusting themselves at every successive stage to the situation created by all the others who are acting likewise.

Just imagine if Polanyi had lived to see the Internet.

This is the Republic of Science. This is how smart people with different interests and skill sets can help put together life’s great puzzles.

In the Republic of Science, there is certainly room for experts. But they are hubs among nodes. And in this network, leadership is earned not by sitting atop an institutional hierarchy with the plumage of a postdoc, but by contributing, experimenting, communicating, and learning with the rest of a larger hive mind. This is science in the peer-to-peer age.

Max Borders

Max Borders is Director of Idea Accounts and Creative Development for Emergent Order. He was previously the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit.

The Average American Today Is Richer than John D. Rockefeller by Donald J. Boudreaux

This Atlantic story reveals how Americans lived 100 years ago. By the standards of a middle-class American today, that lifestyle was poor, inconvenient, dreary, and dangerous. (Only a few years later — in 1924 — the 16-year-old son of a sitting US president would die of an infected blister that the boy got on his toe while playing tennis on the White House grounds.)

So here’s a question that I’ve asked in one form or another on earlier occasions, but that is so probing that I ask it again: What is the minimum amount of money that you would demand in exchange for your going back to live even as John D. Rockefeller lived in 1916?

$21.7 million in 2016 dollars (about $1 million in 1916 dollars)? Would that do it? What about a billion 2016 — or 1916 — dollars? Would this sizable sum of dollars be enough to enable you to purchase a quantity of high-quality 1916 goods and services that would at least make you indifferent between living in 1916 America and living (on your current income) in 2016 America?

Think about it. Hard. Carefully.
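
For concreteness, the conversion arithmetic in the question above can be sketched in a few lines; the factor of roughly 21.7 2016-dollars per 1916-dollar is the article’s own figure:

```python
# Minimal sketch of the 1916-to-2016 dollar conversion used above.
# The factor of ~21.7 is the article's figure, taken as given.
CONVERSION_FACTOR = 21.7  # 2016 dollars per 1916 dollar

def to_2016_dollars(amount_1916: float) -> float:
    """Convert a 1916 dollar amount into approximate 2016 dollars."""
    return amount_1916 * CONVERSION_FACTOR

print(to_2016_dollars(1_000_000))      # one million 1916 dollars -> ~$21.7 million today
print(to_2016_dollars(1_000_000_000))  # one billion 1916 dollars -> ~$21.7 billion today
```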

If you were a 1916 American billionaire you could, of course, afford prime real estate. You could afford a home on 5th Avenue or one overlooking the Pacific Ocean or one on your own tropical island somewhere (or all three). But when you traveled from your Manhattan digs to your west-coast palace, it would take a few days, and if you made that trip during the summer months, you’d likely not have air-conditioning in your private railroad car.

And while you might have air-conditioning in your New York home, many of the friends’ homes that you visited — as well as the restaurants and business offices that you frequented — were not air-conditioned. In the winter, many were also poorly heated by today’s standards.

To travel to Europe took you several days. To get to foreign lands beyond Europe took you even longer.

Might you want to deliver a package or letter overnight from New York City to someone in Los Angeles? Sorry. Impossible.

You could neither listen to radio (the first commercial radio broadcast occurred in 1920) nor watch television. You could, however, afford the state-of-the-art phonograph of the era. (It wasn’t stereo, though. And — I feel certain — even today’s vinylphiles would prefer listening to music played off of a modern compact disc to listening to music played off of a 1916 phonograph record.) Obviously, you could not download music.

There really wasn’t very much in the way of movies for you to watch, even though you could afford to build your own home movie theater.

Your telephone was attached to a wall. You could not use it to Skype.

Your luxury limo was far more likely to break down while you were being chauffeured about town than is your car today to break down while you are driving yourself to your yoga class. While broken down and waiting patiently in the back seat for your chauffeur to finish fixing your limo, you could not telephone anyone to inform that person that you’ll be late for your meeting.

Even when in residence at your Manhattan home, if you had a hankering for some Thai red curry or Vindaloo chicken or Vietnamese Pho or a falafel, you were out of luck: in the unlikely event that you even knew of such exquisite dishes, your chef likely had no idea how to prepare them, and New York’s restaurant scene had yet to feature such exotic fare. And while you might have had the money in 1916 to supply yourself with a daily bowlful of blueberries at your New York home in January, even for mighty-rich you the expense was likely not worthwhile.

Your wi-fi connection was painfully slow — oh, wait, right: it didn’t exist. No matter, because you had neither computer nor access to the Internet. (My gosh, there weren’t even any blogs for you to read!)

Even the best medical care back then was horrid by today’s standards: it was much more painful and much less effective. (Remember young Coolidge.) Antibiotics weren’t available. Erectile dysfunction? Bipolar disorder? Live with ailments such as these. That was your only option.

You (if you are a woman) or (if you are a man) your wife and, in either case, your daughter and your sister had a much higher chance of dying as a result of giving birth than is the case today. The child herself or himself was much less likely to survive infancy than is the typical American newborn today.

Dental care wasn’t any better. Your money didn’t buy you a toothbrush with vibrating bristles. (You could, however, afford the very finest dentures.)

Despite your vanity, you couldn’t have purchased contact lenses, reliable hair restoration, or modern, safe breast augmentation. And forget about liposuction to vacuum away the results of your having dined on far too many cream-sauce-covered terrapin.

Birth control was primitive: it was less reliable and far more disruptive of pleasure than are any of the many inexpensive and widely available birth-control methods of today.

Of course, you adore precious-weacious little Rover, but your riches probably could not buy for Rover veterinary care of the sort that is routine in every burgh throughout the land today.

You were completely cut off from the cultural richness that globalization has spawned over the past century. There was no American-inspired, British-generated rock’n’roll played on electric guitars. And no reggae. Jazz was still a toddler, with only a few recordings of it.

You could afford to buy the finest Swiss watches and clocks, but even they couldn’t keep time as accurately as does a cheap Timex today (not to mention the accuracy of the time kept by your smartphone).

Honestly, I wouldn’t be remotely tempted to quit the 2016 me so that I could be a one-billion-dollar-richer me in 1916. This fact means that, by 1916 standards, I am today more than a billionaire. It means, at least given my preferences, I am today materially richer than was John D. Rockefeller in 1916. And if, as I think is true, my preferences here are not unusual, then nearly every middle-class American today is richer than was America’s richest man a mere 100 years ago.

This post first appeared at Cafe Hayek.

Donald J. Boudreaux

Donald Boudreaux is a senior fellow with the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center at George Mason University, a Mercatus Center Board Member, a professor of economics and former economics-department chair at George Mason University, and a former FEE president.

Zika Virus Shows It’s Time to Bring Back DDT by Diana Furchtgott-Roth

The Zika virus, carried by mosquitoes, is spreading northward through Latin America and is possibly linked to birth defects such as microcephaly in infants. Stories and photos of their abnormally small skulls are making headlines. The World Health Organization reports that four million people could be infected by the end of 2016.

On Monday, the WHO is meeting to decide how to address the crisis. The international body should recommend that the ban on DDT be reversed, in order to kill the mosquitoes that carry Zika and malaria, a disease caused by a protistan parasite and for which there is no cure.

Zika is in the news, but it is dwarfed by malaria. About 300 million to 600 million people suffer each year from malaria, and it kills about 1 million annually, 90 percent in sub-Saharan Africa. We have the means to reduce Zika and malaria — and we are not using it.

Under the Global Malaria Eradication Program, which started in 1955, DDT was used to kill the mosquitoes that carried the parasite, and malaria was practically eliminated. Some countries such as Sri Lanka, which started using DDT in the late 1940s, saw profound improvements. Reported cases fell from nearly 3 million a year to just 17 cases in 1963. In Venezuela, cases fell from over 8 million in 1943 to 800 in 1958. India saw a dramatic drop from 75 million cases a year to 75,000 in 1961.

This changed with the publication of Rachel Carson’s 1962 book, Silent Spring, which claimed that DDT was hazardous. After lengthy hearings between August 1971 and March 1972, Judge Edmund Sweeney, the EPA hearing examiner, decided that there was insufficient evidence to ban DDT and that its benefits outweighed any adverse effects. Yet, two months afterwards, then-EPA Administrator William D. Ruckelshaus overruled him and banned DDT, effective December 31, 1972.

Other countries followed, and DDT was banned in 2001 for agriculture by the Stockholm Convention on Persistent Organic Pollutants. This was a big win for the mosquitoes, but a big loss for people who lived in Latin America, Asia, and Africa.

Carson claimed that DDT, because it is fat soluble, accumulated in the fatty tissues of animals and humans as the compound moved through the food chain, causing cancer and other genetic damage. Carson’s concerns and the EPA action halted the program in its tracks, and malaria deaths started to rise again, reaching 600,000 in 1970, 900,000 in 1990 and over 1,000,000 in 1997 — back to pre-DDT levels.

Some continue to say that DDT is harmful, but others say that DDT was banned in vain. There remains no compelling evidence that the chemical has produced any ill public health effects. According to an article in the British medical journal the Lancet by Professor A.G. Smith of Leicester University,

The early toxicological information on DDT was reassuring; it seemed that acute risks to health were small. If the huge amounts of DDT used are taken into account, the safety record for human beings is extremely good. In the 1940s many people were deliberately exposed to high concentrations of DDT through dusting programmes or impregnation of clothes, without any apparent ill effect… In summary, DDT can cause toxicological effects but the effects on human beings at likely exposure are very slight.

DDT is not a cure-all for malaria, but nothing is as cheap and effective. A study by the Uniformed Services University of the Health Sciences concluded that spraying huts in Africa with DDT reduces the number of mosquitoes by 97 percent compared with huts sprayed with an alternative pesticide. Those mosquitoes that do enter the huts are less likely to bite.

By forbidding DDT and relying on more expensive, less effective methods of prevention, we are causing immense hardship. Small environmental losses are a modest price for saving thousands of human lives and potentially increasing economic growth in developing nations.

We do not yet have data on the economic effects of the Zika virus, but we know that countries with a high incidence of malaria can suffer a 1.3 percent annual loss of economic growth. According to a Harvard/WHO study, sub-Saharan Africa’s GDP could be $100 billion greater if malaria had been eliminated 35 years ago.
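
To see how such a drag compounds, here is a minimal sketch; the 1.3 percent figure comes from the paragraph above, while the 3 percent baseline growth rate is purely an illustrative assumption:

```python
# Minimal sketch of how a 1.3-percentage-point annual growth drag
# (the article's malaria figure) compounds over 35 years.
# The 3% baseline growth rate is an illustrative assumption.
baseline_growth = 0.03  # assumed counterfactual annual growth rate
drag = 0.013            # annual growth loss attributed to malaria
years = 35

without_malaria = (1 + baseline_growth) ** years
with_malaria = (1 + baseline_growth - drag) ** years

print(f"GDP multiple without malaria: {without_malaria:.2f}")  # ~2.81
print(f"GDP multiple with malaria:    {with_malaria:.2f}")     # ~1.80
print(f"Potential GDP forgone:        {1 - with_malaria / without_malaria:.0%}")  # ~36%
```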

Rachel Carson died in 1964, but the legacy of Silent Spring and its recommended ban on DDT live with us today. Millions are suffering from malaria and countless others are contracting the Zika virus as a result of the DDT ban. They were never given the choice of living with DDT or dying without it. The World Health Organization should recognize that DDT has benefits, and encourage its use in combating today’s diseases.

This article first appeared at E21, a project of the Manhattan Institute.

Diana Furchtgott-Roth

Diana Furchtgott-Roth, former chief economist of the U.S. Department of Labor, is director of Economics21 and senior fellow at the Manhattan Institute.

The Rise and Fall of American Growth by Emily Skarbek

Diane Coyle has reviewed Robert Gordon’s new book (out late January), The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War.

Gordon’s central argument will be familiar to readers of his work. In his view, the main technological and productivity-enhancing innovations that drove American growth in the early to mid 20th century — electricity, the internal combustion engine, running water, indoor toilets, communications, TV, chemicals, petroleum — could only happen once, have run their course, and the prospects for future growth look uninspiring. For Gordon, it is foreseeable that the rapid progress made over the past 250 years will turn out to be a unique episode in human history.

Coyle zeroes in on the two main mechanisms to which Gordon attributes the slowing of growth. The first is that future innovation will be slower or its effects less important. Coyle finds this argument unconvincing.

What I find odd about Gordon’s argument is his insistence that there is a kind of competition between the good old days of ‘great innovations’ and today’s innovations – which are necessarily different.

One issue is the extent to which he ignores all but a limited range of digital innovation; low carbon energy, automated vehicles, new materials such as graphene, gene-based medicine etc. don’t feature.

The book claims more recent innovations are occurring mainly in entertainment, communication and information technologies, and presents these as simply less important (while making great play of the importance of radio, telephone and TV earlier).

While I have yet to read the book, Gordon makes several similar arguments in an NBER working paper. There he gives a few examples of his view of more recent technological innovations as compared to the Great Inventions of the mid-20th century.

More familiar was the rapid development of the web and ecommerce after 1995, a process largely completed by 2005. Many one-time-only conversions occurred, for instance from card catalogues in wooden cabinets to flat screens in the world’s libraries and the replacement of punch-hole paper catalogues with flat-screen electronic ordering systems in the world’s auto dealers and wholesalers.

In other words, the benefits of the computer revolution were one-time boosts, not lasting increases in labor productivity. Gordon then invokes Solow’s famous quip that “we [could] see the computers everywhere except in the productivity statistics.” When the effects do show up, Gordon says, they fade out by 2004 and labor productivity flatlines.

Solow’s interpretation (~26 minutes into the interview) of where the productivity gains went is different, and more consistent with Coyle’s deeper point. In short, the statistics themselves don’t capture the full gains from innovation:

And when that happened, it happened in an interesting way. It turned out when there were first clear indications, maybe 8 or 10 years later, of improvements in productivity on a national scale that could be traced to computers statistically, it turned out a large part of those gains came not in the use of the computer, but in the production of computers.

Because the cost of an item of computing machinery was falling like a stone, and the quality and the capacity were at the same time improving. And people were buying a lot of computers, so this was not a trivial industry. …

You got big productivity gains in the production of computers and whatnot. But you could also begin to see productivity improvements on a national scale that traced to the use of computers.

Coyle’s central criticism concerns not just the interpretation of the data, but an interesting switch in Gordon’s argument:

Throughout the first two parts of the book, Gordon repeatedly explains why it is not possible to evaluate the impact of inventions through the GDP and price statistics, and therefore through the total factor productivity figures based on them — and then uses the real GDP figures to downplay modern innovation.

Coyle’s understanding of the use and abuse of GDP figures leads her to the fundamental point:

While the very long run of real GDP figures (the “hockey stick of history”) does portray the explosion of living standards under market capitalism, one needs a much richer picture of the qualitative change brought about by innovation and variety.

This must include the social consequences too — and the book touches on these, from the rise of the suburbs to the transformation of the social lives of women.

To understand Coyle’s insights more deeply, her conversation with Russ Roberts offers a fascinating discussion of GDP (no, really!).

In my view, it seems to come down to differing views about where Moore’s Law is taking us. The exponential increase in computational power — with product quality rising even as prices fall — has never happened at such a sustained pace before.
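
As a rough illustration of that compounding, assuming the commonly cited Moore’s Law doubling period of about two years (an assumption of mine, not a figure from the post):

```python
# Rough illustration of Moore's-Law-style compounding: price/performance
# doubling roughly every two years (a commonly cited period, assumed here).
DOUBLING_PERIOD_YEARS = 2

def performance_multiple(years: int) -> float:
    """Price/performance relative to today after `years` years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 40):
    print(f"after {horizon} years: {performance_multiple(horizon):,.0f}x")
# after 10 years: 32x; after 20 years: 1,024x; after 40 years: 1,048,576x
```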

The technological Great Inventions that Gordon sees as fundamental to driving the sustained growth of the past were all bursts of innovation followed by a substantial period in which entrepreneurs figured out how to effectively commodify and deliver that technology to the broader economy and society. What is so interesting about the current pattern of exponential technological progress is that price/performance gains have not slowed, even as some of these gains have only begun to show signs of commodification — Uber, 3D printing, biosynthesis of living tissue, etc.

There are good reasons to think that in the past we have failed to capture all the gains from innovation in measures of total factor productivity and labor productivity, as Gordon rightly points out. But if this is true, it seems strange to me to look at the current patterns of technological progress and not see the potential for these innovations to lead to sustained growth and increases in human well-being.

This is, of course, conditional on the political economy in which innovation takes place. Gordon’s second cause of low future growth concerns headwinds slowing down whatever innovation-driven growth there might be. Here I look forward to reading the relative weights Gordon assigns to factors such as demography, education, inequality, globalization, energy/environment, and consumer and government debt. In particular, I hope to read Gordon’s own take (and others’) on how the political economy environment could change the magnitude or sign of these headwinds.

The review is worth a read in advance of what will likely prove to be an important book in the debate on development and growth.

This post first appeared at Econlog, where Emily is a new guest blogger.

Emily Skarbek

Emily Skarbek is Lecturer in Political Economy at King’s College London and guest blogs on EconLog. Her website is EmilySkarbek.com. Follow her on Twitter @EmilySkarbek.

Government Caused the ‘Great Stagnation’ by Peter J. Boettke

Tyler Cowen caused quite a stir with his e-book, The Great Stagnation. In properly assessing his work it is important to state explicitly what his argument actually is. Median real income has stagnated since 1980, and the reason is that the rate of technological advance has slowed. Moreover, the technological advances that have taken place with such rapidity in recent history have improved well-being, but not in ways that are easily measured in real income statistics.

Critics of Cowen more often than not miss the mark when they focus on the wild improvements in our real income due to quality improvements (e.g., cars that routinely go over 100,000 miles) and lower real prices (e.g., the shrinking amount of work time required to acquire even the inferior versions of yesterday’s commodities).

Cowen does not deny this. Nor does Cowen deny that millions of people were made better off with the collapse of communism, the relative freeing of the economies in China and India, and the integration into the global economy of the peoples of Africa and Latin America. Readers of The Great Stagnation should be continually reminded that they are reading the author of In Praise of Commercial Culture and Creative Destruction. Cowen is a cultural optimist, a champion of the free trade in ideas, goods, services and all artifacts of mankind. But he is also an economic realist in the age of economic illusion.

What do I mean by the economics of illusion? Government policies since WWII have created an illusion that irresponsible fiscal policy, the manipulation of money and credit, and expansion of the regulation of the economy are consistent with rising standards of living. This was made possible because of the “low-hanging” technological fruit that Cowen identifies as being plucked in the 19th and early 20th centuries in the US, and in spite of the policies government pursued.

An accumulated economic surplus was created by the age of innovation, which the age of economic illusion spent down. We are now coming to the end of that accumulated surplus, and thus the full weight of government inefficiencies is starting to be felt throughout the economy. Our politicians promised too much, our government spends too much in an apparent chase after the promises made, and our population has become too accustomed to both government guarantees and government largess.

Adam Smith long ago argued that the power of self-interest expressed in the market was so strong that it could overcome hundreds of impertinent restrictions that government puts in the way. But there is some tipping point at which that ability to overcome will be thwarted, and the power of the market will be overcome by the tyranny of politics. Milton Friedman used that language to talk about the 1970s; we would do well to resurrect that language to talk about today.

Cowen’s work is a subversive tract in radical libertarianism, because he identifies that government growth (measured in terms of both scale and scope) was possible only because of the rate of technological improvements made in the late 19th and early 20th century.

We realized the gains from trade (Smithian growth), we realized the gains from innovation (Schumpeterian growth), and we fought off (in the West, at least) totalitarian government (Stupidity). As long as Smithian growth and Schumpeterian growth outpace Stupidity, tomorrow’s trough will still be higher than today’s peak. It will appear that we can afford more Stupidity than we actually can, because the power of self-interest expressed through the market offsets its negative consequences.

But if and when Stupidity is allowed to outpace the Smithian gains from trade and the Schumpeterian gains from innovation, then we will first stagnate and then enter a period of economic backwardness — unless we curtail Stupidity, explore new trading opportunities, or discover new and better technologies.

In Cowen’s narrative, the rate of discovery had slowed, all the new trading opportunities had been exploited, and yet government continued to grow in both scale and scope. And when he examines the three sectors in the US economy — government services, education, and health care — he finds little improvement since 1980 in the production and distribution of these services. In fact, there is evidence that performance has gotten worse over time, especially as government’s role in health care and education has expanded.

The Great Stagnation is a condemnation of government growth over the 20th century. It was made possible only by the amazing technological progress of the late 19th and early 20th century. But as the rate of technological innovation slowed, the costs of government growth became more evident. The problem, however, is that so many have gotten used to the economics of illusion that they cannot stand the reality staring them in the face.

This is where we stand in our current debt ceiling debate. Government is too big, too bloated. Washington faces a spending problem, not a revenue problem. But too many within the economy depend on the government transfers to live and to work. Yet the economy is not growing at a rate that can afford the illusion. Where are we to go from here?

Cowen’s work makes us think seriously about that question. How can the economic realist confront the economics of illusion? And Cowen has presented the basic dilemma in such a way that the central message of economic realism is available not only for libertarians to see (if they would just look, or listen carefully to his podcast at EconTalk), but for anyone who is willing to read and think critically about our current political and economic situation.

The Great Stagnation signals the end of the economics of illusion and — let’s hope — paves the way for a new age of economic realism.

This post first appeared at Coordination Problem.

Peter J. Boettke

Peter Boettke is a Professor of Economics and Philosophy at George Mason University and director of the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center. He is a member of the FEE Faculty Network.


What Can the Rich Afford that Average Americans Can’t? by Donald J. Boudreaux

Raffi Melkonian asks — as relayed by my colleague Tyler Cowen — “When can median income consumers afford the very best?”

Tyler offers a list of some of the items in the modern, market-oriented world that are as high-quality as such items get and yet are easily affordable to ordinary people. This list includes iPhones, books, and rutabagas. Indeed, this list includes nearly all foods for use in preparing home snacks and meals. I doubt very much that Bill Gates and Larry Ellison munch at home on foods — such as carrots, blueberries, peanuts, and scrambled eggs — that an ordinary American cannot easily afford to enjoy at home.

This list includes also non-prescription pain relievers, most other first-aid medicines and devices such as Band-Aids, and personal-hygiene products such as toothpaste, dental floss, and toilet paper. (I once saw a billionaire take two Bayer aspirin — the identical pain reliever that I use.) This list includes also gasoline and diesel. Probably also contact lenses.

A slightly different list can be drawn up in response to this question: When can median-income consumers afford products that, while not as high-quality as the versions bought by the super-rich, are nevertheless close enough in quality to be virtually indistinguishable to the naked eye from the versions bought by the super-rich?

On this list would be most clothing. For example, an ordinary American man can today afford a suit that, while it’s neither tailor-made nor of a fabric as fine as are suits that I suspect are worn by most billionaires, is nevertheless close enough in fit and fabric quality to be indistinguishable by the naked eye from expensive suits worn by billionaires. (I suspect that the same is true for women’s clothing, but I’m less expert on that topic.)

Ditto for shoes, underwear, haircuts, corrective eye-wear, collars for dogs and cats, pet food, household bath towels and “linens,” tableware and cutlery, automobile tires, hand tools, most household furniture, and wristwatches.

(You’d have to get physically very close to someone wearing a Patek Philippe — and you’d have to know what a Patek Philippe is — in order to determine that that person’s wristwatch is one that you, an ordinary American, can’t afford. And you could stare at that Patek Philippe for months without detecting any superiority that it might have over your quartz-powered Timex at keeping time.)

Coffee. Tea. Beer. Wine. (There is available today a large selection of very good wines at affordable prices. These wines almost never rise to the quality of Chateau Petrus, Chateau d’Yquem, or the best Montrachets, but the differences are often quite small and barely distinguishable save by true connoisseurs.)

Indeed, the more one ponders this question relayed by Tyler, the more one suspects that the shorter list would be one drawn up in response to this question: When can high-income consumers afford what median-income consumers cannot?

Such a list, of course, would be far from empty. It would include private air travel, beachfront homes, regular vacations in Tahiti and Davos, private suites at sports arenas, luxury automobiles, rooms at the Ritz, original Picassos and Warhols. (It would, by the way, include also invitations to White House dinners and private lunches with rent-creating senators, governors, and mayors.)

But I’ll bet that this latter list would be shorter than one made up in response to the question relayed by Tyler combined with one drawn up in response to the question that I pose above in the third paragraph (call this list “the combined list”).

And whether shorter or not, what other germane characteristics might distinguish the items on this last list from the combined list?

A version of this post first appeared at Cafe Hayek.

Donald J. Boudreaux

Donald Boudreaux is a senior fellow with the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center at George Mason University, a Mercatus Center Board Member, a professor of economics and former economics-department chair at George Mason University, and a former FEE president.

Regulators Are Not Heroes by Adam C. Smith & Stewart Dompe

Amazon is suing thousands of “fake” reviewers, who, for a fee, have posted positive reviews for various products. These pseudo reviews violate the spirit — and possibly the functionality — of Amazon’s largely self-governed rating system. Customers rely on reviews to guide their own choices, and a wave of sponsored reviews can mislead them into choosing inferior products.

A similar theme plays out in George Akerlof and Robert Shiller’s newest behavioral economics-cum-self-help book, Phishing for Phools. The authors, both Nobel laureates, claim that an unregulated market leads to massive amounts of manipulation and deception. Just how much remains unspecified, but the general thrust of the argument is that regulatory heroes are needed to rein in villainous dealers.

Heroic Regulators?

It is no surprise then that the authors favor heroic efforts of an older progressive sort, such as the works of Alice Lakey or her modern-day counterpart Elizabeth Warren. Their work, respectively, led to the establishment of the Food and Drug Administration and the Consumer Financial Protection Bureau. These progressives are seen as heroic for taking “action not selfishly but for the public good.” The trouble with such heroes, however, is that they invariably focus not on educating consumers so that they may make better choices but on corralling the cat herd of bureaucrats and politicians into ever-expanding spheres of regulation.

While it is true that consumer regulation can provide focal points that help buyers and sellers interact — in fact, Amazon appealed to just that in its lawsuit — this truth nevertheless misses the pivotal point (and an awkward one for Akerlof and Shiller) that it is Amazon that is working to resolve the problem, not government regulators.

Make no mistake. Akerlof’s classic paper on the quality of goods in a world of imperfect information clearly outlines a problem that markets must address, but it is a problem for both consumers and the market platforms on which they participate. Those platforms have a natural incentive to promote the information consumers need in order to make more informed decisions. The incentives faced by regulators are less well aligned with consumers’ interests. (But advocates of regulation rarely ask what incentives drive government regulators.)

There is another aspect of Akerlof’s model that is telling in this regard: in equilibrium, the so-called “lemons market” should unravel as more and more consumers become frustrated with ever-decreasing levels of quality. Thus, the market platform should topple over. The trouble with this theoretical outcome is that it again fails to account for the empirical observation that it is markets that are solving market problems.

Akerlof’s co-recipient of the 2001 Nobel Prize, Michael Spence, would have no trouble with this observation. Spence noted that it is far more interesting to compare the outcomes in the market to what is possible in a world of incomplete information, not to what is found where no imperfection exists by assumption. Spence explained in his Nobel address that when facing a world of imperfect information, the asymmetry between buyer and seller “cannot be simply removed by a wave of the pen.”

Compared to What?

Even when we acknowledge that individuals may be limited in their analytical and decision-making capabilities, we must ask ourselves, “Compared to what?” As noted elsewhere in these pages, every flaw in consumers is worse in voters. Furthermore, the immediate call for greater government regulation ignores the ongoing knowledge problem: acquiring information is limited by the abilities of normal people (after all, we can’t all be heroes). Knowing which transactions to avoid is valuable information, but that knowledge must first be discovered to be shared. If this information is not readily attainable, then it is unclear how regulators will know what market processes to target, much less how to improve on them.

And if the information does exist, then there is an opportunity for entrepreneurial action to gather this information and sell it to consumers. Put another way, market failures that cause individuals to make poor decisions are themselves profit opportunities for entrepreneurs to help people make better decisions.

In a world of uncertainty, ensuring quality can be a powerful competitive advantage. Amazon wants you, the customer, to use its search and recommendation system to buy new products, products that you cannot physically touch and inspect. The review system is one method of overcoming this informational asymmetry. When the integrity of the review system is challenged, Amazon is faced with the prospect of a lower volume of transactions and therefore lower profits.

Private Heroes

This is why Amazon is acting to curtail its rogue members. Retailers can only justify high prices when they can guarantee quality. Amazon’s feedback system constitutes a significant informational subsidy to its users, and the company is willing to create this information (or have it created by users) because it leads to a higher volume of trade and the accompanying consumer benefits that Amazon brings to book readers worldwide.

What Akerlof and Shiller miss is that creating and maintaining a viable platform for trade opportunities is enormously expensive. Having customers exit the door, never to return — or perhaps write negative Yelp reviews — causes instability in the market that can be fatal if left unattended.

Rather than focusing on the failure of consumers, the original sin of our humanity, we should instead notice how information entrepreneurs are enabling us to make better choices. The information revolution led by these innovators has changed the world, with the costs of distribution lower than ever. These may not be the welfarist heroes of Akerlof and Shiller’s fantasy world, but they are the market troubleshooters of the one we actually occupy.

Public-spirited regulators may be the heroes we want, but they are not the heroes we need.

Adam C. Smith

Adam C. Smith is an assistant professor of economics and director of the Center for Free Market Studies at Johnson & Wales University. He is also a visiting scholar with the Regulatory Studies Center at George Washington University and coauthor of the forthcoming Bootleggers and Baptists: How Economic Forces and Moral Persuasion Interact to Shape Regulatory Politics.

Stewart Dompe

Stewart Dompe is an instructor of economics at Johnson & Wales University. He has published articles in Econ Journal Watch and is a contributor to Homer Economicus: Using The Simpsons to Teach Economics.

Everyone Is Talking about Bitcoin by Jeffrey A. Tucker

I’m getting a flurry of messages: how do I buy Bitcoin? What’s the best article explaining this stuff? How to answer the critics? (Might try here, here, here, and here.)

Markets can be unpredictable. But the way people talk about markets is all too predictable.

When financial assets go up in price, they become the topic of conversation. When they go way up in price, people feel an itch to buy. When they soar to the moon, people jump in the markets — and ride the price all the way back down.

Then while the assets are out of the news, they disappear from the business pages and only the savviest investors buy. Then they ride the wave up.

This is why smart money wins and dumb money loses.

Bitcoin Bubbles and Busts

It’s been this way for seven years with Bitcoin. When the dollar exchange rate is falling, people get bored or even disgusted. When it is rising, people get interested and excited. The challenge of Bitcoin is to see through the waves of hysteria and despair to take a longer view.

In the end, Bitcoin is not really about the dollar exchange rate. It is about its use as a technology. If Bitcoin were only worth a fraction of a penny, the concept would already be proven. It demonstrates that money can be a digital product, created not by government or central banks but rather through the same kind of ingenuity that has already transformed the world since the advent of the digital age.

When the Bitcoin white paper came out in October 2008, only a few were interested. Five years would pass before discussion of the idea even approached the mainstream. Now we see the world’s largest and most heavily capitalized banks, payment processing companies, and venture capitalists working to incorporate Bitcoin’s distributed ledger into their operations.

In between then and now, we’ve seen wild swings of opinion among the chattering classes. When Bitcoin hit $30 in February 2013, people were screaming that it was a Ponzi-like bubble destined to collapse. I’ve yet to see a single mea culpa post from any of these radical skeptics. It’s interesting how the incessantly wrong slink away, making as little noise as possible.

For the last year, the exchange rate hovered around $250, but because this was down from its high, people lost interest. What is considered low and what is considered high are based not on fundamentals but on the direction of change.

What Is the Right BTC Price?

The recent history of cryptocurrency should have taught this lesson: no one knows the right exchange rate for Bitcoin. That is something to be discovered in the course of market trading. There is no final answer. The progress of technology and the shaping of economic value know no end.

On its seventh birthday, Bitcoin broke from its hiatus and has spiked to over $350, on its way to $400. And so, of course, it is back in the news. Everyone wants to know the source of the last price run up. There is speculation that it is being driven by demand from China, where bad economic news keeps rolling in. There has also been a new wave of funding for Bitcoin enterprises, plus an awesome cover story in the Economist magazine.

Whatever the reason, this much is increasingly clear: Bitcoin is perhaps the most promising innovation of our lifetimes, one that points to a future of commodified, immutable, and universal information exchange. It could not only revolutionize contracting and titling. It could become a global currency that operates outside the nation state and banking structures as we’ve known them for 500 years. It could break the model of money monopolization that has been in operation for thousands of years.

Technology in Fits and Starts

Those of us in the Bitcoin space, aware of the sheer awesomeness of the technology, can grow impatient, waiting for history to catch up to technical reality. We are daily reminded that technology does not descend on the world on a cloud in its perfected form, ready for use by the consuming public. It arrives in fits and starts, is subjected to trials and improvement, and has its applications tested against real-world conditions. It passes from hand to hand in succession, with unpredictable winners and losers.

Successful technology does not become socially useful in the laboratory. Market experience combined with entrepreneurial risk is the means by which ideas come to make a difference in the world at large.

Bitcoin was not created in the monetary labs of the Federal Reserve or banks or universities. It emerged from a world of cypherpunks posting on private email lists — people not even using their own names.

In that sense, Bitcoin had every disadvantage: No funding, no status, no official endorsements, no big-name boosters. It has faced an ongoing flogging by bigshots. It’s been regulated and suppressed by governments. It’s been hammered constantly by scammers, laughed at by experts, and denounced by moralists for seven straight years.

And yet, even given all of this, it has persisted solely on its own merits. It is the ultimate “antifragile” technology, growing stronger in the face of every challenge.

What will be the main source of Bitcoin’s breakout into the mainstream? Commentary trends suggest it will be international remittances. It is incredible that moving money across national borders is as difficult and expensive as it is. With Bitcoin, you remove almost all time delays and transaction costs. So it is not surprising that this is a huge potential growth area for Bitcoin.

The Economist takes a different direction. It speculates that Bitcoin technology will be mostly useful as a record-keeping device. It is “a machine for creating trust.”

One idea, for example, is to make cheap, tamper-proof public databases — land registries, say (Honduras and Greece are interested); or registers of the ownership of luxury goods or works of art. Documents can be notarised by embedding information about them into a public blockchain — and you will no longer need a notary to vouch for them.

Financial-services firms are contemplating using blockchains as a record of who owns what instead of having a series of internal ledgers. A trusted private ledger removes the need for reconciling each transaction with a counterparty, it is fast and it minimises errors.
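The mechanics behind such notarization are worth a quick sketch. A document is hashed down to a short, fixed-size digest, and only that digest is published to the chain; anyone holding the original can later re-hash it and compare. Below is a minimal Python illustration. The hashing is standard SHA-256; embedding the digest in, say, a Bitcoin OP_RETURN output is one common approach, not something the Economist piece specifies.

```python
import hashlib

def document_fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file: a short, fixed-size
    fingerprint that could be embedded in a public blockchain
    (for example, in a Bitcoin OP_RETURN output) as proof that
    this exact document existed at the time of the transaction."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """Re-hash the document and compare with the digest recorded
    on the chain; changing even one byte changes the hash."""
    return document_fingerprint(path) == recorded_digest
```

Because only the digest is published, the document itself stays private; the chain vouches for its existence and integrity without revealing its contents.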

We Need Bitcoin 

No one knows for sure. What we do know is that we desperately need this as a tool to disintermediate the world, liberating us from the governments that have come to stand between individuals and the realization of their dreams.

In 1974, F.A. Hayek dreamed of a global currency that operated outside governments and central banks. If governments weren’t going to reform money, markets would need to step up and do it themselves. Bitcoin is the most successful experiment in this direction we’ve yet seen.

And that is true whether or not your friends and neighbors are talking about it.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

New York’s Taxi Cartel Is Collapsing — Now They Want a Bailout! by Jeffrey A. Tucker

An age-old rap against free markets is that they give rise to monopolies that use their power to exploit consumers, crush upstarts, and stifle innovation. It was this perception that led to “trust busting” a century ago, and continues to drive the monopoly-hunting policy at the Federal Trade Commission and the Justice Department.

But if you look around at the real world, you find something different. The actually existing monopolies that do these bad things are created not by markets but by government policy. Think of sectors like education, mail, courts, money, or municipal taxis, and you find a reality that is the opposite of the caricature: public policy creates monopolies while markets bust them.

For generations, economists and some political figures have been trying to bring competition to these sectors, but with limited success. The case of taxis makes the point. There is no way to justify the policies that keep these cartels protected. And yet they persist — or, at least, they have persisted until very recently.

In New York, we are seeing a collapse as inexorable as the fall of the Soviet Union itself. The app economy introduced competition in a surreptitious way. It invited people to sign up to drive people here and there and get paid for it. No more standing in lines on corners or being forced to split fares. You can stay in the coffee shop until you are notified that your car is there.

In less than one year, we’ve seen the astonishing effects. Not only has the price of taxi medallions fallen dramatically from a peak of $1 million, it’s not even clear that there is a market remaining at all for these permits. There hasn’t been a single medallion sale in four months. They are on the verge of becoming scrap metal or collector’s items destined for eBay.

What economists, politicians, lobbyists, writers, and agitators failed to accomplish for many decades, a clever innovation has achieved in just a few years. No one on the planet could have predicted this collapse just five years ago. Now it is a living fact.

Reason TV does a fantastic job of covering what’s going on with taxis in New York. Now if this model can be applied to all other government-created monopolies, we might see genuine progress toward a truly competitive economy. After all, it turns out that the free market is the best anti-monopoly weapon ever developed.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

Video Game Developers Face the Final Boss: The FDA by Aaron Tao

As I drove to work the other day, I heard a very interesting segment on NPR that featured a startup designing video games to improve cognitive skills and relieve symptoms associated with a myriad of mental health conditions.

One game, Project Evo, has shown good preliminary results in training players to ignore distractions and stay focused on the task at hand:

“We’ve been through eight or nine completed clinical trials, in all cognitive disorders: ADHD, autism, depression,” says Matt Omernick, executive creative director at Akili, the Northern California startup that’s developing the game.

Omernick worked at LucasArts for years, making Star Wars games, where players attack their enemies with light sabers. Now, he’s working on Project Evo. It’s a total switch in mission, from dreaming up best-sellers for the commercial market to designing games to treat mental health conditions.

“The qualities of a good video game, things that hook you, what makes the brain — snap — engage and go, could be a perfect vessel for actually delivering medicine,” he says.

In fact, the creators believe their game will be so effective it might one day reduce or replace the drugs kids take for ADHD.

This all sounds very promising.

In recent years, many observers (myself included) have expressed deep concerns that we are living in the “medication generation,” defined by the rapidly increasing number of young people (a trend that seems to have extended even to toddlers and infants!) taking psychotropic drugs.

As experts and laypersons continue to debate the long-term effects of these substances, the news of intrepid entrepreneurs creating non-pharmaceutical alternatives to treat mental health problems is definitely a welcome development.

But a formidable final boss stands in the way:

[B]efore they can deliver their game to players, they first have to go through the Food and Drug Administration — the FDA.

The NPR story goes on to detail how navigating the FDA’s bureaucratic labyrinth is akin to the long-grinding campaign required to clear the final dungeon in any Legend of Zelda game. Pharmaceutical companies are intimately familiar with the FDA’s slow and expensive approval process for new drugs, and for this reason, it should come as no surprise that Silicon Valley companies do their best to avoid government regulation. One venture capitalist goes so far as to say, “If it says ‘FDA approval needed’ in the business plan, I myself scream in fear and run away.”

Dynamic, nimble startups are much more in tune with market conditions than the ever-growing regulatory behemoth that is defined by procedure, conformity, and irresponsibility. As a result, conflict between these two worlds is inevitable:

Most startups can bring a new video game to market in six months. Going through the FDA approval process for medical devices could take three or four years — and cost millions of dollars.

In the tech world, where app updates and software patches are part of every company’s daily routine just to keep up with consumer habits, technology can become outdated in the blink of an eye. A regulatory hold on a product can spell a death sentence for any startup seeking to stay ahead of its fierce market competition.

Akili is the latest victim to get caught in the tendrils of the administrative state, and worst of all, in those of the FDA, which distinguished political economist Robert Higgs has described as “one of the most powerful of federal regulatory agencies, if not the most powerful.” The agency’s awesome authority extends to over twenty-five percent of all consumer goods in the United States and thus “routinely makes decisions that seal the fates of millions.”

Despite its perceived image as the nation’s benevolent guardian of health and well-being, the FDA’s actual track record is anything but, and its failures have been extensively documented in a vast economic literature.

The “knowledge problem” has foiled the whims of central planners and social engineers in every setting, and the FDA is not immune. By taking a one-size-fits-all approach to regulatory policy, it fails to take into account the individual preferences, social circumstances, and physiological attributes of the people who compose a diverse society.

For example, people vary widely in their responses to drugs, depending on variables that range from dosage to genetic makeup. In a field as complex as human health, an institution forcing its way on a population is bound to cause problems (for a particularly egregious example, see what happened with the field of nutrition).

The thalidomide tragedy of the 1960s is usually cited to explain why we need a centralized regulatory agency staffed by altruistic public servants to keep the market from being flooded by toxins, snake oils, and other harmful substances. However, this benefit needs to be weighed against the costs of withholding beneficial products.

For example, the FDA’s delay of beta blockers, which were widely available in Europe to reduce heart attacks, was estimated to have cost tens of thousands of lives. Despite this infamous episode and other repeated failures, the agency cannot overcome the institutional incentives it faces as a government bureaucracy. Those incentives strongly skew its officials toward avoiding any risk of being blamed for visible harm. Here’s how the late Milton Friedman summarized the dilemma with his usual wit and eloquence:

Put yourself in the position of a FDA bureaucrat considering whether to approve a new, proposed drug. There are two kinds of mistakes you can make from the point of view of the public interest. You can make the mistake of approving a drug that turns out to have very harmful side effects. That’s one mistake. That will harm the public. Or you can make the mistake of not approving a drug that would have very beneficial effects. That’s also harmful to the public.

If you’re such a bureaucrat, what’s going to be the effect on you of those two mistakes? If you make a mistake and approve a product that has harmful side effects, you are a devil incarnate. Your misdeed will be spread on the front page of every newspaper. Your name will be mud. You will get the blame. If you fail to approve a drug that might save lives, the people who would object to that are mostly going to be dead. You’re not going to hear from them.
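Friedman’s asymmetry can be laid out as a simple payoff table. The sketch below is only an illustration of his point, with made-up labels rather than data: the official’s private payoff depends on blame, and blame attaches only to visible harm.

```python
# A toy payoff table for Friedman's FDA bureaucrat. Entries encode
# only the asymmetry he describes: visible harm draws blame, while
# the victims of a wrongly rejected drug are never heard from.
#   (decision, truth about drug) -> (effect on public, effect on official)
payoffs = {
    ("approve", "harmful"):    ("public harmed", "front-page blame"),
    ("approve", "beneficial"): ("public helped", "little credit"),
    ("reject",  "harmful"):    ("public spared", "no blame"),
    ("reject",  "beneficial"): ("public harmed", "no blame"),  # unseen victims
}

# "Reject" is never personally blamed, whatever the truth about the
# drug, so a blame-minimizing official rejects or delays when in doubt.
for (decision, truth), (public, official) in payoffs.items():
    print(f"{decision:7} + {truth:10} -> {public:13} | {official}")
```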

Critics of America’s dysfunctional healthcare system have pointed out the significant role of third-party spending in driving up prices, and how federal and state regulations have created perverse incentives and suppressed the functioning of normal market forces.

In regard to government restrictions on the supply of medical goods, the FDA deserves special blame for driving up the costs of drugs, slowing innovation, and denying treatment to the terminally ill, all while demonstrating no particular competency in product safety.

Going back to the NPR story, a Pfizer representative was quoted as saying that “game designers should go through the same FDA tests and trials as drug manufacturers.”

Those familiar with the well-known phenomenon of regulatory capture and the basics of public choice theory should not be surprised by this attitude. Existing industries, with their legions of lobbyists, come to dominate the regulatory apparatus and learn to manipulate the system to their advantage, at the expense of new entrants.

Akili and other startups hoping to challenge the status quo would have to run the gauntlet set up by the “complex leviathan of interdependent cartels” that makes up the American healthcare system. I can only wish them the best, and hope Schumpeterian creative destruction eventually sweeps the whole field of medicine.

Abolishing the FDA and eliminating its too-often abused power to withhold innovative medical treatments from patients and providers would be one step toward genuine healthcare reform.

A version of this post first appeared at The Beacon.

Aaron Tao

Aaron Tao is the Marketing Coordinator and Assistant Editor of The Beacon at the Independent Institute. Follow him on Twitter here.

Will Robots Put Everyone Out of Work? by Sandy Ikeda

Will workplace automation make the rich richer and doom the poor?

That could happen soon, warns Paul Solman, economics correspondent for PBS NewsHour. He’s talking to Jerry Kaplan, author of a new book that seems to combine Luddism with fears about inequality.

PAUL SOLMAN: And the age-old fear of displaced workers, says Kaplan, is finally, irrevocably upon us.

JERRY KAPLAN: What happens to people who simply can’t acquire or don’t have the skills that are going to be needed in the new economy?

PAUL SOLMAN: Well, what is going to happen to them?

JERRY KAPLAN: We’re going to see much worse income inequality. And unless we take some humanitarian actions, the truth is, they’re going to starve and live in poverty and then die.

PAUL SOLMAN: Kaplan offers that grim prognosis in a new book, Humans Need Not Apply. He knows, of course, that automation has been replacing labor for 200 years or more, for decades, eliminating relatively high-paying factory jobs in America, and that new jobs have more than kept pace, but not anymore, he says.

I haven’t read Kaplan’s book, but you can get a sense of the issue from this video.

The fear is that, unlike the past, when displaced workers could learn new skills for a different industry, advanced “thinking machines” will soon fill even highly skilled positions, making it that much harder to find a job that pays a decent wage. And while the Luddite argument assumes that the number of jobs in an economy is fixed, the fear now is that whatever jobs may be created will simply be filled by even smarter machines.

This new spin sounds different, but it’s essentially the same old Luddite fallacy on two levels. First, while it’s true that machinery frequently substitutes for labor in the short term, automation tends to complement labor in the long term; and, second, the primary purpose of markets is not to create jobs per se; it is to create successful ventures by satisfying human wants and needs.

While I understand that Kaplan offers some market-oriented solutions, the mainstream media has emphasized the more alarmist aspects of his thesis. The Solmans of the world would like the government to respond with regulations to slow or prevent the introduction of artificial intelligence — or to at least subsidize the kind of major labor-force adjustments that such changes appear to demand.

Short-Term Substitutes, Long-Term Complements

Fortunately, Henry Hazlitt long ago worked out in a clear, careful, and sympathetic way the consequences of innovations on employment in his classic book, Economics in One Lesson. Here’s a brief outline of the chapter relevant to our discussion, “The Curse of Machinery”:

(As Hazlitt notes, not all innovations are “labor-saving.” Many simply improve the quality of output, but let’s put that to one side. Let’s also put aside the very real problem that raising the minimum wage will artificially accelerate the trend toward automation.)

Suppose a person who owns a coat-making business invests in a new machine that makes the same number of coats with half the workers. (Assume for now that all employees work eight-hour days and earn the going wage.) What’s easy to see is that, say, 50 people are laid off; what’s harder to see is that other people will be hired to build that new machine. If the new machine does reduce the business’s cost, however, then presumably it takes fewer than 50 people to build it. If it takes, say, 30 people, there still appears to be a net loss of 20 jobs overall.

But the story doesn’t end there. Assuming the owner doesn’t lower her price for the coats she sells, Hazlitt notes that there are three things she can do with the resulting profit. She can use it to invest in her own business, to invest in some other business, or to spend on consumption goods for herself and others. Whichever she does means more production and thus more employment elsewhere.

Moreover, competition in the coat industry will likely lead her rivals to adopt the labor-saving machinery and to produce more coats. Buying more machines means more employment in the machine-making industry, and producing more coats will, other things equal, lower the price of coats.

Now, buying more machines will probably mean she has to hire more workers to operate or maintain them, and lower coat prices mean that consumers will have more disposable income to spend on goods in general, including coats.

The overall effect is to increase the demand for labor and the number of jobs, which conforms to our historical experience in many industries. So, if all you see are the 50 people initially laid off, well, you’ve missed most of the story.
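A toy tally keeps the seen and unseen columns straight. In the Python sketch below, only the 50 laid-off workers and the 30 machine-builders come from Hazlitt’s example as retold above; the figure for jobs supported by the reinvested profit is an arbitrary assumption, chosen purely to show how the unseen effects can outweigh the seen ones.

```python
# Hazlitt's coat-machine example, as a running tally.
laid_off = 50            # seen: coat workers displaced by the machine
machine_builders = 30    # less seen: workers hired to build the machine

visible_change = machine_builders - laid_off
print(f"Apparent net job change: {visible_change}")   # -20

# Unseen: the cost saving becomes profit that is invested or spent,
# employing people elsewhere. The 25 is an illustrative assumption,
# not a figure from Hazlitt.
jobs_from_profit = 25
print(f"Net change with unseen effects: {visible_change + jobs_from_profit}")  # +5
```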

Despite claims to the contrary, it’s really no different in the case of artificial intelligence.

Machines might substitute for labor in the short term, but in the long term they complement labor and increase its productivity. Yes, new machines used in production will be more sophisticated and do more things than the old ones, but that shouldn’t be surprising; that’s what new machines have done throughout history.

And as I’ve written before in “The Breezes of Creative Destruction,” it usually takes several years for an innovation — even something as currently ubiquitous as smartphones — to permeate an economy. (I would guess that we each could name several people who don’t own one.) This gives people time to adjust by moving, learning new skills, and making new connections. Hazlitt recognizes that not everyone will adjust fully to the new situation, perhaps because of age or disability. He responds,

It is altogether proper — it is, in fact, essential to a full understanding of the problem — that the plight of these groups be recognized, that they be dealt with sympathetically, and that we try to see whether some of the gains from this specialized progress cannot be used to help the victims find a productive role elsewhere.

I’m pretty sure Hazlitt means that voluntary, noncoercive actions and organizations should take the lead in filling this compassionate role.

In any case, what works at the level of a single industry also works across all industries. The same processes that Hazlitt describes will operate as long as markets are left free to adjust. Using government intervention to deliberately stifle change may save the jobs we see, but it will destroy the many more jobs that we don’t see — and worse.

More Jobs, Less Work, Greater Well-Being

Being able to contribute to making one’s own living is probably essential to human happiness. And economic development has indeed meant that we’ve been spending less time working.

Although it’s hard to calculate accurately how many hours per week our ancestors worked — and some claim that people in preindustrial society had more leisure time than industrial workers — the best estimate is that the work week in the United States fell from about 70 hours in 1850 to about 40 hours today. Has this been a bad thing? Has working less led to human misery? Given the track record of relatively free markets, that’s a strange question to ask.

Take, for example, this video by Swedish doctor Hans Rosling about his mother’s washing machine. It’s a wonderful explanation of how this particular machine, sophisticated for its day, enabled his mother to read to him, which helped him to then become a successful scientist.

I had lunch with someone who was recently laid off and whose husband has a fulfilling but low-paying job. Despite this relatively low family income, she was able to fly to New York for a weekend to attend a U2 concert, take a class at an upscale yoga studio in Manhattan, and share a vegan lunch with an old friend. Our grandparents would have been dumbfounded!

As British journalist Matt Ridley puts it in his book The Rational Optimist,

Innovation changes the world but only because it aids the elaboration of the division of labor and encourages the division of time. Forget wars, religions, famines and poems for the moment. This is history’s greatest theme: the metastasis of exchange, specialization and the invention it has called forth, the “creation” of time.

The great accomplishment of the free market is not that it creates jobs (which it does) but that it gives us the time to promote our well-being and to accomplish things no one thought possible.

If using robots raises the productivity of labor, increases output, and expands the amount, quality, and variety of goods each of us can consume — and also lowers the hours we have to work — what’s wrong with that? What’s wrong with working less and having the time to promote the well-being of ourselves and of others?

In a system where people are free to innovate and to adjust to innovation, there will always be enough jobs for whoever wants one; we just won’t need to work as hard in them.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.

Environmental Doom-mongering and the Myth of Vanishing Resources by Chelsea German

Media outlets ranging from Newsweek and Time to National Geographic and even the Weather Channel all recently ran articles on the so-called “Overshoot Day,” which is defined by its official website as the day of the year

When humanity’s annual demand for the goods and services that our land and seas can provide — fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption — exceeds what Earth’s ecosystems can renew in a year.

This year, the world allegedly reached the Overshoot Day on August 13th. Overshoot Day’s proponents claim that, having used up our ecological “budget” for the year and entered into “deficit spending,” all consumption after August 13th is unsustainable.

Let’s look at the data concerning resources that, according to Overshoot Day’s definition, we are consuming unsustainably. (We’ll leave aside carbon dioxide absorption — as that issue is more complex — and focus on all the other resources.)

Fruits and vegetables

Since millions of people rose from extreme poverty and starvation over the past few decades, the world is consuming more fruits and vegetables than before. We are also producing more fruits and vegetables per person than before. That is partly because of increasing yields, which allow us to extract more food from less land. Consider vegetable yields:

Meat and fish

As people in developing countries grow richer, they consume more protein (e.g., meat). The supply of meat and fish per person is rising to meet the increased demand, just as with fruits and vegetables. Overall dietary supply adequacy is, therefore, increasing.

Wood

It is true that the world is losing forest area, but there is cause for optimism. The United States has more forest area today than it did in 1990.

As Ronald Bailey says in his new book The End of Doom, “In fact, except in the cases of India and Brazil, globally the forests of the world have increased by about 2 percent since 1990.”

As the people of India and Brazil grow wealthier and as new forest-sparing technologies spread, those countries will likely follow suit. To quote Jesse H. Ausubel:

Fortunately, the twentieth century witnessed the start of a “Great Restoration” of the world’s forests. Efficient farmers and foresters are learning to spare forestland by growing more food and fiber in ever-smaller areas. Meanwhile, increased use of metals, plastics, and electricity has eased the need for timber. And recycling has cut the amount of virgin wood pulped into paper.

Although the size and wealth of the human population has shot up, the area of farm and forestland that must be dedicated to feed, heat, and house this population is shrinking. Slowly, trees can return to the liberated land.

Cotton

Cotton yields are also increasing — as is the case with so many other crops. Not only does this mean that we will not “run out” of cotton (as the Overshoot Day proponents might have you believe), but it also means consumers can buy cheaper clothing.

Please consider the graph below, showing U.S. cotton yields rising and cotton prices falling.

While it is true that humankind is consuming more, innovations such as GMOs and synthetic fertilizers are also allowing us to produce more. Predictions of natural resource depletion are not new.

Consider the famous bet between the environmentalist Paul Ehrlich and economist Julian Simon: Ehrlich bet that the prices of five essential metals would rise as the metals became scarcer, exhausted by the needs of a growing population. Simon bet that human ingenuity would rise to the challenge of growing demand, and that the metals would decrease in price over time. Simon and human ingenuity won in the end. (Later, the prices of many metals and minerals did increase, as rapidly developing countries drove up demand, but those prices are starting to come back down again).

To date, humankind has never exhausted a single natural resource. To learn more about why predictions of doom are often exaggerated, consider watching Cato’s recent book forum, The End of Doom.

A version of this post first appeared at Cato.org.

Chelsea German

Chelsea German works at the Cato Institute as a Researcher and Managing Editor of HumanProgress.org.


How Minimum Wages Discourage Entrepreneurship by Donald J. Boudreaux

In a letter to the Wall Street Journal, Brian Collins asks, “Do you truly believe that absent any increase in the minimum wage that Wendy’s or any other business will suspend efforts to develop and implement new forms of automation that promise to reduce staff levels?”

The answer is “no.” Contrary to Mr. Collins’s implication, however, this fact does nothing to excuse raising the minimum wage.

Even in a world in which market forces naturally promote automation, raising the minimum wage has two pernicious effects.

First, it causes the rate of automation to be faster than it would be if the minimum wage were not raised. That is, raising the minimum wage results in automation being introduced at a rate that is too fast given the size of the low-skilled labor force.

Second, raising the minimum wage destroys incentives for entrepreneurs and businesses to find ways to profitably employ workers whose limited skills prevent them from producing hourly outputs valued at least as high as the minimum wage.

The first effect throws some low-skilled workers out of jobs that they would otherwise retain, while the second effect ensures that no one has incentives to find ways to profitably employ these and other low-skilled workers.
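Both effects follow from a one-line threshold condition: an employer can profitably pay a worker no more than the value of what that worker produces in an hour. The Python sketch below makes the condition concrete; the productivity figures are invented for illustration, not drawn from any data.

```python
# Hourly output values for five hypothetical workers (made-up numbers).
workers = {"A": 6.50, "B": 8.00, "C": 11.00, "D": 15.50, "E": 22.00}

def employable(workers, minimum_wage):
    """A worker can be profitably employed only if the value of his or
    her hourly output is at least the legal wage floor."""
    return [name for name, output in workers.items() if output >= minimum_wage]

for wage in (7.25, 10.00, 15.00):
    print(f"Minimum wage ${wage:6.2f}: employable = {employable(workers, wage)}")
# Each increase in the floor prices out another tier of workers,
# starting with those whose skills are least valuable.
```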

If it is inhumane to outlaw the profitable employment of those workers whose skills are the least valuable, then the minimum wage is deeply inhumane.

If the government instituted a minimum wage of $100 per hour and, therefore, made unlawful the profitable employment of all those people whose skills are too meager to enable them to produce at least $100 worth of output per hour, there would be a national uproar — and rightly so.

Yet when the government implements such a policy but in a way that outlaws the profitable employment only of people whose skill-sets are among the lowest, relatively few people object and many people — especially “Progressives” — applaud the policy as humane.

How sad. And how especially sad that many economists today, who above all should know better, lend their authority to such an inhumane policy.

A version of this letter first appeared at Café Hayek.

Donald J. Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

Capitalists from Outer Space by B.K. Marcus

When the aliens stop trifling with crop circles, bumpkin abduction, and indelicate probes and finally introduce themselves to the rest of humanity, will they turn out to be partisans of central planning, interventionism, or unhampered markets?

This is not the question asked by the Search for Extraterrestrial Intelligence (SETI) Institute, but whether or not the institute’s scientists realize it, the answer is crucial to their search.

Signs of Intelligent Life

The SETI Institute counts Frank Drake and the late Carl Sagan among its early leaders. Its scientists do not believe we have been visited yet. UFO sightings and abduction stories don’t stand up under scientific scrutiny, they say. Nor are they waiting for flying saucers. Because the aliens’ signals will likely reach Earth before their spaceships do, SETI monitors the skies for transmissions from advanced civilizations orbiting distant stars.

The scientific search for evidence of advanced alien societies began in 1960, when Drake aimed a 25-meter dish at two nearby stars. The previous year, the journal Nature had published an article called “Searching for Interstellar Communications,” which suggested that distant civilizations might transmit greetings at the same wavelength as the radio emission of hydrogen (the universe’s most common element). Drake found no such signals, nor has SETI found any evidence of interstellar salutations since. But it’s not giving up.

The Truth Is Out There

Before we can ask after advanced alien political economy, we must confront the more basic question: Is there anybody out there? SETI has been searching for over half a century. That may seem like a long time, but there are, as Sagan underscored, “billions and billions of stars.” How many of them should we expect to monitor before finding one that’s transmitting?

In an attempt to address, if not answer, the question, Drake proposed an equation in 1961 to summarize the concepts scientists think are relevant to any educated guess.

Here is how Sagan explains the Drake equation in the book Cosmos:

N*, the number of stars in the Milky Way Galaxy;
fp, the fraction of stars that have planetary systems;
ne, the number of planets in a given system that are ecologically suitable for life;
fl, the fraction of otherwise suitable planets on which life actually arises;
fi, the fraction of inhabited planets on which an intelligent form of life evolves;
fc, the fraction of planets inhabited by intelligent beings on which a communicative technical civilization develops;
and fL, the fraction of a planetary lifetime graced by a technical civilization.
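Multiplying the terms gives N, the expected number of communicative civilizations in the galaxy: N = N* × fp × ne × fl × fi × fc × fL. A short Python sketch makes the structure, and the guesswork, explicit; the sample values below are illustrative placeholders, not estimates from Sagan or Drake.

```python
def drake(n_stars, f_p, n_e, f_l, f_i, f_c, f_L):
    """N = N* * fp * ne * fl * fi * fc * fL: the expected number of
    communicative civilizations in the Milky Way."""
    return n_stars * f_p * n_e * f_l * f_i * f_c * f_L

# Placeholder values chosen only to show how the equation behaves.
# Every term past the first two is guesswork, the last one most of all.
N = drake(
    n_stars=4e11,  # stars in the galaxy
    f_p=0.5,       # fraction with planetary systems
    n_e=2,         # suitable planets per such system
    f_l=0.1,       # fraction where life arises
    f_i=0.01,      # fraction where intelligence evolves
    f_c=0.01,      # fraction developing communicative technology
    f_L=1e-5,      # fraction of planetary lifetime with such a civilization
)
print(f"Expected communicative civilizations: {N:.0f}")  # 40, given these guesses
```

Because the terms multiply, shaving an order of magnitude off any one of them shaves it off N as well, which is why the final term matters so much.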

The End of the World as We Know It

Sagan expounds on all the terms in the equation, but it’s that last one that absorbs him: How long can an advanced civilization last before it destroys itself?

Perhaps civilizations arise repeatedly, inexorably, on innumerable planets in the Milky Way, but are generally unstable; so all but a tiny fraction are unable to survive their technology and succumb to greed and ignorance, pollution and nuclear war.

Sagan wrote Cosmos toward the end of the Cold War. He mentioned other threats — greed, ignorance, pollution — but the specter of mutual annihilation haunted him. When he imagined the end of an advanced society, he pictured something permanent.

“It is hardly out of the question,” he wrote, “that we might destroy ourselves tomorrow.” Perhaps, Sagan feared, the general pattern is for civilizations to “take billions of years of tortuous evolution to arise, and then snuff themselves out in an instant of unforgivable neglect.”

The Rise and Fall of Civilization

We cannot know if the civilizational survival rate on other planets is high or low, and so the final term in the Drake equation is guesswork, but some guesses are better than others.

“One of the great virtues of [Drake’s] equation,” Sagan wrote, “is that it involves subjects ranging from stellar and planetary astronomy to organic chemistry, evolutionary biology, history, politics and abnormal psychology.”

That’s quite an array of topics to inform an educated guess, but notice that he doesn’t mention economics.

Perhaps he thought politics covered it, but Sagan’s political focus was more on questions of war and peace than poverty and wealth. In particular, he considered the end of civilization to be an event from which it would take a planet billions of years to recover.

The history of our own species suggests that this view is too narrow. Yes, a nuclear war could wipe out humanity, but civilizations do destroy themselves in less permanent ways.

There have been two dark ages in Western history: the Mycenaean-Greek and the post-Roman. Both were marked by retrogression in technology, art, and literacy. Both saw a drop in overall population and in population density, as survivors left towns and cities for a more autarkic existence in the countryside. And both underwent a radical decline in foreign trade and the division of labor. Market societies deteriorated into disparate cultures of subsistence farming.

The ultimate causes of the Greek Dark Age are a mystery. As with the later fall of the Roman Empire, the Mycenaean demise was marked by “barbarian” invasions, but the hungry hordes weren’t new: successful invasions depend on weakened defenses and deteriorating infrastructure. What we know is that worsening poverty marked the fall, whether as cause, effect, or both.

The reasons for the fall of the Roman West are more evident, if still debated. Despite claims of lead poisoning, poor sanitation, too much religion, too little religion, and even, believe it or not, inadequate central planning, the empire’s decline resulted from bad economic policy.

To help us see this more clearly, Freeman writer Nicholas Davidson suggests in his magnificent 1987 article “The Ancient Suicide of the West” that we look to the signs of cultural and economic decline rather than to the changes, however drastic, in political leadership. While the Western empire did not fall to the barbarians until the fifth century AD, “The Roman economy [had] reached its peak toward the middle of the first century AD and thereafter began to decline.” As with the Mycenaean Greeks, the decay was evident in art and literature, science and technology. Civilization cannot advance in poverty. Wealth and civilization progress together.

How to Kill Progress

“The stagnation in all aspects of society,” Davidson writes, “was associated with a continuous extension of governmental functions. Social engineering was tried on the grand scale. The state relentlessly expanded into commerce, industry, and private life.”

As we look to our own future — or anticipate the politics of our alien brethren — we can draw on the experience of humanity’s past to help us appreciate the economics of progress and decline. Over and over, we see the same pattern: some group gains a temporary benefit from a world in flux. When further social and economic changes check those advantages, the old guard turns to the state for protection from the dynamism of a healthy society. Adaptation is stymied. Nothing is allowed to evolve. The politically privileged — military and civilian, rich and poor — sacrifice their civilization in a doomed attempt to ward off change.

The Sustainable Society

Evolutionary science, economic theory, and cybernetics yield the same lessons: stability requires flexibility; complexity flourishes under spontaneous order; centralization leads to stagnation.

To those general lessons, economics adds insights specific to the context of scarcity: private property and voluntary exchange produce greater general wealth, longer time horizons, and ever more investment in the “luxuries” of scientific investigation, technological innovation, and a more active stewardship of the environment. Trade promotes peace, and a global division of labor unites the world’s cultures in mutual self-interest.

If, as Sagan contends, an advanced civilization would require political stability and sizable long-term investment in science and technology to survive an interstellar spacefaring phase, then we should expect any such civilization to embrace a planetwide system of free trade and free markets grounded in private property. For the civilization to last the centuries and millennia necessary to explore and colonize the stars, its governing institutions will have to be minimal and decentralized.

The aliens will, in short, embrace what Adam Smith called “the system of natural liberty.” Behind their transmissions, SETI should expect to find the invisible hand.

Scientists versus Freedom

When we do make contact, “the consequences for our own civilization will be stunning,” Sagan wrote. Humanity will gain “insights on alien science and technology, art, music, politics, ethics, philosophy and religion…. We will know what else is possible.”

What did Sagan himself believe possible? Had he survived to witness first contact, would he be surprised to learn of the capitalist political economy at the foundation of an advanced extraterrestrial civilization?

Neil deGrasse Tyson, who remade the Cosmos television series for the 21st century, recommends reading Adam Smith’s Wealth of Nations but only “to learn that capitalism is an economy of greed, a force of nature unto itself.”

We shouldn’t assume that Tyson represents Sagan’s economic views, but when Sagan did address questions of policy, he advocated a larger welfare state and greater government spending. When he talked about “us” and “our” responsibilities, he invariably meant governments, not private individuals.

Sagan wrote, “It may be that civilizations can be divided into two great categories: one in which the scientists are unable to convince nonscientists to authorize a search for extraplanetary intelligence … and another category in which the grand vision of contact with other civilizations is shared widely.”

Why would scientists have to persuade anyone else to authorize anything? Sagan could only imagine science funded by government. It was apparently beyond credibility that less widely shared visions could secure sufficient funding.

It’s a safe guess, then, that when he talks of civilizations that are “unable to survive their technology and succumb to greed,” Sagan is talking about the profit motive.

And yet, it is the profit motive that drives innovation, and it is the great wealth generated by profit seekers that allows later generations of innovators to pursue their visions with fewer financial inducements. Whether directly or indirectly, profits pay for progress.

Self-Interested Enlightenment

Why does it matter if astronomers misunderstand the market? Does SETI really need to appreciate the virtues of individual liberty to monitor the heavens for signs of intelligent life?

Scientists can and do excel in their fields without understanding how society works. But that doesn’t mean their ignorance of economics is harmless. The more admired they are as scientists — especially as popularizers of science — the more damage they can do when they speak authoritatively outside their fields. Their brilliance in one discipline can make them overconfident about their grasp of others. And increasingly, the questions facing the scientific community cross multiple specialties. It was the cross-disciplinary nature of Drake’s equation that Sagan saw as its great virtue.

The predictions of the astronomer looking for extraterrestrial socialists will be different from those of someone who expects the first signals of alien origin to come from a radically decentralized civilization — a society of private individuals who have discovered the sustainable harmony of self-interest and the general welfare.

After that first contact, after we’ve gained “insights on alien science and technology” and we get around to learning alien history, will we discover that their species has witnessed civilizations rise and fall? What was it that finally allowed them to break the cycle? How did they avoid stagnation, decline, and self-destruction?

How did they, as a culture, come to accept the economic way of thinking, embrace the philosophy of freedom, and develop a sustainable civilization capable of reaching out to us, the denizens of a less developed world?


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.