Why We Need to Make Mistakes: Innovation Is Better than Efficiency by Sandy Ikeda

“I think it is only because capitalism has proved so enormously more efficient than alternative methods that it has survived at all,” Milton Friedman told economist Randall E. Parker for Parker’s 2002 book, Reflections on the Great Depression.

But I think innovation, not efficiency, is capitalism’s greatest strength. I’m not saying that the free market can’t be both efficient and innovative, but it does offer people a strong incentive to abandon the pursuit of efficiency in favor of innovation.

What Is Efficiency?

In its simplest form, economic efficiency is about given ends and given means. Economic efficiency requires that you know what end, among all possible ends, is the most worthwhile for you to pursue and what means to use, among all available means, to attain that end. You’re being efficient when you’re getting the highest possible benefit from an activity at the lowest possible cost. That’s a pretty heavy requirement.

Being inefficient, then, implies that for a given end, the benefit you get from that end is less than the cost of the means you use to achieve it. Or, as my great professor, Israel Kirzner, puts it: if you want to go uptown, don’t take the downtown train.

What Is Innovation?

Innovation means doing something significantly novel. It could be doing an existing process in a brand new way, such as being the first to use a GPS tracking system in your fleet of taxis. Or, innovation could mean doing something that no one has ever done before, such as using smartphone technology to match car owners with spare time to carless people who need to get somewhere in a hurry, à la Uber.

Innovation, unlike efficiency, entails discovering novel means to achieve a given end, or discovering an entirely new end. And unlike efficiency, in which you already know about all possible ends and means, innovation takes place only when you lack knowledge of all means, all ends, or both.

Sometimes we mistakenly say someone is efficient when she discovers a new way to get from home to work. But that’s not efficiency; that’s innovation. And a person who copies her in order to reduce his commute time is not an innovator — but he is being efficient. The difference hinges on whether you’re creating new knowledge.

Where’s the Conflict?

Starting a business that hasn’t been tried before involves a lot of trial and error. Most of the time the trials, no matter how well thought out, turn out to contain errors. The errors may lie in the means you use or in the particular end you’re pursuing.

In most cases, it takes quite a few trials and many, many errors before you hit on an outcome that has a high enough value and low enough costs to make the enterprise profitable. Is that process of trial and error, of experimentation, an example of economic efficiency? It is not.

If you begin with an accurate idea both of the value of an end and of all the possible ways of achieving that end, then you don’t need to experiment. Spending resources on trial and error would be wasteful. It’s then a matter of execution, which isn’t easy, but the real heavy lifting in the market process, both from the suppliers’ and the consumers’ sides, is done by trying out new things — and often failing.

Experimentation is messy and apparently wasteful, whether in science or in business. You do it precisely because you’re not sure how to answer a particular question, or because you’re not even sure what the right question is. There are so many failures. But in a world where our knowledge is imperfect, which is the world we actually live in, most of what we have to do in everyday life is to innovate — to discover things we didn’t know we didn’t know — rather than trying to be efficient. Being willing to suffer failure is the only way to make discoveries and to introduce innovations into the world.

Strictly speaking, then, if you want to innovate, being messy is unavoidable, and messiness is not efficient. Yet, if you want to increase efficiency, you can’t be messy. Innovation and efficiency usually trade off against each other because if you’re focused on doing the same thing better and better, you’re taking time and energy away from trying to do something new.

Dynamic Efficiency?

Some have tried to describe this process of innovation as “dynamic efficiency.” It may be quibbling over words, but I think trying to salvage the concept of efficiency in this way confuses more than it clarifies. To combine efficiency and innovation is to misunderstand the essential meanings of those words.

What would it mean to innovate efficiently? I suppose it would mean something like “innovating at least cost.” But how is it possible to know, before you’ve actually created a successful innovation, whether you’ve done it at least cost? You might look back and say, “Gee, I wouldn’t have run experiments A, B, and C if only I’d known that D would give me the answer!” But the only way to know that D is the right answer is to first discover, through experimentation and failure, that A, B, and C are the wrong answers.

Both efficiency and innovation best take place in a free market. But the greatest rewards to buyers and sellers come not from efficiency, but from innovation.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

Tech Sector Bears Brunt of Capital Taxes, Random Regulation by Dan Gelernter

According to our president’s final State of the Union, we’ve recovered from the economic crisis and now enjoy the strongest, most durable economy in the world. Obama does acknowledge that startups and small businesses may need some help, so he wants to reignite our “spirit of innovation” — which he plans to do by putting Vice President Biden in charge of curing cancer.

But the problem facing startups is not a lack of innovation. We are being killed by the economy, which, for those of us who have to live in it, is not good at all. Young entrepreneurs may have spent last year working hard, innovating and building, only to find their companies are worth less now than when they started.

The market is adjusting downwards. Valuations are sinking. The investors I’ve spoken to feel the Fed’s free-money policy has created a dangerous over-valuation of companies and stocks and, now that the rates are coming back up, the air is being let out. 2015, they say, was a tough year because we knew this was coming. 2016 is going to be even tougher.

There is something else weighing on the minds of entrepreneurs and investors alike — regulatory uncertainty. No startup can deal with compliance by itself — not even software companies with no physical products to sell. Startups have to hire lawyers and compliance experts to help them, and this is money we’re not spending on product development or marketing or making our prices more competitive.

The way Obamacare is being implemented, for example, makes our hair white. The rules seem to change with bureaucratic whim; various parts of the law are suspended by executive order. How will we comply next year, and what will it cost? Nobody knows.

In the meantime, the Democratic candidates for President are proposing large hikes to the capital gains tax, which increases effective risk for investors and depresses valuations. Will these hikes ever take place? We don’t know, and that uncertainty carries an additional price.

We’re already seeing more investors decide to weather the storm on the sidelines, keeping an eye on their current holdings and declining to invest in companies they would have snapped up a year ago. A tech startup with a working product will find it harder to raise money today than it would have two years ago with nothing but a concept. Not only are we faced with a weak market now; the trend is even more disturbing.

The problem is easier to diagnose than to repair. As an entrepreneur, I’d like to see less regulation and lower taxes. And not just lower taxes on the companies themselves, but on the people who can afford to invest in them. This may come as a surprise, but it’s the hated “one percent” that invests in startups and helps entrepreneurs’ dreams come true. When taxes cut deeper into the pockets of the wealthy, it most negatively affects us — the entrepreneurs and the people we would have hired — not the wealthy.

Regulation remains erratic, and the policies of the next administration cannot be foreseen. 2016 is going to be a hard year for the startup. Investments will continue to decline until investors see a stable market. And they’re not looking at one right now. Companies will die as a result, and not for lack of innovative ideas.

Dan Gelernter

Dan Gelernter is CEO of the technology startup Dittach.

The Rise and Fall of American Growth by Emily Skarbek

Diane Coyle has reviewed Robert Gordon’s new book (out late January), The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War.

Gordon’s central argument will be familiar to readers of his work. In his view, the main technological and productivity-enhancing innovations that drove American growth in the early to mid-20th century — electricity, the internal combustion engine, running water, indoor toilets, communications, TV, chemicals, petroleum — could only happen once, have run their course, and the prospects for future growth look uninspiring. For Gordon, it is foreseeable that the rapid progress made over the past 250 years will turn out to be a unique episode in human history.

Coyle zeroes in on the two main mechanisms to which Gordon attributes the slowing of growth. The first is that future innovation will be slower or its effects less important. Coyle finds this argument less convincing.

What I find odd about Gordon’s argument is his insistence that there is a kind of competition between the good old days of ‘great innovations’ and today’s innovations – which are necessarily different.

One issue is the extent to which he ignores all but a limited range of digital innovation; low carbon energy, automated vehicles, new materials such as graphene, gene-based medicine etc. don’t feature.

The book claims more recent innovations are occurring mainly in entertainment, communication and information technologies, and presents these as simply less important (while making great play of the importance of radio, telephone and TV earlier).

While I have yet to read the book, Gordon makes several similar arguments in an NBER working paper. There he gives a few examples of his view of more recent technological innovations as compared to the Great Inventions of the mid-20th century.

More familiar was the rapid development of the web and ecommerce after 1995, a process largely completed by 2005. Many one-time-only conversions occurred, for instance from card catalogues in wooden cabinets to flat screens in the world’s libraries and the replacement of punch-hole paper catalogues with flat-screen electronic ordering systems in the world’s auto dealers and wholesalers.

In other words, the benefits of the computer revolution were one-time boosts, not lasting increases in labor productivity. Gordon then invokes Solow’s famous sentence that “we [could] see the computers everywhere except in the productivity statistics.” When the effects do show up, Gordon says, they fade out by 2004 and labor productivity flatlines.

Solow’s interpretation (~26 mins into the interview) of where the productivity gains went is different, and more consistent with Coyle’s deeper point. In short, the statistics themselves don’t capture the full gains from innovation:

And when that happened, it happened in an interesting way. It turned out when there were first clear indications, maybe 8 or 10 years later, of improvements in productivity on a national scale that could be traced to computers statistically, it turned out a large part of those gains came not in the use of the computer, but in the production of computers.

Because the cost of an item of computing machinery was falling like a stone, and the quality was at the same time, the capacity at the same time was improving. And people were buying a lot of computers, so this was not a trivial industry. …

You got big productivity gains in the production of computers and whatnot. But you could also begin to see productivity improvements on a national scale that traced to the use of computers.

Coyle’s central criticism is not just on the interpretation of the data, but on an interesting switch in Gordon’s argument:

Throughout the first two parts of the book, Gordon repeatedly explains why it is not possible to evaluate the impact of inventions through the GDP and price statistics, and therefore through the total factor productivity figures based on them — and then uses the real GDP figures to downplay modern innovation.

Coyle’s understanding of the use and abuse of GDP figures leads her to the fundamental point:

While the very long run of real GDP figures (the “hockey stick of history”) does portray the explosion of living standards under market capitalism, one needs a much richer picture of the qualitative change brought about by innovation and variety.

This must include the social consequences too — and the book touches on these, from the rise of the suburbs to the transformation of the social lives of women.

To understand Coyle’s insights more deeply, her conversation with Russ Roberts offers a fascinating discussion of GDP (no, really!).

In my view, it seems to come down to differing views about where Moore’s Law is taking us. The exponentially increasing computational power — with increasing product quality at decreasing prices — has never happened at such a sustained pace before.
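The arithmetic behind that claim is worth making explicit. As a back-of-the-envelope sketch (the two-year doubling period and the fifty-year horizon are illustrative assumptions, not figures from the text), compounding a fixed doubling period yields the kind of growth no earlier technology has sustained:

```python
# Back-of-the-envelope Moore's Law arithmetic: a quantity that doubles
# every `period` years grows by a factor of 2**(years / period).
def moores_law_factor(years: float, period: float = 2.0) -> float:
    """Growth factor after `years`, assuming a fixed doubling period."""
    return 2.0 ** (years / period)

# Illustrative: five decades of doubling every two years is 25 doublings.
print(f"{moores_law_factor(50):,.0f}x")  # 2**25 = 33,554,432x
```

Twenty-five doublings multiply computational power by more than thirty million; even generous estimates for electrification or the internal combustion engine show nothing comparable over a similar span.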

The technological Great Inventions that Gordon sees as fundamental to past sustained growth were all bursts of innovation, each followed by a substantial period in which entrepreneurs figured out how to effectively commodify that technology and deliver it to the broader economy and society. What is so interesting about the pattern of exponential technological progress is that price/performance gains have not slowed, even as some of these gains have only just begun to show signs of commodification — Uber, 3D printing, biosynthesis of living tissue, etc.

There are good reasons to think that in the past we have failed to capture all the gains from innovation in measures of total factor productivity and labor productivity, as Gordon rightly points out. But if this is true, it seems strange to me to look at the current patterns of technological progress and not see the potential for these innovations to lead to sustained growth and increases in human well-being.

This is, of course, conditional on the political economy in which innovation takes place. The second cause for low future growth for Gordon concerns headwinds slowing down whatever innovation-driven growth there might be. Here I look forward to reading the relative weights Gordon assigns to factors such as demography, education, inequality, globalization, energy/environment, and consumer and government debt. In particular, I hope to read Gordon’s own take (and others) on how the political economy environment could change the magnitude or sign of these headwinds.

The review is worth a read in advance of what will likely prove to be an important book in the debate on development and growth.

This post first appeared at Econlog, where Emily is a new guest blogger.

Emily Skarbek

Emily Skarbek is Lecturer in Political Economy at King’s College London and guest blogs on EconLog. Her website is EmilySkarbek.com. Follow her on Twitter @EmilySkarbek.

Everyone Is Talking about Bitcoin by Jeffrey A. Tucker

I’m getting a flurry of messages: how do I buy Bitcoin? What’s the best article explaining this stuff? How to answer the critics? (Might try here, here, here, and here.)

Markets can be unpredictable. But the way people talk about markets is all too predictable.

When financial assets go up in price, they become the topic of conversation. When they go way up in price, people feel an itch to buy. When they soar to the moon, people jump in the markets — and ride the price all the way back down.

Then while the assets are out of the news, they disappear from the business pages and only the savviest investors buy. Then they ride the wave up.

This is why smart money wins and dumb money loses.

Bitcoin Bubbles and Busts

It’s been this way for seven years with Bitcoin. When the dollar exchange rate is falling, people get bored or even disgusted. When it is rising, people get interested and excited. The challenge of Bitcoin is to see through the waves of hysteria and despair to take a longer view.

In the end, Bitcoin is not really about the dollar exchange rate. It is about its use as a technology. If Bitcoin were only worth a fraction of a penny, the concept would already be proven. It demonstrates that money can be a digital product, created not by government or central banks but rather through the same kind of ingenuity that has already transformed the world since the advent of the digital age.

When the Bitcoin white paper came out in October 2008, only a few were interested. Five years would pass before discussion of the idea even approached the mainstream. Now we see the world’s largest and most heavily capitalized banks, payment processing companies, and venture capitalists working to incorporate Bitcoin’s distributed ledger into their operations.

In between then and now, we’ve seen wild swings of opinion among the chattering classes. When Bitcoin hit $30 in February 2013, people were screaming that it was a Ponzi-like bubble destined to collapse. I’ve yet to see a single mea culpa post from any of these radical skeptics. It’s interesting how the incessantly wrong slink away, making as little noise as possible.

For the last year, the exchange rate hovered around $250, but because this was down from its high, people lost interest. What is considered low and what is considered high are based not on fundamentals but on the direction of change.

What Is the Right BTC Price?

The recent history of cryptocurrency should have taught this lesson: no one knows the right exchange rate for Bitcoin. That is something to be discovered in the course of market trading. There is no final answer. The progress of technology and the shaping of economic value know no end.

On its seventh birthday, Bitcoin broke from its hiatus and has spiked to over $350, on its way to $400. And so, of course, it is back in the news. Everyone wants to know the source of the last price run up. There is speculation that it is being driven by demand from China, where bad economic news keeps rolling in. There has also been a new wave of funding for Bitcoin enterprises, plus an awesome cover story in the Economist magazine.

Whatever the reason, this much is increasingly clear: Bitcoin is perhaps the most promising innovation of our lifetimes, one that points to a future of commodified, immutable, and universal information exchange. It could not only revolutionize contracting and titling. It could become a global currency that operates outside the nation state and banking structures as we’ve known them for 500 years. It could break the model of money monopolization that has been in operation for thousands of years.

Technology in Fits and Starts

Those of us in the Bitcoin space, aware of the sheer awesomeness of the technology, can grow impatient, waiting for history to catch up to technical reality. We are daily reminded that technology does not descend on the world on a cloud in its perfected form, ready for use by the consuming public. It arrives in fits and starts, is subjected to trials and improvement, and its applications tested against real world conditions. It passes from hand to hand in succession, with unpredictable winners and losers.

Successful technology does not become socially useful in the laboratory. Market experience, combined with entrepreneurial risk, is the means by which ideas come to make a difference in the world at large.

Bitcoin was not created in the monetary labs of the Federal Reserve or banks or universities. It emerged from a world of cypherpunks posting on private email lists — people not even using their own names.

In that sense, Bitcoin had every disadvantage: No funding, no status, no official endorsements, no big-name boosters. It has faced an ongoing flogging by bigshots. It’s been regulated and suppressed by governments. It’s been hammered constantly by scammers, laughed at by experts, and denounced by moralists for seven straight years.

And yet, even given all of this, it has persisted solely on its own merits. It is the ultimate “antifragile” technology, growing stronger in the face of every challenge.

What will be the main source of Bitcoin’s breakout into the mainstream? Commentary trends suggest it will be international remittances. It is incredible that moving money across national borders is as difficult and expensive as it is. With Bitcoin, you remove almost all time delays and transaction costs. So it is not surprising that this is a huge potential growth area for Bitcoin.

The Economist takes a different direction. It speculates that Bitcoin technology will be mostly useful as a record-keeping device. It is “a machine for creating trust.”

One idea, for example, is to make cheap, tamper-proof public databases — land registries, say, (Honduras and Greece are interested); or registers of the ownership of luxury goods or works of art. Documents can be notarised by embedding information about them into a public blockchain — and you will no longer need a notary to vouch for them.

Financial-services firms are contemplating using blockchains as a record of who owns what instead of having a series of internal ledgers. A trusted private ledger removes the need for reconciling each transaction with a counterparty, it is fast and it minimises errors.
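The notarisation idea above reduces to a commit-and-verify scheme. A minimal sketch, using Python's standard hashlib (the step of actually embedding the digest in a public blockchain is outside this sketch, and the deed text is a made-up example): publish the document's cryptographic hash, and anyone holding the original can later recompute it to prove the document existed unaltered.

```python
import hashlib

def document_digest(data: bytes) -> str:
    """SHA-256 digest of a document; this is what would be recorded on a public ledger."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    """Re-hash the document and compare it against the recorded commitment."""
    return document_digest(data) == published_digest

deed = b"Lot 42, registered to A. Smith"   # hypothetical land-registry entry
digest = document_digest(deed)             # embed this digest in the blockchain
assert verify(deed, digest)                # the original document checks out
assert not verify(b"Lot 42, registered to B. Jones", digest)  # tampering is detected
```

Because the digest, not the document, is published, the scheme also keeps the document's contents private while still making the commitment tamper-evident.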

We Need Bitcoin 

No one knows for sure. What we do know is that we desperately need this as a tool to disintermediate the world, liberating us from the governments that have come to stand between individuals and the realization of their dreams.

In 1974, F.A. Hayek dreamed of a global currency that operated outside governments and central banks. If governments aren’t going to reform money, markets would need to step up and do it themselves. Bitcoin is the most successful experiment in this direction we’ve yet seen.

And that is true whether or not your friends and neighbors are talking about it.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.  Follow on Twitter and Like on Facebook.

New York’s Taxi Cartel Is Collapsing — Now They Want a Bailout! by Jeffrey A. Tucker

An age-old rap against free markets is that they give rise to monopolies that use their power to exploit consumers, crush upstarts, and stifle innovation. It was this perception that led to “trust busting” a century ago, and continues to drive the monopoly-hunting policy at the Federal Trade Commission and the Justice Department.

But if you look around at the real world, you find something different. The actually existing monopolies that do these bad things are created not by markets but by government policy. Think of sectors like education, mail, courts, money, or municipal taxis, and you find a reality that is the opposite of the caricature: public policy creates monopolies while markets bust them.

For generations, economists and some political figures have been trying to bring competition to these sectors, but with limited success. The case of taxis makes the point. There is no way to justify the policies that keep these cartels protected. And yet they persist — or, at least, they have persisted until very recently.

In New York, we are seeing a collapse as inexorable as the fall of the Soviet Union itself. The app economy introduced competition in a surreptitious way. It invited people to sign up to drive people here and there and get paid for it. No more standing in lines on corners or being forced to split fares. You can stay in the coffee shop until you are notified that your car is there.

In less than one year, we’ve seen the astonishing effects. Not only has the price of taxi medallions fallen dramatically from a peak of $1 million, it’s not even clear that there is a market remaining at all for these permits. There hasn’t been a single medallion sale in four months. They are on the verge of becoming scrap metal or collector’s items destined for eBay.

What economists, politicians, lobbyists, writers, and agitators failed to accomplish over many decades, a clever innovation has achieved in just a few years. No one on the planet could have predicted this collapse just five years ago. Now it is a living fact.

Reason TV does a fantastic job of covering what’s going on with taxis in New York. Now if this model can be applied to all other government-created monopolies, we might see genuine progress toward a truly competitive economy. After all, it turns out that the free market is the best anti-monopoly weapon ever developed.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.  Follow on Twitter and Like on Facebook.

Video Game Developers Face the Final Boss: The FDA by Aaron Tao

As I drove to work the other day, I heard a very interesting segment on NPR that featured a startup designing video games to improve cognitive skills and relieve symptoms associated with a myriad of mental health conditions.

One game, Project Evo, has shown good preliminary results in training players to ignore distractions and stay focused on the task at hand:

“We’ve been through eight or nine completed clinical trials, in all cognitive disorders: ADHD, autism, depression,” says Matt Omernick, executive creative director at Akili, the Northern California startup that’s developing the game.

Omernick worked at Lucas Arts for years, making Star Wars games, where players attack their enemies with light sabers. Now, he’s working on Project Evo. It’s a total switch in mission, from dreaming up best-sellers for the commercial market to designing games to treat mental health conditions.

“The qualities of a good video game, things that hook you, what makes the brain — snap — engage and go, could be a perfect vessel for actually delivering medicine,” he says.

In fact, the creators believe their game will be so effective it might one day reduce or replace the drugs kids take for ADHD.

This all sounds very promising.

In recent years, many observers (myself included) have expressed deep concerns that we are living in the “medication generation,” defined by the rapidly increasing number of young people (a trend that seems to have extended to toddlers and infants!) taking psychotropic drugs.

As experts and laypersons continue to debate the long-term effects of these substances, the news of intrepid entrepreneurs creating non-pharmaceutical alternatives to treat mental health problems is definitely a welcome development.

But a formidable final boss stands in the way:

[B]efore they can deliver their game to players, they first have to go through the Food and Drug Administration — the FDA.

The NPR story goes on to detail how navigating the FDA’s bureaucratic labyrinth is akin to the long-grinding campaign required to clear the final dungeon from any Legend of Zelda game. Pharmaceutical companies are intimately familiar with the FDA’s slow and expensive approval process for new drugs, and for this reason, it should come as no surprise that Silicon Valley companies do their best to avoid government regulation. One venture capitalist goes so far as to say, “If it says ‘FDA approval needed’ in the business plan, I myself scream in fear and run away.”

Dynamic, nimble startups are much more in tune with market conditions than the ever-growing regulatory behemoth that is defined by procedure, conformity, and irresponsibility. As a result, conflict between these two worlds is inevitable:

Most startups can bring a new video game to market in six months. Going through the FDA approval process for medical devices could take three or four years — and cost millions of dollars.

In the tech world, where app updates and software patches are part of every company’s daily routine just to keep up with consumer habits, technology can become outdated in the blink of an eye. A regulatory hold on a product can spell a death sentence for any startup seeking to stay ahead of its fierce market competition.

Akili is the latest victim to get caught in the tendrils of the administrative state, and worst of all, by the FDA, which distinguished political economist Robert Higgs has described as “one of the most powerful of federal regulatory agencies, if not the most powerful.” The agency’s awesome authority extends to over twenty-five percent of all consumer goods in the United States, and thus it “routinely makes decisions that seal the fates of millions.”

Despite its perceived image as the nation’s benevolent guardian of health and well-being, the FDA’s actual track record is anything but, and its failures have been extensively documented in a vast economic literature.

The “knowledge problem” has foiled the whims of central planners and social engineers in every setting, and the FDA is not immune. By taking a one-size-fits-all approach to regulatory policy, it fails to take into account the individual preferences, social circumstances, and physiological attributes of the people who compose a diverse society.

For example, people vary widely in their responses to drugs, depending on variables that range from dosage to genetic makeup. In a field as complex as human health, an institution forcing its way on a population is bound to cause problems (for a particularly egregious example, see what happened with the field of nutrition).

The thalidomide tragedy of the 1960s is usually cited to explain why we need a centralized regulatory agency staffed by altruistic public servants to keep the market from being flooded with toxins, snake oils, and other harmful substances. However, this needs to be weighed against the costs of withholding beneficial products.

For example, the FDA’s delay of beta blockers, which were widely available in Europe to reduce heart attacks, was estimated to have cost tens of thousands of lives. Despite this infamous episode and other repeated failures, the agency cannot overcome the institutional incentives it faces as a government bureaucracy. These factors strongly skew its officials toward avoiding risk and the blame that comes with visible harm. Here’s how the late Milton Friedman summarized the dilemma with his usual wit and eloquence:

Put yourself in the position of an FDA bureaucrat considering whether to approve a new, proposed drug. There are two kinds of mistakes you can make from the point of view of the public interest. You can make the mistake of approving a drug that turns out to have very harmful side effects. That’s one mistake. That will harm the public. Or you can make the mistake of not approving a drug that would have very beneficial effects. That’s also harmful to the public.

If you’re such a bureaucrat, what’s going to be the effect on you of those two mistakes? If you make a mistake and approve a product that has harmful side effects, you are a devil incarnate. Your misdeed will be spread on the front page of every newspaper. Your name will be mud. You will get the blame. If you fail to approve a drug that might save lives, the people who would object to that are mostly going to be dead. You’re not going to hear from them.

Critics of America’s dysfunctional healthcare system have pointed out the significant role of third-party spending in driving up prices, and how federal and state regulations have created perverse incentives and suppressed the functioning of normal market forces.

In regard to government restrictions on the supply of medical goods, the FDA deserves special blame for driving up the costs of drugs, slowing innovation, and denying treatment to the terminally ill, all while demonstrating no competency in product safety.

Going back to the NPR story, a Pfizer representative was quoted as saying that “game designers should go through the same FDA tests and trials as drug manufacturers.”

Those familiar with the well-known phenomenon of regulatory capture and the basics of public choice theory should not be surprised by this attitude. Existing industries, with their legions of lobbyists, come to dominate the regulatory apparatus and learn to manipulate the system to their advantage, at the expense of new entrants.

Akili and other startups hoping to challenge the status quo will have to run the gauntlet set up by the “complex leviathan of interdependent cartels” that makes up the American healthcare system. I can only wish them the best and hope that Schumpeterian creative destruction eventually sweeps through the whole field of medicine.

Abolishing the FDA and eliminating its too-often abused power to withhold innovative medical treatments from patients and providers would be one step toward genuine healthcare reform.

A version of this post first appeared at The Beacon.

Aaron Tao

Aaron Tao is the Marketing Coordinator and Assistant Editor of The Beacon at the Independent Institute. Follow him on Twitter here.

Environmental Doom-mongering and the Myth of Vanishing Resources by Chelsea German

Media outlets ranging from Newsweek and Time to National Geographic and even the Weather Channel all recently ran articles on the so-called “Overshoot Day,” which is defined by its official website as the day of the year

When humanity’s annual demand for the goods and services that our land and seas can provide — fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption — exceeds what Earth’s ecosystems can renew in a year.

This year, the world allegedly reached Overshoot Day on August 13th. Overshoot Day’s proponents claim that, having used up our ecological “budget” for the year and entered into “deficit spending,” all consumption after August 13th is unsustainable.

Let’s look at the data concerning resources that, according to Overshoot Day’s definition, we are consuming unsustainably. (We’ll leave aside carbon dioxide absorption — as that issue is more complex — and focus on all the other resources.)

Fruits and vegetables

As millions of people have risen out of extreme poverty and starvation over the past few decades, the world is consuming more fruits and vegetables than before. We are also producing more fruits and vegetables per person than before. That is partly because of increasing yields, which allow us to extract more food from less land. Consider vegetable yields:

Meat and fish

As people in developing countries grow richer, they consume more protein (i.e., meat). The supply of meat and fish per person is rising to meet the increased demand, just as with fruits and vegetables. Overall dietary supply adequacy is, therefore, increasing.

Wood

It is true that the world is losing forest area, but there is cause for optimism. The United States has more forest area today than it did in 1990.

As Ronald Bailey says in his new book The End of Doom, “In fact, except in the cases of India and Brazil, globally the forests of the world have increased by about 2 percent since 1990.”

As the people of India and Brazil grow wealthier and as new forest-sparing technologies spread, those countries will likely follow suit. To quote Jesse H. Ausubel:

Fortunately, the twentieth century witnessed the start of a “Great Restoration” of the world’s forests. Efficient farmers and foresters are learning to spare forestland by growing more food and fiber in ever-smaller areas. Meanwhile, increased use of metals, plastics, and electricity has eased the need for timber. And recycling has cut the amount of virgin wood pulped into paper.

Although the size and wealth of the human population has shot up, the area of farm and forestland that must be dedicated to feed, heat, and house this population is shrinking. Slowly, trees can return to the liberated land.

Cotton

Cotton yields are also increasing — as is the case with so many other crops. Not only does this mean that we will not “run out” of cotton (as the Overshoot Day proponents might have you believe), but it also means consumers can buy cheaper clothing.

Please consider the graph below, showing U.S. cotton yields rising and cotton prices falling.

While it is true that humankind is consuming more, innovations such as GMOs and synthetic fertilizers are also allowing us to produce more. Predictions of natural resource depletion are not new.

Consider the famous bet between the environmentalist Paul Ehrlich and economist Julian Simon: Ehrlich bet that the prices of five essential metals would rise as the metals became scarcer, exhausted by the needs of a growing population. Simon bet that human ingenuity would rise to the challenge of growing demand, and that the metals would decrease in price over time. Simon and human ingenuity won in the end. (Later, the prices of many metals and minerals did increase, as rapidly developing countries drove up demand, but those prices are starting to come back down again).

To date, humankind has never exhausted a single natural resource. To learn more about why predictions of doom are often exaggerated, consider watching Cato’s recent book forum, The End of Doom.

A version of this post first appeared at Cato.org.

Chelsea German

Chelsea German works at the Cato Institute as a Researcher and Managing Editor of HumanProgress.org.


The War on Air Conditioning Heats Up: Is Climate Control Immoral? by Sarah Skwire

It started with the pope. In his recent encyclical, Laudato Si’, he singled out air conditioning as a particularly good example of wasteful habits and excessive consumption that overcome our better natures:

People may well have a growing ecological sensitivity but it has not succeeded in changing their harmful habits of consumption which, rather than decreasing, appear to be growing all the more. A simple example is the increasing use and power of air-conditioning.

Now, it seems to be open season on air conditioning. From a raging Facebook debate over an article that claims that air conditioning is an oppressive tool of the patriarchy to an article in the Washington Post that calls the American use of air conditioning an “addiction” and compares it unfavorably to the European willingness to sweat through the heat of summer, air conditioning is under attack. So I want to defend it.

Understand that when I defend air conditioning, I do so as something of a reluctant proponent. I grew up in the Midwest, and I have always loved sitting on the screened-in porch, rocking on the porch swing, drinking a glass of something cold. I worked in Key West during the summer after my sophomore year of college, lived in an apartment with no air conditioning, and discovered the enormous value of ceiling fans. A lazy, hot summer day can be a real pleasure.

However, let’s not kid ourselves. There were frequent nights in my childhood when it was just too hot to sleep, and the entire family would hunker down in the one air-conditioned room of the house — my father’s attic study — to cool off at night. When we moved from that house to a place that had central air, none of us complained.

And after my recent article on home canning, my friend Kathryn wrote to say,

When I was growing up in the Deep South, everybody I knew had a garden, shelled beans and peas, and canned. It could have been an Olympic event. What I remember most — besides how good the food was — is how hot it was, all those hours spent over huge pots of boiling something or other on the stove in a house with no air conditioning.

There’s a lot to be said for being able to cook in comfort and to enjoy the screened-in porch by choice rather than necessity. Making your family more comfortable is one of the great advantages of an increasingly wealthy society, after all.

So when I read that the US Department of Energy says that you can save about 11 percent on your electric bill by raising the thermostat from 72 to 77 degrees, mostly I want to invite the Department of Energy to come over to my 1929 bungalow and see if they can get any sleep in my refinished attic bedroom when the thermostat is set to 77 degrees, but the room temperature is a cozy 80-something.

And when I read Petula Dvorak arguing that air conditioning is a tool of sexism because “all these women [are freezing] who actually dress for the season — linens, sundresses, flowy silk shirts, short-sleeve tops — changing their wardrobes to fit the sweltering temperatures around them. … And then there are the men, stalwart in their business armor, manipulating their environment for their own comfort, heaven forbid they make any adjustments in what they wear,” mostly I want to ask her if she’s read the dress codes for most professional offices. In my office, women can wear sleeveless tops and open-toed shoes in the summer. Men have to wear a jacket and tie. Air conditioning isn’t sexist. Modern dress codes very well might be.

But arguments based on nostalgia or gender are mostly easily dismissed. Moral arguments, like those made by Pope Francis or by those who are concerned about the environmental and energy impact of air conditioning, are more serious and require real attention.

Is it immoral to use air conditioning?

Pope Francis certainly suggests it is. And the article in the Washington Post that compares US and European air conditioning use agrees, suggesting that the United States prefers the short-term benefits of air conditioning over the long-term dangers of potential global warming — and that our air conditioning use “will make it harder for the US to ask other countries to continue to abstain from using it to save energy.” We are meant to be deeply concerned about the global environmental impact as countries like India, Indonesia, and Brazil become wealthy enough to afford widespread air conditioning. We are meant to set a good example.

But two months before the Washington Post worried that the United States has made it difficult to persuade India not to use air conditioning, 2,500 Indians died in one of the worst heat waves in the country’s history. This June, 780 people died in a four-day heat wave in Karachi, Pakistan. And in 2003, a heat wave that spanned Europe killed 70,000. Meanwhile, in the United States, heat causes an average of only 618 deaths per year, and the more than 5,000 North American deaths in the un-air-conditioned days of 1936 remain a grim outlier.

Air conditioning is not immoral. Possessing a technology that can prevent mortality numbers like these and not using it? That’s immoral.

Air conditioning is, for most of us, a small summertime luxury. For others, it is a life-saving necessity. I am sure that it has environmental effects. Benefits always have costs, and there’s no such thing as a free climate-controlled lunch. But rather than addressing those costs by trying to limit the use of air conditioning and by insisting that developing nations not use the technologies that rocketed the developed world to success, perhaps we should be focusing on innovating new kinds of air conditioning that can keep us cool at a lesser cost.

I bet the kids who will invent that technology have already been born. I pray that they do not die in a heat wave before they can share it with us.


Sarah Skwire

Sarah Skwire is a senior fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Clinton’s Startup Tax Will Crush New Businesses by Dan Gelernter

Hillary Clinton has announced that she will, if elected, raise the top capital-gains tax rate to match the highest income-tax bracket. She hopes to promote long-term investments by penalizing short-term ones with a tax rate that gets lower the longer an investment is held, reaching the current 20% rate only after six years.

This, Ms. Clinton says, would allow a CEO to focus on the company’s true interests rather than just on making the next quarter’s numbers. It is, unfortunately, exactly the sort of plan you would expect from someone who has never started a company — and who doesn’t seem to know anyone who has.

The CEO of a startup is unlike the CEO of an established business. He is not the head of a chain of command: he is the spokesman or agent of a few colleagues, entrusted for the moment to represent them. The startup CEO has one primary job, which is raising money. It is the hardest thing a young company has to do — and it is an unending process.

Most germinal startups never raise any money at all. The ones that get seed funding are already breathing rarefied air and can afford perhaps a day of celebration before they start pursuing the next round.

The picture is especially tough for tech startups. A startup that builds software doesn’t have any machinery or physical supplies to auction off if the company fails. This means that banks won’t make the kind of secured business loans that small companies traditionally get.

As a result, tech startups are wholly reliant on a relatively small number of investors who are looking for something more exciting than the establishment choices and are willing to take a big gamble in the hope of a big, short-term payoff. Though Ms. Clinton’s proposal would only affect those in the top income bracket, she may be surprised to learn that those are the only people who can afford to make such investments.

Professional investors think in terms of risk: they balance the likelihood of a startup’s failure against the potential payoff of its success. Increasing the tax rate reduces the effective payoff, which increases risk. Investors can lower that risk by reducing the valuation at which they are willing to invest, which means they take a larger share of the company — a straightforward transfer of risk from investors to entrepreneurs.

Ms. Clinton’s tax therefore will not be borne by wealthy investors: it comes out of the entrepreneur’s payday. The increased tax rate means a risk-equivalent decrease in the percentage of the company the entrepreneur gets to keep. And that’s just the best-case scenario.

The other option is that the tax doesn’t get paid at all, because the investor decides the increased risk isn’t worth it — the startup can’t attract funding and dies.

That sounds melodramatic, but it is no exaggeration. A startup company never has more offers than it needs; it never raises money with time to spare. Even a slight change in the risk-return balance — say, the 3.8% surtax that Obamacare quietly laid on top of the current capital-gains rate — kills companies, as investors and entrepreneurs see the potential upside finally shaved past the tipping point.
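To make the risk-return arithmetic concrete, here is a minimal sketch in Python. Every number in it is hypothetical and chosen purely for illustration (the function name, the $1M round, the $20M exit, the 1-in-5 odds, and the target return are all my assumptions, not figures from the article); it simply solves the expected-value equation for the equity stake an investor must demand at a given capital-gains rate.

```python
# A minimal sketch of the risk-return logic described above.
# All figures are hypothetical, chosen only for illustration.

def required_stake(investment, exit_value, p_success, cap_gains_rate, target_return):
    """Equity share an investor must demand so the expected after-tax
    profit on a risky startup bet meets a target return.

    Model: with probability p_success the company exits at exit_value
    and the investor's gain is taxed at cap_gains_rate; otherwise the
    entire investment is lost.
    """
    # Solve  p * (stake*exit - investment) * (1 - t) - (1 - p) * investment
    #        = target_return * investment   for stake.
    after_tax_gain = (target_return + (1 - p_success)) * investment / p_success
    pretax_gain = after_tax_gain / (1 - cap_gains_rate)
    return (pretax_gain + investment) / exit_value

# Hypothetical deal: $1M seed round, $20M exit if it works, 1-in-5 odds
# of success, and an investor who wants a 100% expected return for that risk.
stake_low_tax = required_stake(1e6, 20e6, 0.2, 0.20, 1.0)    # 20% long-term rate
stake_high_tax = required_stake(1e6, 20e6, 0.2, 0.396, 1.0)  # top-bracket rate
print(f"at 20% tax: {stake_low_tax:.1%}, at 39.6% tax: {stake_high_tax:.1%}")
```

Under these made-up numbers, raising the rate from 20% to the top bracket pushes the stake the investor needs from roughly three-fifths of the company to nearly four-fifths — the transfer of risk from investor to entrepreneur that the argument above describes.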

A tech startup has short-term potential. That is a major part of the attraction to investors, and that makes Ms. Clinton’s proposal especially damaging. In the tech world, we all hope we’ll be the next Facebook or Twitter, but you can’t pitch that to an investor. A good tech startup takes a small, simple idea and implements it beautifully.

The most direct success scenario is an acquisition by a larger company. In the app world — and this is the upside to not having physical limitations on distribution — the timescale is remarkably accelerated. A recent benchmark example was Mailbox, purchased by Dropbox just two months after it launched.

Giving investors an incentive not to sell will hurt entrepreneurs yet again, postponing the day their sweat equity finally has tangible value and encouraging decisions that make tax sense rather than business sense.

If Hillary Clinton really wants to help entrepreneurs, she should talk to some and find out what they actually want. A lower capital-gains tax — or no capital-gains tax — would be an excellent start.

Dan Gelernter

Dan Gelernter is CEO of the technology startup Dittach.

Who Is Doing More for Affordable Education: Politicians or Innovators? by Bryan Jinks

With a current outstanding student loan debt of $1.3 trillion, debt-free education is poised to be a major issue leading up to the 2016 presidential election.

Presidential candidate Bernie Sanders has come forth with his plan for tuition-free higher education.

Senator Elizabeth Warren supports debt-free education, which goes even further by guaranteeing that students don’t take on debt to pay other expenses incurred while receiving an education.

Democratic Party front-runner Hillary Clinton is expected to propose a plan to reduce student loan debt at some point. And don’t forget President Obama’s proposal to provide two years of community college to all students tuition-free.

While all of these plans would certainly increase access to higher education, they would also be expensive. President Obama’s relatively modest community college plan would cost $60 billion over the next decade. What makes this an even worse idea is that all of that taxpayer money wouldn’t solve the most important problems currently facing higher education.

Shifting the costs completely to taxpayers doesn’t actually reduce the costs. It also doesn’t increase the quality of education in a system that has high drop-out rates and where a lot of graduates end up in low-paying jobs that don’t use their degree. Among first-time college students who enrolled in a community college in the fall of 2008, fewer than 40% earned a credential from either a two-year or four-year institution within six years.

Whatever the other social or spiritual benefits of attending college may be, they don’t justify wasting so much time and money without seeing much improvement in wages or job prospects.

Proponents of debt-free college argue that these programs are worth the cost because a more educated workforce will boost the economy. But these programs would push more marginal students into college without any regard for how prepared they are, how likely they are to graduate, or how interested they are in getting a degree. If even more of these students enter college, keeping the low completion rates from falling even further would be a challenge.

All of these plans would just make sure that everyone would have access to the mediocre product that higher education currently is. Just as the purpose of Obamacare was to make sure that every American had a health insurance card in their wallet, the purpose of debt-free education is to make sure that every American has a student ID card too — whether it means anything or not.

But there are changes coming in higher education that can actually solve some of these problems.

The Internet is making education much cheaper. While Open Online Courses have existed for more than a decade, there are a growing number of places to find educational materials online. Udemy is an online marketplace that allows anyone to create their own course and sell it or give it away. Saylor Academy and University of the People both have online models that offer college credit with free tuition and relatively low examination fees.

Udacity offers nanodegrees that can be completed in 6-12 months. The online curriculum is made in partnership with technology companies to give students exactly the skills that hiring managers are looking for. And there are many more businesses and non-profits offering new ways to learn that are cheaper, faster, and more able to keep up with the ever-changing economy than traditional universities.

All of these innovations are happening in response to the rising costs and poor outcomes that have become typical of formal education. New educational models will keep developing, offering solutions that policymakers can’t provide.

Some of these options are free; some aren’t. Each has its own curriculum, and some provide more tangible credentials than others. There isn’t one definitive answer as to how someone should go about receiving an education. But each of these innovations provides a small part of the answer to the current problems with higher education.

Change for the better is coming to higher education. Just don’t expect it to come from Washington.

Bryan Jinks

Bryan Jinks is a freelance writer based out of Cleveland, Ohio.

Is Politics Obsolete? How People Outpace Politicians by Max Borders and Jeffrey A. Tucker

Hillary Clinton talks of cracking down on the gig economy. Donald Trump speaks of telling American corporations where they can and can’t do business abroad. Bernie Sanders says we have too many deodorant choices. They all speak about immigrants as if it were 1863.

What the heck are these people talking about?

More and more, that’s the response many people have to current-day political speeches and rhetoric. It’s a hotly contested election, somewhat like 2008, but this time around, public engagement is low, reports Pew.

That’s no surprise, really. Whether it’s the leftists, the rightists, or everyone in between, all of these politicians seem to be blathering about a world gone by — one that has little to do with the 21st century. If they’re not tapping into people’s baser instincts of fear and nativism, they’re dusting off 20th-century talking points about creating “good jobs.”

Maybe there was a time when the political culture seemed to keep up with the pace of innovation. If so, those times are long gone. The rhetoric of electoral politics is exposing the great rift in civic life.

The tools we use every day, the technologies we love, the way we engage each other, the means by which our lives are improving are all consequences of innovation, markets, community, and globalization — that is, of the interactions of free people. Not of politics. And not of the systems politics creates.

The political election is a tired old ritual in which we send our hopes and dreams away to distant capitals. Why do we outsource them to politicians, lobbyists, and bureaucrats: people who are trapped in a system that rewards the worst in people? What’s left of governance is logrolling, spectacle, and unwanted interference in the lives of everyone else.

Politicians seem more concerned with putting the genie of innovation and entrepreneurship back in the bottle than doing anything meaningful. After the election, we try our best to ignore them and get on with life.


In 2012, US voters reelected Barack Obama, and now we’re gearing up to elect someone else. Candidates will talk about their visions and their wonderful plans for the country. But in the last three years, virtually none of the incredible, beautiful upheaval we’ve seen has had anything to do with the presidency or with any one politician’s plans.

In fact, when you think about what government has done for us in recent years, only one new program comes to mind: Obamacare. Opinions vary on whether that program has been deeply disappointing or an unmitigated disaster.

Now, take a step back and observe the evolution of commercial society and how it is bringing us unprecedented bounty. The digital sector of emergent, market-generated, people-driven, technology-fueled innovation is fulfilling human aspirations and spreading useful services to people in all walks of life. National borders seem ever more arbitrary. Surprises await us around every corner. Our political systems can claim credit for none of it.

And yet, we are once again being asked to turn to politicians to drive progress.

Consider how much our lives and technologies have changed since the last presidential election. Smartphone ownership has gone from 300 million to 2 billion, meaning that most of the population of the developed world — and large parts of the rest — now have access to a wireless supercomputer in their pockets. As a result, we are more in touch than ever.

There are now dozens of ways for anyone to keep in contact with anyone else through text messaging and video, and most of the services are free. Transportation in cities has fundamentally changed due to ridesharing and app-based systems that are outcompeting municipal taxis. Traditional travel lodging has been disrupted through mobile applications that turn every empty room into a hotel, and finding permanent lodging is easier than ever. You can find the ratings for any service or establishment instantly with a click or a tap, long before you purchase. You can feasibly shop for and buy a house without ever having stepped inside of it.

Cryptocurrency is becoming a viable alternative to national monies, and payment systems on distributed networks are being customized for peer-to-peer exchanges of property titles.

The mass distribution and availability of mobile applications with maps means that you are never lost, and, moreover, that you can be intensely aware of everything around you, wherever you are or wherever you are planning to be. Extended families that are spread out over large geographic regions can stay constantly in touch, chatting and playing games.

The way we help our neighbors and communities is improving. We can contribute to charitable causes with just a click. We are closer to our neighbors and their needs — whether it’s a missing cat, a call for a handyman, or childcare for Saturday night. We can be on the lookout after a break-in and share video of the perpetrators instantly.

The way we consume music has fundamentally changed. We once bought CDs. Then we downloaded particular tracks and albums. With Internet everywhere, we now stream a seemingly endless variety of genres. The switch between classical and indie rock requires only a touch. And it’s not just new music we can access, but vast archives and recreations of music dating to antiquity. Instantly.

Software packages that once cost thousands are now low-cost downloadable apps. Many of us live in the cloud now, so that no one’s life is ruined by a computer crash. Lost hardware can be found with built-in tracers — even stealing computers is harder than ever.

Where we work no longer matters as much. 4G LTE means a powerful Internet connection wherever you are, and WiFi on airlines means staying in touch even while above the clouds. Online document signing means total portability and the end of the physical world for most business transactions. You can share almost anything — whether grocery lists or whole writing projects — with anyone and work in real time. More people than ever work from home because they can.

News is now crowdsourced through Twitter and Facebook — or through mostly silly sites like BuzzFeed. There are thousands of competitors, so that we can know what we want to know wherever we are. Once there was only “national news”; now a news event has to be pretty epic to qualify, and much of the news that we are interested in never even makes old-line newspapers.

Edward Snowden revealed ubiquitous surveillance, escaped prosecution, and now, thanks to technology, has been on a worldwide speaking tour, becoming the globe’s most famous public intellectual. This is despite his having been censored and effectively exiled by the world’s biggest and most powerful state. He has a great story to tell, and that story is more powerful than any of the big shots who want him to shut up.

Pot has been effectively legalized in many American cities, and the temperature on the war against it has dropped dramatically. When dispensaries are raided, the news flies all over the Internet within minutes, creating outrage and bringing the heat down on the one-time masters of the universe. There is now a political risk to participating in the war on pot — something unthinkable even 10 years ago. And as police continue to abuse their power, citizens are waiting with cameras.

Oil prices have collapsed, revealing the fallacy of peak oil. This happened despite pressure in the opposite direction from every special interest, from environmentalists to the oil industry itself. The reason was again technological. We discovered better and cheaper ways of drilling, and, in so doing, exposed vastly more resources than anyone thought accessible.

At the very time when oil and gas seemed untouchable, we suddenly saw electric cars becoming viable options. This was not due to government mandates — regulators tried those for years — but due to some serious innovation on the part of one remarkable company. It’s not even the subsidies, such as they are, that are making the difference; it’s the fine-tuning of the machine itself. Tesla even took it a step further and released its patents into the commons, allowing innovation to spread at a market-based pace.

We are now printing houses in one day, vaping instead of smoking, legally purchasing pharmaceuticals abroad, using drones to deliver consumer products, and enjoying one-day delivery of just about everything.

In the last four years, the ebook became a mass consumer item, outselling the physical book and readable on devices within the budget of just about everyone. And despite attempts to keep books offline, just about anything is now available for download, putting all the world’s great literature, in all major languages, at our fingertips.


And speaking of languages, we now have instant access to translation programs that allow us to email and even text with anyone in a way he or she can understand regardless of language. It’s an awesome thing to consider that this final barrier to universal harmony, once seen as insuperable, is in the process of melting away.

These are all ways in which the world has been improved through markets, creativity, and free association. And yet, here we go again, playing “let’s pretend” and electing leaders under the old-fashioned presumption that it is politics that improves the world and drives history forward.

Look around: progress is everywhere. And it is not because we are electing the “right people.” Progress occurs despite politics and politicians, not because of them.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World. Follow on Twitter and Like on Facebook.

The Politics of Nostalgia: Why Does the Left Want to Take Us Backwards? by Steven Horwitz

One of the more curious developments in the last couple of years has been left-wing nostalgia for the economy of the 1950s.

Don’t political progressives usually portray themselves as being on “the right side of history” — representing, as the term suggests, the march of “progress”?

Not when it comes to the economy.

Paul Krugman has written a number of columns over the last decade about how much better things were in the middle of the 20th century. More recently, we have presidential candidate Hillary Clinton making a major economic policy statement in which she longs for a time like the 1950s when workers had the structure of the corporate world and unions through which to lobby and negotiate for pay and benefits, rather than the so-called “gig” economy of so many modern freelance employees, such as Uber drivers. “This on-demand or so-called gig economy is creating exciting opportunities and unleashing innovation,” Clinton said, “but it’s also raising hard questions about workplace protection and what a good job will look like in the future.”

To protect Americans from the uncertain future, Clinton promised she would “crack down on bosses that exploit employees by misclassifying them as contractors or even steal their wages.”

In an economy where technology has enabled people to have a great deal more flexibility with their workdays and independence with their work choices, it’s now the “progressives” who are complaining about the economic organizations that have been agents of more efficient resource use, expanded choice for workers, and cheaper goods for consumers.

In short, the progressives are complaining about what would otherwise be called progress.

And let’s not let the conservatives off the hook here either, as they demonstrate their own nostalgia for an economy of the past, with cheers for Donald Trump’s anti-immigrant and anti-trade tirades and for his general love of dirigiste policies. Immigration and trade have also expanded the range of work available, lifted millions out of poverty through better-paying jobs in the United States, and enriched the rest of us through more affordable goods and services.

What’s particularly amusing about both sides, but especially the progressives, is how wrong they are about life for the average American being better back in the 1950s, including how much more secure they were. In a terrific paper for the Cato Institute, Brink Lindsey effectively demolished Krugman’s nostalgia with some actual data about the economy of the 1950s. He pointed out that the increase in income inequality since then noted by so many progressives is largely overstated, and that the economy they are nostalgic for is one that restricted competition in a variety of ways, mostly to the benefit of the politically influential. Limits on immigration and trade, in particular, prevented the 1950s economy from achieving the reductions in cost and increase in variety that we associate with our economy today.
It is more than a little ironic that modern progressives are nostalgic for the very economy that GOP front-runner Donald Trump would appear to want to create.

As I argued in a recent paper, when we look at the cost of living in terms of the work hours required to purchase basic household items, most goods and services are far cheaper today than in the 1950s. The equivalents of those items today are also of higher quality: think about the typical household TV or refrigerator in 1955 versus 2015. These substantial decreases in cost have had another effect. They have made these goods increasingly accessible to the poorest of Americans. American households below the poverty line are far more likely to have a whole variety of items in their homes than did poor families in the 1950s. In fact, they are more likely to have those things in their houses than was a middle-class American family in the 1970s.

When you also consider the number of goods that weren’t even available in the 1970s or 1950s, from technology like computers and smartphones, to innovative medicines and medical procedures, to various forms of entertainment, to a whole number of inventions that have made us safer, healthier, and longer-lived, it’s difficult to argue that things were better “back then.”

The effect of all of this change driven by increased competition is that our world is one in which the middle class and poor are better off, and the gap between poor and rich as measured by what they consume has narrowed substantially. Does anyone really want to go back to the stagnant, conformist, more poverty-stricken world of the 1950s?

Politicians do. And here’s one reason why: back then, it was easier to influence and control people’s economic lives. Progressives with a desire to shape their ideal economy aren’t happy with the world of freelancers, Uber, and independent contractors.

The economy of the 1950s and 1970s had organizational focal points where politicians could exercise leverage and thereby influence the lives of large numbers of citizens.

I’m thinking here of the auto companies in the 1950s, the oil companies in the 1970s, and any number of industries where large firms were created by restrictions on domestic and foreign competition, which were easy points of contact for politicians with a desire to control, and which had corporate leaders who were happy to reap the benefits of corporatism.

In a world of Uber, Airbnb, and all the rest, there are no central points of leverage. Facebook produces no content, Uber owns no cars, Alibaba owns no inventory. More important: Uber has no employees, only contractors. If you are Clinton or Trump, or even Krugman, there’s nowhere to go to exercise your power or to drum up support from workers in one place. There’s nothing to grab hold of. There are just people trading peacefully with each other, enriching everyone in the process.

The real irony, once again, is that what this decentralized economy has produced is more freedom and more flexibility for more workers. The same progressives who, in the 1960s, railed against the conformism of the 1950s are now nostalgic for what their predecessors rejected, and are rejecting exactly the "do your own thing" ethos their 1960s heroes fought for.

The “gig” economy works for people who want options and who want flexible hours so they can pursue a calling the rest of the day. Or perhaps they want to spend a few hours a week driving an Uber because Obamacare caused their employers to cut their hours at their other job.

Whatever the reason, this economy offers the freedom and flexibility for workers, and the benefits for consumers, that represent the progress progressives should love. That progressives (and conservatives) with power are fighting against it tells you that they are much more concerned with power than with progress.

Nostalgia is a dangerous basis for making policy, whether left or right.


Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

Paul Krugman Is Clueless about Bitcoin by Max Borders

In this video clip, Paul Krugman demonstrates once again that prizes don’t make you an expert on everything. Indeed, his poor prognostications happen so frequently that one wonders if Krugman is an expert on anything. I don’t say that to be unpleasant. If you’re going on TV and enjoying a lavish lifestyle by pretending to know what you’re talking about, shouldn’t you be held to a higher standard?

Let’s pass over for a moment how woefully wrong Krugman was about the Internet. What about the internet of money?

Krugman first says: “At this point bitcoin is not looking too good.”

It is true that investment often follows the Gartner hype cycle. So bitcoin has indeed fallen from great heights and is probably just now making its ascent out of the “trough of disillusionment.”

But so what? There is nothing inherently wrong with bitcoin. In fact, some very savvy, patient people are building an unbelievable set of technologies within and around the blockchain. And if you believe Gartner, most really interesting tech goes through this cycle.

Let’s look back at the Internet and the dotcom bubble and bust of the early 2000s.

Do we conclude that, because the Internet wasn’t “looking so good” in 2002, TCP/IP was not viable? That would have been a very short-sighted thing to say, particularly about a system that is a robust “dumb network” like the Internet.

Bitcoin is also a dumb network. But don’t let the “dumb” part fool you, says bitcoin expert Andreas Antonopoulos. “So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge.”

Then Krugman goes on to ask, “Why does a piece of paper with a dead president on it have value?” Answering his own question, he says, “Because other people think it has value.”

And this is not untrue. But the problem with this line of thinking is that, subjective value notwithstanding, the value of money is also contingent. You might say the value of fiat money is too contingent — especially upon political whims, upon the limited knowledge of the folks at the Federal Reserve, and upon the fact that its unit of account is no longer anything scarce, such as gold.

By contrast, bitcoin has a standard of scarcity programmed into it. So bitcoin is in limited supply, thanks to a sophisticated algorithm.

In a fully decentralized monetary system, there is no central authority that regulates the monetary base. Instead, currency is created by the nodes of a peer-to-peer network. The bitcoin generation algorithm defines, in advance, how currency can be created and at what rate. Any currency that is generated by a malicious user that does not follow the rules will be rejected by the network and thus is worthless. (To learn more about this algorithm, visit “Currency with a Finite Supply.”)
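The finite-supply rule described above can be sketched in a few lines of Python. This is a simplification — real nodes count integer satoshis and enforce the full consensus rules — but the arithmetic is the same: the block subsidy starts at 50 BTC and halves every 210,000 blocks.

```python
# Sketch of bitcoin's supply schedule: the block subsidy starts at 50 BTC
# and halves every 210,000 blocks, so total issuance converges on ~21 million.
def total_supply():
    subsidy = 50.0              # initial block reward, in BTC
    blocks_per_halving = 210_000
    total = 0.0
    while subsidy >= 1e-8:      # 1 satoshi, the smallest unit
        total += subsidy * blocks_per_halving
        subsidy /= 2            # the "halving"
    return total

print(round(total_supply()))    # ~21,000,000
```

The geometric halving is why the supply converges on roughly 21 million coins rather than growing without bound — no central authority decides the rate; the schedule is fixed in advance.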

Perhaps you don’t trust this algorithm. Certainly Paul Krugman does not. That’s okay, because digital currencies compete, so you can find one you do trust. One cryptocurrency is backed by gold and, funnily enough, is called “the Hayek,” after the Nobel laureate who wrote about competing private currencies.

Now, what shall we make of the magic of the dollar? Krugman says it is “the fact that you can use it to pay taxes.” That’s sort of like saying that the Internet works because of eFile. Let’s just assume Krugman was kidding.

But Krugman thinks, without irony, that bitcoin “levitates.” That is to say, he’s okay with the idea that the dollar has value because other people value it, but he’s not okay with the idea that bitcoin has value because other people value it, which is a rather curious thing to say in the same two-minute stretch. He goes on to argue that bitcoin is built on libertarian ideology, and that it doesn’t do anything that digitizing the dollar hasn’t done.

And that’s when we realize that Krugman doesn’t have any earthly clue about bitcoin.

But Freeman columnist Andreas Antonopoulos does:

Open-source currencies have another layer that multiplies these underlying effects: the currency itself. Not only is the investment in infrastructure and innovation shared by all, but the shared benefit may also manifest in increased value for the common currency.

Currency is the quintessential shared good, because its value correlates strongly to the economic activity that it enables. In simple terms, a currency is valuable because many people use it, and the more who use it, the more valuable it becomes.

Unlike national currencies, which are generally restricted to use within a country’s borders, digital currencies like bitcoin are global and can therefore be readily adopted and used by almost any user who is part of the networked global society.

What Krugman also fails to appreciate is that bitcoin and the bitcoin network are disintermediated. That’s a fancy way of saying it’s direct and peer-to-peer. This elimination of the mediating institutions — banks, governments, and credit card companies — means bitcoin transactions are far, far cheaper. But that also means these institutions could be far less powerful over time. And that’s precisely why it’s being adopted most quickly by the world’s poorest people and countries with hyperinflation.

Hey, look, I understand. In many ways, Krugman is a twentieth-century mind. Keynesian. Unhealthy obsession with aggregates and dirigisme. He believes in big central solutions to problems that robust, decentralized systems are far better equipped to tackle. And he’s not terribly plugged into tech innovation. In fact, here’s that well-played Internet quote in case you forgot:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law” — which states that the number of potential connections in a network is proportional to the square of the number of participants — becomes apparent: most people have nothing to say to each other!

By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.
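Whatever one makes of the prediction, the formula Krugman cites is simple arithmetic: n participants can form n(n − 1)/2 potential pairwise connections, which grows roughly as the square of n. A minimal illustration:

```python
# Metcalfe's law: potential pairwise connections in an n-participant network
# grow quadratically, n * (n - 1) / 2, which is why a network's usefulness
# can compound as adoption spreads.
def potential_connections(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_connections(n))
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```

Krugman's objection was not to the arithmetic but to the assumption that every potential connection is valuable — which is exactly where the Internet, and arguably bitcoin, proved him wrong.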

To grok the power of decentralization, you have to have a twenty-first-century mind.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

The Hidden Costs of Tenure by Jonathan Anomaly

Conversations I’ve had with non-academics about university employment practices usually evoke surprise and skepticism. Most people have a hard time understanding the point of a system that makes it so difficult to dismiss faculty members who are not especially good at their job.

The recent motion in Wisconsin to remove state laws that protect teacher tenure has re-ignited the debate over providing special protections to teachers—protections that don’t apply to journalists, gardeners, or bloggers who are occasionally fired for expressing unpopular views.

In some ways, regulations that determine how university professors are hired and fired in the United States are analogous to the restrictive labor laws in Spain and Greece. By raising the cost of firing bad workers, they increase the relative cost of hiring good ones.

The consequence is persistent unemployment and low productivity in Greece and Spain. The consequences of our tenure system are the proliferation of poor teaching and arcane research in university departments that are immunized from market forces.

Those who pursue a career as a university professor are mostly incentivized to produce specialized work aimed at impressing people who may end up on their promotion committee rather than a wider audience.

In the sciences, this may be a good thing, since one’s peers are likely doing narrow but important work that uncovers the basic structure of the universe. But in the humanities and social sciences, it often leads to the pursuit of bizarre research that is inscrutable to outsiders and of little value even to scholars in related fields.

Another hidden effect of the tenure system is that it often sifts out the very people it is supposed to protect: those with unusual or unpopular ideas. The original justification for tenure was to protect teachers and scholars who hold unpopular views by making it difficult to fire them. But when tenure is the main game in town, the stakes associated with hiring a new faculty member are high, making departments risk-averse. Thus, in order to be considered for tenure-track jobs, candidates have strong reasons to conceal unpopular political beliefs and to pursue relatively conservative lines of research.

By “conservative” I do not mean politically conservative. Quite the opposite.

If most people in a department where you’ve applied are progressives, it is not likely that your allegiance to any non-progressive views will help your cause. Tenured faculty members who make those decisions are often unwilling to take a chance on somebody with eccentric or politically unpopular views, since when a tenure-track position is filled, the candidate who fills it will probably be a colleague for life.

This is not only unfair; it is contrary to the mission of most universities. Research by Professor Jonathan Haidt suggests that political bias negatively impacts the quality of research by stifling open debate. But it’s one of the unintended results of tenure.

Tenure can, of course, protect people with unpopular views. Consider Edward Wilson and Arthur Jensen, eminent scholars at Harvard and Berkeley who have argued, among other things, that different groups of human beings exhibit average differences in genetically-mediated characteristics, including general intelligence and impulse control. Tenure protected their careers, although it didn’t protect them from death threats and intimidation.

On the other hand, it is likely that many more controversial scholars will never be hired in the first place because those on the hiring committee are hostile to their ideas.

Tenure also makes it much harder to terminate faculty members. It was never supposed to be a guarantee that one will never be fired. According to the American Association of University Professors, tenure can be revoked if members of a department can demonstrate that a colleague exhibits incompetence, or engages in academic fraud or seriously immoral behavior.

But even when these things can be shown, it is often easier for faculty and administration to ignore the problem than to mount a costly battle to fire a colleague.

This is one reason many tenure-track jobs are being replaced with adjunct positions, which is a temporary fix for a deeper problem. In the long run, it is likely that the quality of student education and faculty research would increase under a system that offered faculty a greater diversity of contracts, reflecting a faculty member’s ongoing accomplishments, experience, and contributions to the university.

In effect, tenure is a barrier to entry in the academic job market that makes it difficult to replace poorly performing faculty with better alternatives. We should applaud rather than protest the recent decision of the Wisconsin legislature to force the University of Wisconsin to experiment with new ways of conducting the business of hiring and firing faculty.

This post first appeared at the John William Pope Center. 

Jonathan Anomaly

Should We Fear the Era of Driverless Cars or Embrace the Coming Age of Autopilot? by Will Tippens

Driving kills more than 30,000 Americans every year. Wrecks cause billions of dollars in damages. The average commuter spends nearly 40 hours a year stuck in traffic and, over a lifetime, almost five years just driving.

But there is light at the end of the traffic-jammed tunnel: the driverless car. Thanks to millions of dollars in driverless technology investment by tech giants like Google and Tesla, the era of road rage, drunk driving, and wasted hours behind the wheel could be left in a cloud of dust within the next two decades.

Despite the immense potential of self-driving vehicles, commentators are already dourly warning that such automation will produce undesirable effects. As political blogger Scott Santens warns,

Driverless vehicles are coming, and they are coming fast…. As close as 2025 — that is in a mere 10 years — our advancing state of technology will begin disrupting our economy in ways we can’t even yet imagine. Human labor is increasingly unnecessary and even economically unviable compared to machine labor.

The problem, Santens says, is that there are “over 10 million American workers and their families whose incomes depend entirely or at least partially on the incomes of truck drivers.” These professional drivers will face unemployment within the next two decades due to self-driving vehicles.

Does this argument sound familiar?

These same objections have sprung up at every major stage of technological innovation since the Industrial Revolution, from the textile-working Luddites destroying power looms in the 1810s to taxi drivers in 2015 smashing Uber cars.

Many assume that any initial job loss accompanying new technology harms the economy and further impoverishes the most vulnerable, whether fast food workers or truck drivers. It’s true that losing a job can be an individual hardship, but are these same pundits ready to denounce the creation of the light bulb as an economic scourge because it put the candle makers out of business?

Just as blacksmithing dwindled with the decline of the horse-drawn buggy, economic demand for certain jobs waxes and wanes. Jobs arise and continue to exist for the sole reason of satisfying consumer demands, and the consumer’s demands are continuously evolving. Once gas heating devices became available, most people decided that indoor fires were dirtier, costlier, and less effective at heating and cooking, so they switched. While the change temporarily disadvantaged those in the chimney-sweeping business, the added value of the gas stove vastly improved the quality of life for everyone, chimney sweeps included.

There were no auto mechanics before the automobile and no web designers before the Internet. It is impossible to predict all the new employment opportunities a technology will create beforehand. Countless jobs exist today that were unthinkable in 1995 — and 20 years from now, people will be employed in ways we cannot yet begin to imagine, with the driverless car as a key catalyst.

The historical perspective doesn’t assuage the naysayers. If some jobs can go extinct, couldn’t all jobs go extinct?

Yes, every job we now know could someday disappear — but so what? Specific jobs may come and go, but that doesn’t mean we will ever see a day when labor is no longer demanded.

Economist David Ricardo demonstrated in 1817 that each person has a comparative advantage due to different opportunity costs. Each person is useful, and no matter how unskilled he or she may be, there will always be something that each person has a special advantage in producing. When this diversity of ability and interest is coupled with the infinite creativity of freely acting individuals, new opportunities will always arise, no matter how far technology advances.
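Ricardo's point can be made concrete with hypothetical numbers: even if one person is better at producing everything, the other still has a comparative advantage in whatever costs them less in forgone output of the other good. A toy sketch (all figures invented for illustration):

```python
# Toy illustration of comparative advantage. Ann out-produces Bob in both
# goods (absolute advantage), yet Bob's opportunity cost of cloth is lower,
# so specialization and trade still benefit both.
ann = {"bread": 6, "cloth": 3}   # units produced per hour (hypothetical)
bob = {"bread": 1, "cloth": 2}

def opportunity_cost(worker, good, other):
    # units of `other` forgone per unit of `good` produced
    return worker[other] / worker[good]

print(opportunity_cost(ann, "bread", "cloth"))  # 0.5 cloth per bread
print(opportunity_cost(bob, "bread", "cloth"))  # 2.0 cloth per bread
# Ann gives up less cloth per loaf, so Ann bakes; Bob gives up less bread
# per yard of cloth, so Bob weaves. Trade leaves both better off.
```

The same logic is why new technology never makes human labor worthless as such: whatever machines do cheaply, people retain a comparative advantage somewhere else.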

Neither jobs nor labor are ends in themselves — they are mere means to the goal of wealth production. This does not mean that every person is concerned only with getting rich, but as Henry Hazlitt wrote in Economics in One Lesson, real wealth consists in what is produced and consumed: the food we eat, the clothes we wear, the houses we live in. It is railways and roads and motor cars; ships and planes and factories; schools and churches and theaters; pianos, paintings and books.

In other words, wealth is the ability to fulfill subjective human desires, whether that means having fresh fruit at your local grocery or being able to easily get from point A to point B. Labor is simply a means to these ends. Technology, in turn, allows labor to become far more efficient, resulting in more wealth diffused throughout society.

Everyone knows that using a bulldozer to dig a ditch in an hour is preferable to having a whole team of workers spend all day digging it by hand. The “surplus” workers are now available to do something else in which they can produce more highly valued goods and services.  Over time, in an increasingly specialized economy, productivity rises and individuals are able to better serve one another through mutually beneficial exchanges in the market. This ongoing process of capital accumulation is the key to all meaningful prosperity and the reason all of humanity has seen an unprecedented rise in wealth, living standards, leisure, and health in the past two centuries.

Technology is always uncertain going forward. Aldous Huxley warned in 1927 that jukeboxes would put live artists out of business. Time magazine predicted the computer would wreak economic chaos in the 1960s.

Today, on the cusp of one of the biggest innovations since the Internet, there is, predictably, similar opposition. But those who wring their hands at the prospect of the driverless car fail to see that its greatest potential lies not in reducing pollution and road deaths, nor in lowering fuel costs and insurance rates, but rather in its ability to liberate billions of hours of human potential that truckers, taxi drivers, and commuters now devote to focusing on the road.

No one can know exactly what the future will look like, but we know where we have been, and we know the principles of human flourishing that have guided us here.

If society is a car, trade is the engine — and technology is the gas. It drives itself. Enjoy the ride.

Will Tippens

Will Tippens is a recent law school graduate living in Memphis.
