
Driverless Money by George Selgin

Last week I was contemplating a post having to do with driverless cars when, wouldn’t you know it, I received word that the Bank of England had just started a new blog called Bank Underground, and the first substantive post on it had to do with — you guessed it — driverless cars.

As it turned out, I needn’t have worried that Bank Underground had stolen my fire. The post, you see, was written by some employees in the Bank of England’s General Insurance Supervision Division, whose concern was that driverless cars might be bad news for the insurance industry.

The problem, as the Bank of England’s experts see it, is that cars like the ones that Google plans to introduce in 2020 are much better drivers than we humans happen to be — so much better, according to research cited in the post, that “the entire basis of motor insurance, which mainly exists because people crash, could … be upended.”

Driverless cars, therefore, threaten to “wipe out traditional motor insurance.”

It is, of course, a great relief to know that the Bank of England’s experts are keeping a sharp eye out for such threats to the insurance industry. (I suppose they must be working as we speak on some plan for addressing the dire possibility — let us hope it never comes to this — that cancer and other diseases will eventually be eradicated.)

But my own interest in driverless cars is rather different. So far as I’m concerned, the advent of such cars should have us all wondering, not about the future of the insurance industry, but about the future of…the Bank of England, or rather of it and all other central banks.

If driverless cars can upend “the entire basis of motor insurance,” then surely, I should think, an automatic or “driverless” monetary system ought to be capable of upending “the entire basis of monetary policy,” as such policy is presently conducted.

And that, so far as I’m concerned, would be a jolly good thing.

Am I drifting into science fiction? Let’s put matters in perspective. Although experiments involving driverless or “autonomous” cars have been going on for decades, until as recently as one decade ago, the suggestion that such cars would soon be, not only safe enough to replace conventional ones, but far safer, would have struck many people as fantastic.

Consider for a moment the vast array of contingencies such vehicles must be capable of taking into account in order to avoid accidents and get passengers to some desired destination. Besides having to determine correct routes, follow their many twists and turns, obey traffic signals, and parallel park, they have to be capable of evading all sorts of unpredictable hazards, including other errant vehicles, not to mention jaywalkers and such.

The relevant variables are, in fact, innumerable. Yet using a combination of devices tech wizards have managed to overcome almost every hurdle, and will soon have overcome the few that remain.

All of this would be impressive enough even if human beings were excellent drivers. In fact, they are often very poor drivers indeed, which means that driverless cars are capable, not only of being just as good, but of being far better —  90 percent better, to be precise, since that’s the percentage of all car accidents attributable to human error.

Human beings are bad drivers for all sorts of reasons. They have to perform other tasks that take their mind off the road; their vision is sometimes impaired; they misjudge their own driving capabilities or the workings of their machines; some are sometimes inclined to show off, while others are dangerously timid. Occasionally, instead of relying on their wits, they drive “under the influence.”

Central bankers, being human, suffer from similar human foibles. They are distracted by the back-seat ululations of commercial bankers, exporters, finance ministers, and union leaders, among others. Their vision is at the same time both cloudy and subject to myopia.

Finally, few if any are able to escape altogether the disorienting influence of politics. The history of central banking is, by and large, a history of accidents, if not of tragic accidents, stemming from these and other sorts of human error.

It should not be so difficult, then, to imagine that a “driverless” monetary system might spare humanity such accidents, by guiding monetary policy more responsibly than human beings are capable of doing.

How complicated a challenge is this? Is it really more complicated than that involved in, say, driving from San Francisco to New York? Central bankers themselves like to think so, of course — just as most of us still like to believe that we are better drivers than any computer.

But let’s be reasonable. At bottom central bankers, in their monetary policy deliberations, have to make a decision concerning one thing, and one thing only: should they acquire or sell assets, and how many, or should they do neither?

Unlike a car, which has numerous controls — a steering wheel, signal lights, brakes, and an accelerator — a central bank has basically one, consisting of the instrument with which it adjusts the rate at which assets flow into or out of its balance sheet. Pretty simple.

And the flow itself? Here, to be sure, things get more complicated. What “target” should the central bank have in mind in determining the flow? Should it consist of a single variable, like the inflation rate, or of two or more variables, like inflation and unemployment? But the apparent complexity is, in my humble opinion, a result of confusion on monetary economists’ part, rather than of any genuine trade-offs central bankers face.

As Scott Sumner has been indefatigably arguing for some years now (and as I myself have long maintained), sound monetary policy isn't a matter of having either a constant rate of inflation or any particular level of employment or real output. It's a matter of securing a stable flow of spending, or nominal GDP (NGDP), while leaving it to the marketplace to determine how that flow breaks down into separate real output and inflation-rate components.
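
To put the point in symbols (this is just the standard equation of exchange, not anything unique to Sumner's or Selgin's proposals): writing M for the money stock, V for its velocity of circulation, P for the price level, and y for real output,

$$
MV = Py = \text{NGDP}, \qquad \%\Delta\,\text{NGDP} \;\approx\; \pi + \%\Delta\,y,
$$

so a rule that stabilizes NGDP growth pins down only the sum of inflation and real-output growth, leaving the split between the two for the market to determine.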

Scott would have NGDP grow at an annual rate of 4-5 percent; I would be more comfortable with a rate of 2-3 percent. But this number is far less important to the achievement of macroeconomic stability than a commitment to keeping the rate — whatever it happens to be — stable and, therefore, predictable.

So: one goal, and one control. That's much simpler than driving from San Francisco to New York. Heck, it's simpler than managing the twists and turns of San Francisco's Lombard Street.

And the technology? In principle, one could program a computer to manage the necessary asset purchases or sales. That idea itself is an old one, Milton Friedman having contemplated it almost forty years ago, when computers were still relatively rare.

What Friedman could not have imagined then was a protocol like the one that controls the supply of bitcoins, which has the distinct advantage of being, not only automatic, but tamper-proof: once set going, no-one can easily alter it. The advantage of a bitcoin-style driverless monetary system is that it is, not only capable of steering itself, but incapable of being hijacked.

The bitcoin protocol itself allows the stock of bitcoins to grow at a predetermined and ever-diminishing rate, so that the stock of bitcoins will cease to grow as it approaches a limit of 21 million coins.
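
To see how a predetermined schedule of that sort produces a hard ceiling, here is a minimal Python sketch (illustrative only, not the actual Bitcoin source code) based on the protocol's published parameters: an initial block subsidy of 50 bitcoins, cut in half every 210,000 blocks, with amounts tracked as integer satoshis.

```python
# Illustrative sketch only; not the real Bitcoin implementation.
# Published schedule: 50 BTC block subsidy, halved every 210,000 blocks,
# tracked as integer satoshis (1 BTC = 100,000,000 satoshis).

SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000                  # blocks per subsidy era
INITIAL_SUBSIDY = 50 * SATOSHIS_PER_BTC     # starting reward, in satoshis

def asymptotic_supply_btc() -> float:
    """Sum issuance over successive halving eras until the subsidy rounds to zero."""
    total, subsidy = 0, INITIAL_SUBSIDY
    while subsidy > 0:
        total += subsidy * HALVING_INTERVAL
        subsidy //= 2                       # integer halving eventually reaches zero
    return total / SATOSHIS_PER_BTC

print(f"{asymptotic_supply_btc():,.2f} BTC")  # just under 21,000,000
```

The ever-diminishing issuance is what makes the 21 million ceiling a mathematical consequence of the schedule rather than a number anyone has to enforce after the fact.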

But all sorts of protocols may be possible, including ones that would adjust a currency’s supply growth according to its velocity — that is, the rate at which the currency is being spent — so as to maintain a steady flow of spending, à la Sumner. The growth rate could even be made to depend on market-based indicators of the likely future value of NGDP.
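
As a purely hypothetical illustration of what the core rule of such a protocol might look like (my own toy sketch, not Sumner's or Selgin's actual proposal; every name and figure below is invented), consider:

```python
# Hypothetical "driverless" money rule, for illustration only.
# Using the equation of exchange M * V = NGDP, the rule sets the money stock so
# that, at the most recently observed velocity, total spending stays on a
# predetermined NGDP growth path. All names and figures are invented.

def next_money_stock(ngdp_path_prev: float,
                     observed_velocity: float,
                     annual_growth: float = 0.03) -> tuple[float, float]:
    """Return (next NGDP target, money stock that hits it at the observed velocity)."""
    ngdp_target = ngdp_path_prev * (1 + annual_growth)
    money_stock = ngdp_target / observed_velocity   # rearranged from M * V = NGDP
    return ngdp_target, money_stock

# Example: spending was on a $20 trillion path and velocity has slipped to 1.4,
# so the rule expands the money stock enough to keep NGDP growing at 3 percent.
target, money = next_money_stock(ngdp_path_prev=20.0, observed_velocity=1.4)
print(f"NGDP target: ${target:.2f} trillion; required money stock: ${money:.2f} trillion")
```

A real protocol would, of course, need tamper-proof measures of spending or velocity; that is where the market-based indicators of expected NGDP mentioned above would come in.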

This isn’t to say that there aren’t any challenges yet to be overcome in designing a reliable “driverless money.” For one thing, the monetary system as a whole has to be functioning properly: just as a driverless car won’t work if the steering linkage is broken, a driverless monetary system won’t work if it’s so badly tuned that banks end up just sitting on any fresh reserves that come their way.

My point is rather that there’s no good reason for supposing that such challenges are any more insuperable than those against which the designers of driverless cars have prevailed. If driverless car technology has managed to take on San Francisco’s Lombard Street, I see no reason why driverless money technology couldn’t eventually tackle London’s.

What’s more, there is every reason to believe that driverless money would, if given a chance, prove to be far more beneficial to mankind than driverless cars ever will.

For although bad drivers cause plenty of accidents, none has yet managed to wreck an entire economy, as reckless central bankers have sometimes done. If driverless monetary systems merely served to avoid the worst macroeconomic pileups, that alone would be reason enough to favor them.

But they can surely do much better than that. Who knows: perhaps the day will come when, thanks to improvements in driverless monetary technology, central bankers will find themselves with nothing better to do than worry about the future of the hedge fund industry.

Cross-posted from Alt-M.org and Cato.org.

George Selgin

What Can the Government Steal? Anything It Pays For! by Daniel Bier

“…Nor shall private property be taken for public use, without just compensation.” – Fifth Amendment to the U.S. Constitution 

On Monday, I wrote about the Supreme Court’s decision in the case of Horne v. USDA, in which the Court ruled almost unanimously against the government’s attempt to confiscate a third of California raisin farmers’ crops without paying them a dime for it.

The confiscation was part of an absurd FDR-era program meant to increase the price of food crops by restricting the supply; the government would then sell or give away the raisins to foreign countries or other groups.

Overall, this ruling was a big win for property rights (or, at least, not the huge loss it could have been).

But there’s one issue that’s been overlooked here, and it relates to the Court’s previous decision in Kelo v. City of New London, the eminent domain case that also just turned 10 horrible years old yesterday.

In Horne, eight justices concluded that physically taking the farmers’ raisins and carting them away in trucks was, in fact, a “taking” under the Fifth Amendment that requires “just compensation.”

That sounds like common sense, but the Ninth Circuit Court of Appeals had ruled that the seizure wasn’t a taking that required compensation because, in their view, the Fifth Amendment gives less protection to “personal property” (i.e., stuff, like raisins or cars) than to “real property” (i.e., land).

The Court thankfully rejected this dangerous and illogical premise.

But while eight justices agreed on the basic question of the taking, only five agreed on the matter of just compensation.

The majority concluded that the government had to pay the farmers the current market value of the crops they wanted to take, which is standard procedure in a takings case (like when the government wants to take your home to build a road).

Justice Breyer (joined by Justices Ginsburg and Kagan) wrote a partial dissent, accepting the federal government's argument that the case should be sent back to the lower court to calculate how much the farmers were owed.

Their curious reasoning was that, since the program was distorting the market and pushing up the price of raisins, the government should be able to offset the benefit the farmers received from that artificially inflated price against the value of the raisins that were taken. The government argued that, under this calculation, the farmers would actually end up getting more value than was taken from them.

Chief Justice Roberts, writing for the majority, derided this argument: “The best defense may be a good offense, but the Government cites no support for its hypothetical-based approach.”

But the most interesting part of this subplot came from Justice Thomas. Thomas fully agreed with Roberts' majority opinion, but he wrote his own one-page concurrence on the question of how to calculate "just compensation," and it went right at the heart of Kelo.

In Kelo, a bare majority of the Court ruled that the government could seize people’s homes and give them to private developers, on the grounds that the government expected more taxes from the new development.

Marc Scribner explains how the Court managed to dilute the Fifth Amendment’s “public use” requirement into a “public purpose” excuse that allows the government to take property for almost any reason it can dream up.

Thomas’s concurrence disputes Breyer’s argument about calculating “just compensation” by pointing out that, had Kelo had been correctly decided, the government wouldn’t be allowed to take the farmers’ crops at all — even if it paid for them.

Thomas wrote (emphasis mine),

The Takings Clause prohibits the government from taking private property except “for public use,” even when it offers “just compensation.”

And quoting his dissent in Kelo:

That requirement, as originally understood, imposes a meaningful constraint on the power of the state — ”the government may take property only if it actually uses or gives the public a legal right to use the property.”

It is far from clear that the Raisin Administrative Committee’s conduct meets that standard. It takes the raisins of citizens and, among other things, gives them away or sells them to exporters, foreign importers, and foreign governments.

To the extent that the Committee is not taking the raisins “for public use,” having the Court of Appeals calculate “just compensation” in this case would be a fruitless exercise.

Unfortunately, Chief Justice Roberts already treats the "public use" requirement as a dead letter, writing at one point in his opinion: "The Government correctly points out that a taking does not violate the Fifth Amendment unless there is no just compensation."

But that isn’t true. A taking violates the Fifth Amendment, first and foremost, if it is not taken for “public use.” And confiscating raisins and giving them to foreign governments in order to keep the price of raisins in the United States artificially high does not, in any sane world, meet that standard.

What Thomas didn’t say, but clearly implied, was that the Court should have struck down the raisin-stealing scheme entirely, rather than just forcing the government pay for the crops it takes.

The Horne decision was good news, but it didn't go far enough: it stopped short of imposing a meaningful limit on what counts as "public use." The Court could have done that in this case by overturning Kelo, or at least by adding some limitations on what governments can lawfully take private property for.

Happily, Justice Thomas isn’t throwing in the towel on Kelo, and Justice Scalia has predicted that the decision will eventually be overturned.

So can the government still take your property for no good reason? Yes, for now. But at least they have to pay for it.

That’s not nothing. And for raisin farmers in California, it’s a whole lot.


Daniel Bier

Daniel Bier is the editor of Anything Peaceful. He writes on issues relating to science, civil liberties, and economic freedom.

Inequality: The Rhetoric and Reality by James A. Dorn

The publication of Thomas Piketty's bestseller Capital in the Twenty-First Century has drawn widespread attention to the rising gap between rich and poor, and has prompted populist calls for government to redistribute income and wealth.

Purveyors of that rhetoric, however, overlook the reality that when the state plays a major role in leveling differences in income and wealth, economic freedom is eroded. The problem is, economic freedom is the true engine of progress for all people.

Income and wealth are created in the process of discovering and expanding new markets. Innovation and entrepreneurship extend the range of choices open to people. And yet not everyone is equal in their contribution to this process. There are differences among people in their abilities, motivations, and entrepreneurial talent, not to mention their life circumstances.

Those differences are the basis of comparative advantage and the gains from voluntary exchanges on private free markets. Both rich and poor gain from free markets; trade is not a zero- or negative-sum game.

Attacking the rich, as if they are guilty of some crime, and calling for state action to bring about a “fairer” distribution of income and wealth leads to an ethos of envy — certainly not one that supports the foundations of abundance: private property, personal responsibility, and freedom.

In an open market system, people who create new products and services prosper, as do consumers. Entrepreneurs create wealth and choices. The role of the state should be to safeguard rights to property and let markets flourish. When state power trumps free markets, choices are narrowed and opportunities for wealth creation are lost.

Throughout history, governments have discriminated against the rich, ultimately harming the poor. Central planning should have taught us that replacing private entrepreneurs with government bureaucrats merely politicizes economic life and concentrates power; it does not widen choices or increase income mobility.

Peter Bauer, a pioneer in development economics, recognized early on that “in a modern open society, the accumulation of wealth, especially great wealth, normally results from activities which extend the choices of others.”

Government has the power to coerce, but private entrepreneurs must persuade consumers to buy their products and convince investors to support their vision. The process of “creative destruction,” as described by Joseph Schumpeter, means that dynastic wealth is often short-lived.

Bauer preferred to use the term “economic differences” rather than “economic inequality.” He did so because he thought the former would convey more meaning than the latter. The rhetoric of inequality fosters populism and even extremism in the quest for egalitarian outcomes. In contrast, speaking of differences recognizes reality and reminds us that “differences in readiness to utilize economic opportunities — willingness to innovate, to assume risk, to organize — are highly significant in explaining economic differences in open societies.”

What interested Bauer was how to increase the range of choices open to people, not how to use government to reduce differences in income and wealth. As Bauer reminded us,

Political power implies the ability of rulers forcibly to restrict the choices open to those they rule. Enforced reduction or removal of economic differences emerging from voluntary arrangements extends and intensifies the inequality of coercive power.

Equal freedom under a just rule of law and limited government doesn’t mean that everyone will be equal in their endowments, motivations, or aptitudes. Disallowing those differences, however, destroys the driving force behind wealth creation and poverty reduction. There is no better example than China.

Under Mao Zedong, private entrepreneurs were outlawed, as was private property, which is the foundation of free markets. Slogans such as “Strike hard against the slightest sign of private ownership” allowed little room for improving the plight of the poor. The establishment of communes during the “Great Leap Forward” (1958–1961) and the centralization of economic decision making led to the Great Famine, ended civil society, and imposed an iron fence around individualism while following a policy of forced egalitarianism.

In contrast, China’s paramount leader Deng Xiaoping allowed the resurgence of markets and opened China to the outside world. Now the largest trading nation in the world, China has demonstrated that economic liberalization is the best cure for broadening people’s choices and has allowed hundreds of millions of people to lift themselves out of poverty.

Deng’s slogan “To get rich is glorious” is in stark contrast to Mao’s leveling schemes. In 1978, and as recently as 2002, there were no Chinese billionaires; today there are 220. That change would not have been possible without the development of China as a trading nation.

There are now 536 billionaires in the United States and growing animosity against the "1 percent" — especially among those who were harmed by the Great Recession. Nevertheless, polls have shown that most Americans think economic growth is far more important than capping the incomes of the very rich or narrowing the income gap. Only 3 percent of those polled by CBS and the New York Times in January thought that economic inequality was the primary problem facing the nation. Most Americans are more concerned with income mobility — that is, moving up the income ladder — than with penalizing success.

Regardless, some politicians will use inflammatory rhetoric to make differences between rich and poor the focus of their campaigns in the presidential election season. In doing so, they should recognize the risks that government intervention in the creation and distribution of income and wealth poses for a free society and for all-around prosperity.

Government policies can widen the gap between rich and poor through corporate welfare, through unconventional monetary policy that penalizes savers while pumping up asset prices, and through minimum wage laws and other legislation that price low-skilled workers out of the market and thus impede income mobility.

A positive program designed to foster economic growth — and leave people free to choose — by lowering marginal tax rates on labor and capital, reducing costly regulations, slowing the growth of government, and normalizing monetary policy would be the best medicine to benefit both rich and poor.


James A. Dorn

James A. Dorn is vice president for monetary studies, editor of the Cato Journal, senior fellow, and director of Cato’s annual monetary conference.

Real Heroes: A Good Samaritan in Cambodia by Lawrence W. Reed

In 30 years of traveling to 81 countries, I’ve come across some pretty nasty governments and some darn good people. To be fair, I should acknowledge that I’ve also encountered some rotten people and a half-decent government or two. The ghastliest of all worlds, of course, is when you have rotten people running nasty governments — a combination that is not in short supply.

Indeed, as Nobel laureate F.A. Hayek famously explained in The Road to Serfdom, the worst tend to rise to the top of all regimes — yet another reason to keep government small in the first place (as if we needed another reason).

“The unscrupulous and uninhibited,” wrote Hayek, “are likely to be more successful” in any society in which government dominates life and the economy. That’s precisely the kind of circumstance that elevates power over persuasion, force over cooperation, arrogance over humility, and corruption over honesty.

So I take special note when I encounter instances of good people working around, in spite of, in opposition to, or simply without a helping hand from government. In today’s dominant culture and climate, private initiative is frequently shortchanged or viewed with suspicion. In some quarters, “private” means unreliably compassionate, incorrigibly greedy, or hopelessly unplanned. We’re overdue for a celebration of the good character many people exhibit when there’s no fame or fortune in it, just the satisfaction that comes from knowing you’ve done the right thing.

Sadly, I can’t give you the name of the person I want to tell you about, and shame on me for that. I spent a grand total of perhaps an hour with him, in short increments as he gave me rides in his “cyclo” (or rickshaw) from one place to another in Phnom Penh, Cambodia, in August 1989. When I was about to fly home to the United States, I gave him something without ever expecting he would do with it what I asked. I wish I’d had the presence of mind to ask for his name and contact information because, in all the years since, I’ve wished for an opportunity to thank him.

I lived in Midland, Michigan, at the time. The area press, particularly the Midland Daily News and the Saginaw News, featured stories about my upcoming visit to Southeast Asia. Local doctors donated medical supplies for me to take to a hospital in the Cambodian capital. A woman named Sharon from a local church saw the news stories. She called me and explained that a few years before, her church had helped Cambodian families who escaped from the Khmer Rouge communists and resettled in mid-Michigan. The families had moved on to other locations in the United States but stayed in touch with the friends they had made in Midland.

Sharon told me that she sent copies of the news stories to her Cambodian friends her church had helped a few years before. Through Sharon, each family asked if I would take letters with cash enclosed to their desperately poor relatives in Cambodia. When they sent anything through the mail, it usually didn’t end up where it was supposed to, especially if cash was involved. I offered to do my best, with no guarantees.

The families who were in Phnom Penh would prove relatively easy to locate, but the last family was many miles away in Battambang. That would have involved a train ride, some personal risk, and a lot of time I didn’t have. If I couldn’t locate any of the families, I was advised not to bring the cash back home but to give it to any poor person. Finding poor Cambodians in 1989, after the savagery the nation endured under the butchery of the Khmer Rouge a decade before, was like looking for fish in an aquarium.

When I realized I wasn’t going to make it to Battambang, I approached a man in tattered clothes in the hotel lobby. I had seen him there a few times before. He always smiled and said hello, and spoke enough English to carry on some short conversations. I had a sense — intuition, perhaps — that he was a decent person.

“I have an envelope with a letter and $200 in it, intended for a very needy family in Battambang. Do you think you could get this to them?” I asked. He replied in the affirmative. “Keep $50 of it if you find them,” I instructed. We said goodbye. I assumed I would never hear anything of what became of either him or the money. I am pained to this day by the realization that without much thought, I had sold him short.

Back home in Michigan several months later, I received an excited phone call from Sharon. "The Cambodians in Virginia whose family in Battambang that last envelope was intended for just received a letter from their loved ones back home!" And then she read me a couple of paragraphs from that letter. The final sentence read, "Thank you for the two hundred dollars!"

That man whose name I'm unsure of and whose address I never secured had found his way to Battambang. Not only did he not keep the $50 I offered; he somehow had found a way to pay for the train ride himself. Does his act of honesty tug at your heartstrings? If it does, then you appreciate something the world desperately needs, something that is indispensable to a free and moral society. The man I entrusted with the money was poor in material wealth but rich in something more important. As I wrote in a recent book,

Ravaged by conflict, corruption and tyranny, the world is starving for people of character. Indeed, as much as anything, it is on this matter that the fate of individual liberty has always depended. A free society flourishes when people seek to be models of honor, honesty, and propriety at whatever the cost in material wealth, social status, or popularity. It descends into barbarism when they abandon what’s right in favor of self-gratification at the expense of others; when lying, cheating, or stealing are winked at instead of shunned.

If you want to be free, if you want to live in a free society, you must assign top priority to raising the caliber of your character and learning from those who already have it in spades. If you do not govern yourself, you will be governed.

Character means that there are no matters too small to handle the right way. It’s been said that your character is defined by what you do when no one is looking. Cutting corners because “it won’t matter much” or “no one will notice” still knocks your character down a notch and can easily become a slippery slope.

In 2016, I hope to visit Cambodia again. It will be my first time there since 1989. I have a slim lead on how I might find the man I gave that letter and $200 to. I know it’s a long shot. He may have moved away or passed on. But if I find him, it will be a thrill I’ll never forget.

I will embrace him as a brother and be sure he understands that in my book, he is one Real Hero.



Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

EDITOR'S NOTE: Each week, Mr. Reed will relate the stories of people whose choices and actions make them heroes. See the table of contents for previous installments.

Socialism Is War and War Is Socialism by Steven Horwitz

“[Economic] planning does not accidentally deteriorate into the militarization of the economy; it is the militarization of the economy.… When the story of the Left is seen in this light, the idea of economic planning begins to appear not only accidentally but inherently reactionary. The theory of planning was, from its inception, modeled after feudal and militaristic organizations. Elements of the Left tried to transform it into a radical program, to fit into a progressive revolutionary vision. But it doesn’t fit. Attempts to implement this theory invariably reveal its true nature. The practice of planning is nothing but the militarization of the economy.” — Don Lavoie, National Economic Planning: What Is Left?

Libertarians have long confounded our liberal and conservative friends by being both strongly in favor of free markets and strongly opposed to militarism and foreign intervention. In the conventional world of “right” and “left,” this combination makes no sense. Libertarians are often quick to point out the ways in which free trade, both within and across national borders, creates cooperative interdependencies among those who trade, thereby reducing the likelihood of war. The long classical liberal tradition is full of those who saw the connection between free trade and peace.

But there’s another side to the story, which is that socialism and economic planning have a long and close connection with war and militarization.

As Don Lavoie argues at length in his wonderful and underappreciated 1985 book National Economic Planning: What Is Left?, any attempt to substitute economic planning (whether comprehensive and central or piecemeal and decentralized) for markets inevitably ends up militarizing and regimenting the society. Lavoie points out that this outcome was not an accident. Much of the literature defending economic planning worked from a militaristic model. The “success” of economic planning associated with World War I provided early 20th century planners with a specific historical model from which to operate.

This connection should not surprise those who understand the idea of the market as a spontaneous order. As good economists from Adam Smith to F.A. Hayek and beyond have appreciated, markets are the products of human action but not human design. No one can consciously direct an economy. In fact, Hayek in particular argued that this is true not just of the economy, but of society in general: advanced commercial societies are spontaneous orders along many dimensions.

Market economies have no purpose of their own, or as Hayek put it, they are “ends-independent.” Markets are simply means by which people come together to pursue the various ends that each person or group has. You and I don’t have to agree on which goals are more or less important in order to participate in the market.

The same is true of other spontaneous orders. Consider language. We can both use English to construct sentences even if we wish to communicate different, or contradictory, things with the language.

One implication of seeing the economy as a spontaneous order is that it lacks a “collective purpose.” There is no single scale of values that guides us as a whole, and there is no process by which resources, including human resources, can be marshaled toward those collective purposes.

The absence of such a collective purpose or common scale of values is one factor that explains the connection between war and socialism. They share a desire to remake the spontaneous order of society into an organization with a single scale of values, or a specific purpose. In a war, the overarching goal of defeating the enemy obliterates the ends-independence of the market and requires that hierarchical control be exercised in order to direct resources toward the collective purpose of winning the war.

In socialism, the same holds true. To substitute economic planning for the market is to reorganize the economy to have a single set of ends that guides the planners as they allocate resources. Rather than being connected with each other by a shared set of means, as in private property, contracts, and market exchange, planning connects people by a shared set of ends. Inevitably, this will lead to hierarchy and militarization, because those ends require trying to force people to behave in ways that contribute to the ends’ realization. And as Hayek noted in The Road to Serfdom, it will also lead to government using propaganda to convince the public to share a set of values associated with some ends. We see this tactic in both war and socialism.

As Hayek also pointed out, this is an atavistic desire. It is a way for us to try to recapture the world of our evolutionary past, where we existed in small, homogeneous groups in which hierarchical organization with a common purpose was possible. Deep in our moral instincts is a desire to have the solidarity of a common purpose and to organize resources in a way that enables us to achieve it.

Socialism and war appeal to so many because they tap into an evolved desire to be part of a social order that looks like an extended family: the clan or tribe. Soldiers are not called “bands of brothers” and socialists don’t speak of “a brotherhood of man” by accident. Both groups use the same metaphor because it works. We are susceptible to it because most of our history as human beings was in bands of kin that were largely organized in this way.

Our desire for solidarity is also why calls for central planning on a smaller scale have often tried to claim their cause as the moral equivalent of war. This is true on both the left and right. We have had the War on Poverty, the War on Drugs, and the War on Terror, among others. And we are “fighting,” “combating,” and otherwise at war with our supposedly changing climate — not to mention those thought to be responsible for that change. The war metaphor is the siren song of those who would substitute hierarchy and militarism for decentralized power and peaceful interaction.

Both socialism and war are reactionary, not progressive. They are longings for an evolutionary past long gone, and one in which humans lived lives that were far worse than those we live today. Truly progressive thinking recognizes the limits of humanity’s ability to consciously construct and control the social world. It is humble in seeing how social norms, rules, and institutions that we did not consciously construct enable us to coordinate the actions of billions of anonymous actors in ways that enable them to create incredible complexity, prosperity, and peace.

The right and left do not realize that they are both making the same error. Libertarians understand that the shared processes of spontaneous orders like language and the market can enable all of us to achieve many of our individual desires without any of us dictating those values for others. By contrast, the right and left share a desire to impose their own sets of values on all of us and thereby fashion the world in their own images.

No wonder they don’t understand us.


Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debates” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

A Shrine to a Socialist Demagogue by Lawrence W. Reed

MANAGUA, Nicaragua — It’s May 27, 2015. Driving south on First Avenue toward Masaya on a hot, late-spring day in the Nicaraguan capital, my eye caught an image in the distance. “That looks like Curly from The Three Stooges!” I thought. Nah, what would he be doing here? Nyuk. Nyuk.

As we approached, I suddenly realized it only resembled Curly. It was actually somebody considerably less funny. The statue was a garish, tasteless manifestation of the late Venezuelan socialist strongman Hugo Chavez, surrounded by ugly, orange curlicues. I repressed the urge to gag as I stopped to take this photo:

Hugo Chavez shrine

This tribute to a man whose ceaseless demagoguery ruined his nation’s economy is the doing, of course, of Nicaraguan president Daniel Ortega and his party. Ortega, like Chavez, engineered constitutional changes that may make him effectively president for life. He has worshiped state power since the 1970s. He was a Cuban-trained Marxist and cofounder of the Frente Sandinista de Liberación Nacional, the Sandinistas. I visited the country five times in the 1980s to interview key political figures, and whenever I was there, Ortega was pushing government literacy programs; meanwhile, his government was harassing and shutting down the opposition press.

Back in the 1980s, Ortega relied heavily on subsidies from his Soviet and Cuban sponsors. But now that the Soviets are ancient history and the Cuban economy is on life support, he’s had to moderate. Nicaragua is a very poor country. Its per capita GDP is about a third of the world average, better than Yemen’s but not as deluxe as Uzbekistan’s. According to the 2015 Index of Economic Freedom, however, it’s ranked better than you might expect at 108th in the world. Seventy countries are actually less free.

Who do you think is ranked at the very bottom, at 176, 177, and 178?

None other than the workers’ paradises of Venezuela, Cuba, and North Korea.

If you want a glimpse of the current state of the Chavez/Maduro experiment in Venezuelan socialism, look no further than the relative scarcities of toilet paper (you’d better bring your own if you visit) and paper money (more abundant than ever at 510 percent inflation).

I asked my old friend Deroy Murdock, senior fellow with the Atlas Network, Fox News contributor, and keen observer of affairs in the Americas: How would you assess the legacy of the Venezuelan caudillo memorialized by Ortega’s regime in Nicaragua?

“Hugo Chavez arrived in Venezuela, determined to make his country a gleaming showcase of socialism, and renovate Cuba in the process,” Murdock said. “Now, Chavez is dead, Castro still lives, and both countries remain in dire straits. Chavez’s legacy is the enduring lesson that big government is bad, and huge government is even worse.”

Indeed. Seems pretty self-evident whether you look at the numbers from afar or walk the streets in person. Venezuela’s economy has been in free-fall for almost all of the past 15 years.

But there I was, gazing at a giant Hugo in Managua, a monument intended to say, “Way to go, man!” One wonders where an impoverished country gets the money or even the idea to construct such a hideous gargoyle.

Then I realized the answer: Ortega’s Nicaragua is run by socialists. And by typical socialist reasoning, you can be an architect of disaster but reckoned to be a “man of the people” just by claiming to be one.

If you produced the same results while advocating capitalism, you’d be reckoned a monster.


Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

Kelo: Politicians Stole Her Home for Private Developers and Started a Legal War by Ilya Somin

Most of my new book, The Grasping Hand, focuses on the broader legal and political issues raised by the Supreme Court's ruling in Kelo v. City of New London.

As explained in the first post in this series, I wrote the book primarily to address these big-picture issues.

But the story of how such a momentous case arose from unlikely origins is interesting in its own right.

The case originated with a development project in the Fort Trumbull area of New London, a small city in Connecticut. The neighborhood had fallen on difficult economic times in the 1990s after the closure of a naval research facility.

City officials and others hoped to revitalize it. Republican Governor John Rowland's administration hoped to expand his political base by promoting development in New London; but to avoid having to work directly through the heavily Democratic city government, it helped resuscitate the long-moribund New London Development Corporation (NLDC), a private nonprofit organization established to aid the city with development planning.

The NLDC produced a development plan that would revitalize Fort Trumbull by building housing, office space, and other facilities that would support a new headquarters that Pfizer, Inc. – a major pharmaceutical firm – had agreed to build nearby.

The development plan produced by the NLDC was in large part based on Pfizer’s requirements, which NLDC leaders (some of whom had close ties to Pfizer) were eager to meet. Pfizer would not be the new owner of the redeveloped land, but did expect to benefit from it.

I believe that NLDC leaders genuinely thought the plan would serve the public interest, as did the city and state officials who supported it. But it is also true, as one of those who worked on the plan put it, that Pfizer was the “10,000-pound gorilla” behind the project.

In order to implement the plan, the NLDC sought to acquire land belonging to some ninety different Fort Trumbull property owners.

In 2000, the New London city council authorized the NLDC to use eminent domain to condemn the land of those who refused to sell. Some defenders of the takings emphasize that all but seven of the owners sold “voluntarily.”

But as New London’s counsel Wesley Horton noted in oral argument before the Supreme Court, many did so because there was “always in the background the possibility of being able to condemn… that obviously facilitates a lot of voluntary sales.”

Moreover, owners who were reluctant to sell were subjected to considerable harassment, such as late night phone calls, dumping of waste on their property, and locking out tenants during cold winter weather.

Seven individuals and families, who between them owned fifteen residential properties, refused to sell despite the pressure. One was Susette Kelo, who wanted to hold on to her “little pink house” near the waterfront.

Some of the other families involved had deep roots in the community and did not want to be forced out. Wilhelmina Dery, who was in her eighties, had lived in the same house her whole life, and wished to continue living there during the time left to her.

The Cristofaro family were also strongly attached to their property, which they had purchased in the 1970s after their previous home had been condemned as part of an urban renewal project.

Susette Kelo’s famous “little pink house” in 2004 (photo by Isaac Reese)

The resisting property owners tried to use the political process to prevent the takings. They managed to attract the support of a wide range of people in the community, including many on the political left who believed that it was wrong to forcibly expel people from their homes in order to promote commercial development.

But the Coalition to Save Fort Trumbull organized by the resisters and their allies had little, if any, hope of prevailing against the vastly more powerful forces arrayed against them.

The owners also tried to hire lawyers to fight the taking in court. But the lawyers they approached told them that there was little chance of success, and that – in any event – they could not afford the necessary prolonged legal battle.

The owners would almost certainly have had to capitulate, if not for the intervention of the Institute for Justice, a libertarian public interest law firm. IJ had long been interested in promoting stronger judicial enforcement of “public use” limitations on takings, and one of the members of the Coalition reached out for help.

As IJ lawyer Scott Bullock put it, the Fort Trumbull situation was an “ideal public interest case” for the Institute. Legally, the case was a good one because the city did not claim that the property in question was “blighted” or otherwise causing harm, thereby making it harder to prove that condemnation would genuinely benefit the public.

The case also featured sympathetic plaintiffs who were determined to fight for their rights. That made it likely that it would play well in the court of public opinion, and that it would not be settled before it could lead to a precedent-setting decision.

IJ hoped to achieve a ruling holding that takings that transfer property from one private individual to another for “economic development” do not serve a genuine “public use” and are therefore unconstitutional.

Thanks to IJ’s pro bono legal representation, the case went to trial. In 2002, a Connecticut trial court invalidated the condemnation of 11 of the 15 properties because the city and the NLDC did not have a clear enough plan of what they intended to do with the land.

Both sides appealed to the Connecticut Supreme Court, which upheld all fifteen takings in a close 4-3 decision. The majority ruled that almost any public benefit counts as a “public use” under the state and federal constitutions, and that courts must generally defer to government planners.

In a dissenting opinion, Justice Peter Zarella argued that “the constitutionality of condemnations undertaken for the purpose of private economic development depends not only on the professed goals of the development plan, but also on the prospect of their achievement.”

Presciently, he warned, “The record contains scant evidence to suggest that the predicted public benefit will be realized with any reasonable certainty,” and that it was “impossible to determine whether future development of the area… will even benefit the public at all.”

At this point, most legal commentators (myself included) believed that the case was almost certainly over. Few thought that the federal Supreme Court was going to take a public use case.

Supreme Court precedent dating back to 1954 held that virtually any possible public benefit counts as a public use, and the Court had unanimously reaffirmed that view in 1984. Most experts thought that the debate over the meaning of “public use” had been definitively settled.

But Scott Bullock and Dana Berliner – the IJ lawyers who represented the property owners – thought the conventional wisdom was wrong. And they were vindicated when the Supreme Court unexpectedly agreed to take the case. At that point, much new national media attention was focused on the New London condemnations.

Property law experts were well aware that longstanding Supreme Court precedent permitted the government to take property for almost any reason. But very few members of the general public knew that, and many ordinary Americans were shocked to learn that a city could condemn homes and small businesses in order to promote private development – a reality the publicity surrounding Kelo drove home to them.

The Supreme Court upheld the takings in a 5-4 ruling. But the resulting controversy created a major political backlash and shattered the seeming consensus in favor of a broad approach to public use.

As for the City of New London, Justice Zarella and other skeptics turned out to be right. The NLDC’s flawed development plan fell through, as did a number of later efforts. Richard Palmer, one of the state supreme court justices who voted with the majority, later apologized to Susette Kelo, telling her he “would have voted differently” had he known what would happen.

Today, the condemned land still lies empty, though city officials now plan to build a memorial park honoring the victims of eminent domain, on the former site of Susette Kelo’s house.

The former site of Susette Kelo’s house – May 2014 (photo by Ilya Somin)

In the meantime, feral cats have been using the property. So far, at least, they have been the main local beneficiaries of the takings.

Feral cat near the former site of the Kelo house – March 2011 (photo by Jackson Kuhl)

(I should point out that the events in New London leading up to the Supreme Court case are the subject of an excellent earlier book by journalist Jeff Benedict. My book primarily focuses on the broader legal and policy issues raised by the Kelo case, which Benedict touched on only briefly. But I also cover the origins of the case in Chapter 1, and post-decision developments in New London in the conclusion.)

This post first appeared on the Volokh Conspiracy, where Ilya Somin is a frequent blogger.

You can buy The Grasping Hand on Amazon here.


Ilya Somin

Ilya Somin is Professor of Law at George Mason University School of Law. He blogs at the Volokh Conspiracy.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings to ordinary people and genuine benefits to policyholders. Such plans would encourage thrift and consumer wisdom in health care planning, while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade association health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Is the “Austrian School” a Lie?

Is Austrian economics an American invention? by STEVEN HORWITZ and B.K. MARCUS.

Do those of us who use the word Austrian in its modern libertarian context misrepresent an intellectual tradition?

We trace our roots back through the 20th century’s F.A. Hayek and Ludwig von Mises (both served as advisors to FEE) to Carl Menger in late 19th-century Vienna, and even further back to such “proto-Austrians” as Frédéric Bastiat and Jean-Baptiste Say in the earlier 19th century and Richard Cantillon in the 18th. Sometimes we trace our heritage all the way back to the late-Scholastic School of Salamanca.

Nonsense, says Janek Wasserman in his article “Austrian Economics: Made in the USA”:

“Austrian Economics, as it is commonly understood today,” Wasserman claims, “was born seventy years ago this month.”

As his title implies, Wasserman is not talking about the publication of Principles of Economics by Carl Menger, the founder of the Austrian school. That occurred 144 years ago in Vienna. What happened 70 years ago in the United States was the publication of F.A. Hayek’s Road to Serfdom.

What about everything that took place — most of it in Austria — in the 74 years before Hayek’s most famous book? According to Wasserman, the Austrian period of “Austrian Economics” produced a “robust intellectual heritage,” but the largely American period that followed was merely a “dogmatic political program,” one that “does a disservice to the eclectic intellectual history” of the true Austrian school.

Where modern Austrianism is “associated with laissez-faire economics and libertarianism,” the real representatives of the more politically diverse tradition — economists from the University of Vienna, such as Fritz Machlup, Joseph Schumpeter, and Oskar Morgenstern — were embarrassed by their association with Hayek’s bestseller and its capitalistic supporters.

These “native-born Austrians ceased to be ‘Austrian,'” writes Wasserman, “when Mises and a simplified Hayek captured the imagination of a small group of businessmen and radicals in the US.”

Wasserman describes the popular reception of the Road to Serfdom as “the birth of a movement — and the reduction of a tradition.”

Are we guilty of Wasserman’s charges? Do modern Austrians misunderstand our own tradition, or worse yet, misrepresent our history?

In fact, Wasserman himself is guilty of a profound misunderstanding of the Austrian label, as well as the tradition it refers to.

The “Austrian school” is not a name our school of thought took for itself. Rather it was an insult hurled against Carl Menger and his followers by the adherents of the dominant German Historical School.

The Methodenstreit was a more-than-decade-long debate in the late 19th century among German-speaking social scientists about the status of economic laws. The Germans advocated methodological collectivism, espoused the efficacy of government intervention to improve the economy, and, according to Jörg Guido Hülsmann, “rejected economic ‘theory’ altogether.”

The Mengerians, in contrast, argued for methodological individualism and the scientific validity of economic law. The collectivist Germans labeled their opponents the “Austrian school” as a put-down. It was like calling Menger and company the “backwater school” of economic thought.

“Austrian,” in our context, is a reclaimed word.

But more important, modern Austrian economics is not the dogmatic ideology that Wasserman describes. In his blog post, he provides no actual information about the work being done by the dozens of active Austrian economists in academia, with tenured positions at colleges and universities whose names are recognizable.

He tells his readers nothing about the books they have published with top university presses. He does not mention that they have published in top peer-reviewed journals in the economics discipline, as well as in philosophy and political science, or that the Society for the Development of Austrian Economics consistently packs meeting rooms at the Southern Economic Association meetings.

Have all of these university presses, top journals, and long-standing professional societies, not to mention tenure committees at dozens of universities, simply lost their collective minds and allowed themselves to be snookered by an ideological sleeper cell?

Or perhaps in his zeal to score ideological points of his own, Wasserman chose to take his understanding of Austrian economics from those who consume it on the Internet and elsewhere rather than doing the hard work of finding out what professional economists associated with the school are producing. Full of confirmation bias, he found what he “knew” was out there, and he ends up offering a caricature of the robust intellectual movement that is the contemporary version of the school.

The modern Austrian school, which has now returned to the Continent and spread across the globe after decades in America, is not the dogmatic monolith Wasserman contends. The school is alive with both internal debates about its methodology and theoretical propositions and debates about its relationship to the rest of the economics discipline, not to mention the size of the state.

Modern Austrian economists are constantly finding new ideas to mix in with the work of Menger, Böhm-Bawerk, Mises, and Hayek. The most interesting work done by Austrians right now is bringing in insights from Nobelists like James Buchanan, Elinor Ostrom, and Vernon Smith, and letting those marinate with their long-standing intellectual tradition. That is hardly the behavior of a “dogmatic political program,” but is rather a sign of precisely the robust intellectual tradition that has been at the core of Austrian economics from Menger onward.

That said, Wasserman is right to suggest that economic science is not the same thing as political philosophy — and it’s true that many self-described Austrians aren’t always careful to communicate the distinction. Again, Wasserman could have seen this point made by more thoughtful Austrians if he had gone to a basic academic source like the Concise Encyclopedia of Economics and read the entry on the Austrian school of economics.

Even a little bit of actual research motivated by actual curiosity about what contemporary professional economists working in the Austrian tradition are doing would have given Wasserman a very different picture of modern Austrian economics. That more accurate picture is one very much consistent with our Viennese predecessors.

To suggest that we do a disservice to our tradition — or worse, that we have appropriated a history that doesn’t belong to us — is to malign not just modern Austrians but also the Austrian-born antecedents within our tradition.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

8 Goofs in Jonathan Gruber’s Health Care Reform Book

This Obamacare architect’s propaganda piece is a comic of errors by MATT PALUMBO:

In one of life’s bitter ironies, I recently found a book by Jonathan Gruber in the bin of a bookstore’s going-out-of-business sale. It’s called Health Care Reform: What It Is, Why It’s Necessary, How It Works. Interestingly, the book is a comic, which made it a quick read. It’s just the sort of thing that omniscient academics write to persuade ordinary people that their big plans are worth pursuing.

Health Care Reform: What It Is, Why It’s Necessary, How It Works

In case you’ve forgotten — and to compound the irony — Gruber is the Obamacare architect who received negative media attention recently for some controversial comments about the stupidity of the average American voter. In Health Care Reform, Gruber focuses mainly on two topics: an attempted diagnosis of the problems with the American health care system, and how the Affordable Care Act (the ACA, or Obamacare) will solve them. I could write a PhD thesis on the myriad fallacies, half-truths, and myths propounded throughout the book. But instead, let’s explore eight of Gruber’s major errors.

Error 1: The mandate forcing individuals to buy health insurance is just like forcing people to buy car insurance, which nobody questions.

This is a disanalogy — and an important one. A person has to purchase car insurance only if he or she gets a car. The individual health insurance mandate forces one to purchase health insurance no matter what. Moreover, what all states but three require for cars is liability insurance, which covers accidents that cause property damage and/or bodily injury. Technically speaking, you’re only required to have insurance to cover damages you might impose on others. If an accident is my fault, liability insurance covers the other individual’s expenses, not my own, and vice versa.

By contrast, if the other driver and I each had collision insurance, we would both be covered for vehicle damage regardless of who was at fault. If collision insurance were mandated, the comparison to health insurance might be apt, because, as with health insurance, collision covers damage to oneself. But no states require collision insurance.

Gruber wants to compare health insurance to car insurance primarily because (1) he wants you to find the mandate unobjectionable, and (2) he wants you to think of the young uninsured (those out of the risk pool) as being sort of like uninsured drivers — people who impose costs on others due to accidents.

But not only is the comparison inapt, Gruber’s real goal is to transfer resources from those least likely to need care (younger, poorer people) to those most likely to need care (older, richer people). The only way mandating health insurance could be like mandating liability car insurance is that it would prevent the uninsured from shifting the costs of emergency care onto others, a cost shift that federal law currently enables. We’ll discuss that as a separate error, next.

Error 2: The emergency room loophole is responsible for increases in health insurance premiums.

In 1986, Reagan signed the Emergency Medical Treatment and Active Labor Act, one provision of which was that hospitals couldn’t deny emergency care to anyone, regardless of ability to pay. This act created the “emergency room loophole,” which allows many uninsured individuals to receive care without paying.

The emergency room loophole does, indeed, increase premiums. There is no free lunch. The uninsured who use emergency rooms can’t pay the bills, and the costs are thus passed on to the insured. So why do I consider this point an error? Because Gruber overstates its role in increasing premiums. “Ever wonder why your insurance premiums keep going up?” he asks rhetorically, as if this loophole is among the primary reasons for premium inflation.

The reality is, spending on emergency rooms (for both the uninsured and the insured) only accounts for roughly 2 percent of all health care spending. Claiming that health insurance premiums keep rising due to something that accounts for 2 percent of health care expenses is like attributing the high price of Starbucks drinks to the cost of their paper cups.

Error 3: Medical bills are the No. 1 cause of individual bankruptcies.

Gruber doesn’t include a single reference in the book, so it’s hard to know where he’s getting his information. Those lamenting the problem of medical bankruptcy almost always rely on a 2007 study conducted by David Himmelstein, Elizabeth Warren, and two other researchers. The authors offered the shocking conclusion that 62 percent of all bankruptcies are due to medical costs.

But in the same study, the authors also claimed that 78 percent of those who went bankrupt actually had insurance, so it would be strange for Gruber to claim the ACA would solve this problem. While it would be unfair to conclude definitively that Gruber relied on this study for his uncited claims, it is one of the only studies I am aware of that could support his claim.

More troublingly, perhaps, a bankruptcy study by the Department of Justice — which had a sample size five times larger than Himmelstein and Warren’s study — found that 54 percent of bankruptcies have no medical debt, and 90 percent have debt under $5,000. A handful of studies that contradict Himmelstein and Warren’s findings include studies by Aparna Mathur at the American Enterprise Institute; David Dranove and Michael Millenson of Northwestern University; Scott Fay, Erik Hurst, and Michelle White (at the universities of Florida, Chicago, and San Diego, respectively); and David Gross of Compass Lexecon and Nicholas Souleles of the University of Pennsylvania.

Why are Himmelstein and Warren’s findings so radically different? Aside from the fact that their study was funded by an organization called Physicians for a National Health Program, the study was incredibly liberal about what it defined as a medical bankruptcy. The study considered any bankruptcy with any amount of medical debt as a medical bankruptcy. Declare bankruptcy with $100,000 in credit card debt and $5 in medical debt? That’s a medical bankruptcy, of course. In fact, only 27 percent of those surveyed in the study had unreimbursed medical debt exceeding $1,000 in the two years prior to declaring bankruptcy.

David Dranove and Michael L. Millenson at the Kellogg School of Management reexamined the Himmelstein and Warren study and could only find a causal relationship between medical bills and bankruptcy in 17 percent of the cases surveyed. By contrast, in Canada’s socialized medical system, the percentage of bankruptcies due to medical expenses is estimated at between 7.1 percent and 14.3 percent. One wonders if the Himmelstein and Warren study was designed to generate a narrative that self-insurance (going uninsured) causes widespread bankruptcy.

Error 4: 20,000 people die each year because they don’t have the insurance to pay for treatment.

If the study this estimate was based on were a person, it could legally buy a beer at a bar. Twenty-one years ago, the American Medical Association released a study estimating the mortality rate of the uninsured to be 25 percent higher than that of the insured. The 20,000 figure is thus obtained by taking the number of uninsured and extrapolating how many of them will die in a given year, given that they are 25 percent more likely to die than an insured person.
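
To see how that kind of extrapolation works, here is a minimal sketch in Python. The inputs are hypothetical placeholders chosen purely for illustration (neither the AMA study’s figures nor Gruber’s underlying numbers appear in the book); only the 25 percent excess-mortality assumption comes from the text above.

    # Hypothetical illustration of the extrapolation described above.
    # These inputs are placeholders, not figures from the AMA study or Gruber's book;
    # only the 25% excess-mortality assumption comes from the text.
    uninsured_population = 45_000_000  # hypothetical count of uninsured Americans
    baseline_mortality = 0.002         # hypothetical annual death rate if insured
    excess_risk = 0.25                 # uninsured assumed 25% more likely to die

    deaths_if_insured = uninsured_population * baseline_mortality
    deaths_uninsured = uninsured_population * baseline_mortality * (1 + excess_risk)

    # Deaths attributed to lack of insurance are the difference between the two.
    excess_deaths = deaths_uninsured - deaths_if_insured
    print(f"Excess deaths attributed to lack of insurance: {excess_deaths:,.0f}")
    # With these made-up inputs: 45,000,000 * 0.002 * 0.25 = 22,500

As the rest of this section argues, both the 25 percent figure and the choice of inputs do a great deal of work in producing the headline number.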

Even assuming that the 25 percent statistic holds true today, not all insurance is equal. As Gruber notes on page 74 of his book, the ACA is the biggest expansion of public insurance since the creation of Medicare and Medicaid in 1965, as 11 million Americans will be added to Medicaid because of the ACA. So how does the health of the uninsured compare with those on Medicaid? Quite similarly. As indicated by the results from a two-year study in Oregon that looked at the health outcomes of previously uninsured individuals who gained access to Medicaid, Medicaid “generated no significant improvement in measured physical health outcomes.” Medicaid is more of a financial cushion than anything else.

So with our faith in the AMA study intact, all that would happen is a shift in deaths from the “uninsured” to the “publicly insured.” But the figure is still dubious at best. Those who are uninsured could also suffer from various mortality-increasing traits that the insured lack. As Megan McArdle elaborates on these lurking third variables,

Some of the differences we know about: the uninsured are poorer, more likely to be unemployed or marginally employed, and to be single, and to be immigrants, and so forth. And being poor, and unemployed, and from another country, are all themselves correlated with dying sooner.

Error 5: The largest uninsured group is the working poor.

Before Obamacare, had you ever heard that there are 45 million uninsured Americans? It’s baloney. In 2006, 17 million of the uninsured had incomes above $50,000 a year, and eight million of those earned more than $75,000 a year. According to one estimate from 2009, between 12 million and 14 million were eligible for government assistance but simply hadn’t signed up. Another estimate from the same source notes that between 9 million and 10 million of the uninsured are not American citizens. According to the Centers for Disease Control and Prevention, slightly fewer than 8 million of the uninsured are aged 18–24, the group that requires the least amount of medical care and has an average annual income of slightly more than $30,000.

Thus, the largest group of uninsured is not the working poor. It’s the middle class, upper middle class, illegal immigrants, and the young. The working poor who are uninsured are often eligible for assistance but don’t take advantage of it. I recognize that some of these numbers may seem somewhat outdated (the sources for all of them can be found here), but remember: we’re taking account of the erroneous ways Gruber and Obamacare advocates sold the ACA to “stupid” Americans.

Error 6: The ACA will have no impact on premiums in the short term, according to the CBO.

Interesting that there’s no mention of what will happen in the long run. Regardless, not only have there already been premium increases, but one widely reported consequence of the ACA has been increases in deductibles. If I told you that I could offer you an insurance plan for a dollar a year, it would seem like a great deal. If I offered you a plan for a dollar a year with a $1 million deductible, you might not think it’s such a great deal.

A report from PricewaterhouseCoopers’ Health Research Institute found that the average cost of a plan sold on the ACA’s exchanges was 4 percent less than the average for an employer-provided plan with similar benefits ($5,844 vs. $6,119), but the deductibles for the ACA plans were 42 percent higher ($5,081 vs. $3,589). The ACA is thus able to swap one form of sticker shock (high premiums) for another (high deductibles). Let us not forget that the ACA exchanges receive federal subsidies. Someone has to pay for those, too.
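
As a quick check, the percentages can be recomputed directly from the dollar figures quoted above; the short Python snippet below does nothing more than that arithmetic. The premium gap computed this way comes out closer to 4.5 percent than the reported 4 percent, presumably because of rounding or slightly different underlying averages in the report.

    # Recompute the premium and deductible gaps from the dollar figures quoted above.
    exchange_premium, employer_premium = 5844, 6119        # average annual premiums ($)
    exchange_deductible, employer_deductible = 5081, 3589  # average deductibles ($)

    premium_gap = (employer_premium - exchange_premium) / employer_premium
    deductible_gap = (exchange_deductible - employer_deductible) / employer_deductible

    print(f"Exchange premiums are {premium_gap:.1%} lower than employer plans")        # about 4.5%
    print(f"Exchange deductibles are {deductible_gap:.1%} higher than employer plans")  # about 41.6%

In dollar terms, the premium saving is $275 a year while the deductible exposure rises by $1,492: the swap of one sticker shock for another described above.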

Error 7: A pay-for-performance model in health care would increase quality and reduce costs.

This proposal seems like common sense in theory, but it’s questionable in reality. Many conservatives and libertarians want a similar model for education, so some might be sympathetic to this aspect of Gruber’s proposal. But there is enormous difficulty in determining how we are to rank doctors.

People respond to incentives, but sometimes these incentives are perverse. Take the example of New York, which introduced a system of “scorecards” to rank cardiologists by the mortality rates of their patients who received coronary angioplasty, a procedure to treat heart disease. Doctors paid attention to their scorecards, and they obviously could increase their ratings by performing more effective surgeries. But as Charles Wheelan noted in his book Naked Statistics, there was another way to improve your scorecard: refuse surgeries on the sickest patients, or in other words, those most likely to die even with care. Wheelan cites a survey of cardiologists regarding the scorecards, where 83 percent stated that due to public mortality statistics, “some patients who might benefit from angioplasty might not receive the procedure.”

Error 8: The ACA “allows you to keep your current policy if you like it… even if it doesn’t meet minimum standards.”

What, does this guy think we’re stupid or something?

Mojitos in Havana?

Free movement of people and products will help liberate Cuba by ROBERT RAMSEY:

“This entire policy shift … is based on an illusion, on a lie — the lie and illusion that more commerce, more access to money and goods will translate to political freedom for the Cuban people.” — Senator Marco Rubio (R-FL)

I don’t know as much about Cuba as Senator Rubio does.  I am sure that his hatred of the Castro regime there is justified; in their attempts to produce a perfect and harmonious society in Cuba, they have perpetrated countless crimes against humanity — and against members of his own family. Indeed, I would consider him to be an expert on the sentiments of those opposed to Castro’s regime and its policies. But he is wrong in his belief that trade of any kind will simply result in the Castro regime becoming stronger and more entrenched.

Our current policy towards Cuba is this: cut them off completely on their island, don’t let them have any imports, and wait for the Castros to die. Then, hopefully, the Cuban citizens will rise up against their communist overlords, see that we Americans have Duck Dynasty and Taco Bell, then beg for us to come set up a government for them, or something along those lines.

We’ve been doing this for decades. There is no evidence whatsoever that this policy is working. However, there are quite a few examples of anti-US countries catching the capitalist bug and mellowing their position considerably, as well as beginning to protect human rights.  The greatest example is probably Vietnam.

Life in Vietnam in the decade following the Vietnam War was, by all accounts, horrifying. It’s a classic tale of a communist regime killing hundreds of thousands of its own citizens in an attempt to make a perfect society. Millions were displaced, and innumerable others died at sea attempting to flee in makeshift rafts. By 1986, however, the original leaders of the regime had either died or been replaced by reformers who instituted a policy of Doi Moi (“renovation”), which slowly began to introduce market-friendly reforms. The results have been astounding.

A picture of modern Vietnam: consistent GDP growth of around 5.5 percent for the past decade (unlike many of its neighbors, whose growth bounces up and down from year to year); unemployment around 2 percent; low inflation; and a rapidly growing financial sector. Vietnam has been a member of the World Trade Organization since 2007, and entrepreneurs have become one of the most powerful forces within the country. Quality of life has increased dramatically, and all the trappings of a modern economy can be found throughout most of the country.

Relations with the United States have improved dramatically as well: the United States is its primary trading partner. Tourism has exploded: last year Vietnam saw 6.8 million visitors, and that number shows no signs of shrinking.

Civil rights have developed to a degree, and while the country is still run much as China is, with a single socialist party and the danger of being arrested if one speaks out too much, gone are the days of mass executions. Progress in this area is thus slow, but it’s steady.

There’s no guarantee the same thing will happen in Cuba, but a little capitalism goes a long way.  It won’t be long before American tourists flock to Cuba; it’s an hour’s flight from Miami and has been recognized by Americans for well over a century now as an island destination.

Fat American tourists bring fat American wallets, and whether the Castros like it or not, a thriving economy will spring up around tourism. Even if US policy liberalization stops with ending the ban on travel — even, that is, if the foolish embargo isn’t about to be lifted — change will come to Cuba. And a freer market is going to bring it.

ABOUT ROBERT RAMSEY

Robert Ramsey is the website curator at FEE. He loves cooking, writing, and hacking in his spare time.