
The New Paganism? The Case against Pope Francis’s Green Encyclical by Max Borders

Paganism as a distinct and separate religion may perhaps be said to have died, although, driven out of the cities, it found refuge in the countryside, where it lingered long — and whence, indeed, its very name is derived. In a very real sense, however, it never died at all. It was only transformed and absorbed into Christianity. – James Westfall Thompson, An Introduction to Medieval Europe

In 2003, science-fiction writer Michael Crichton warned a San Francisco audience about the sacralization of the environment. Drawing an analogy between religion and environmentalism, Crichton said:

There’s an initial Eden, a paradise, a state of grace and unity with nature, there’s a fall from grace into a state of pollution as a result of eating from the tree of knowledge, and as a result of our actions there is a judgment day coming for us all.

We are all energy sinners, doomed to die, unless we seek salvation, which is now called sustainability. Sustainability is salvation in the church of the environment. Just as organic food is its communion, that pesticide-free wafer that the right people with the right beliefs imbibe.

This analogy between religion and environmentalism is no longer a mere analogy.

Pope Francis, the highest authority in the Catholic Church — to whom many faithful look for spiritual guidance — has now fused church doctrine with environmental doctrine.

Let’s consider pieces of his recently released Encyclical Letter. One is reminded of a history in which the ideas of paganism (including the worship of nature) were incorporated into the growing medieval Church.

Excerpts from Pope Francis are shown in italics.



This sister protests the evil that we provoke, because of the irresponsible use and abuse of the goods that God has placed in her. We grew up thinking that we were its owners and rulers, allowed to plunder it.

Notice how Pope Francis turns the earth into a person. Sister. Mother. This kind of anthropomorphic trope is designed to make you think that, by virtue of driving your car, you’re also smacking your sibling. We’ve gone from “dominion over the animals and crawling things” to “plundering” our sister.

The violence that exists in the human heart wounded by sin is also manifested in the symptoms of the disease we feel in soil, water, air and in the living things. Therefore, among the most abandoned and ill treated poor we find our oppressed and devastated Earth, which “moans and suffers the pains of childbirth” [Romans 8:22].

First, if the state of the soil, water and air and living things is indeed symptomatic of our violent, sinful hearts, then the good news is that sin is on the decline. On every dimension the Pope names, the symptoms of environmental harm are getting better all the time — at least in our decadent capitalist country.

Do not take it on faith: here are the data.

There are forms of pollution which affect people every day. The exposure to air pollutants produces a large spectrum of health effects, in particular on the most poor, and causes millions of premature deaths.

This will always be true to some degree, of course, but it’s less true than at any time in human history. Pope Francis fails to acknowledge the tremendous gains humanity has made. For example, human life expectancy in the Paleolithic period (call this “Eden”) was 33 years. Life expectancy in the Neolithic period was 20 years. Globally, life expectancy is now more than 68 years, and in the West, it is passing 79 years.

Yes, there is pollution, and, yes, the poor are affected by it. But the reason why the poor are affected most by air pollution is because they’re poor — and because they don’t have access to fossil fuel energy. Pope Francis never bothers to draw the connection between wealth and health because he thinks of both production and consumption as sinful. Brad Plumer writes at Vox,

About 3 billion people around the world — mostly in Africa and Asia, and mostly very poor — still cook and heat their homes by burning coal, charcoal, dung, wood, or plant residue in their homes. These homes often have poor ventilation, and the smoke can cause all sorts of respiratory diseases.

The wealthy people of the West, including Pope Francis, don’t suffer from this problem. That’s because liberal capitalist countries — i.e., those countries who “plunder” their sister earth — do not suffer from energy poverty. They do not suffer from inhaling fumes and particulate matter from burning dung because they are “sinful,” because they are capitalist.

See the problem? The Pope wants to have it both ways. He has confused the disease (unhealthy indoor air pollution) with the cure (cheap, clean, abundant and mass-produced energy from fossil fuels).

Add to that the pollution that affects all, caused by transportation, by industrial fumes, by the discharge of substances which contribute to the acidification of soil and water, by fertilizers, insecticides, fungicides, herbicides and toxic pesticides in general. The technology, which, connected to finance, claims to be the only solution to these problems, in fact is not capable of seeing the mystery of the multiple relationships which exist between things, and because of this, sometimes solves a problem by creating another.

It is strange to read admonitions from someone about the “multiple relationships that exist between things,” only to see him ignore those relationships in the same paragraph. Yes, humans often create problems by solving others, but that doesn’t mean we shouldn’t solve the problems. It just means we should solve the big problems and then work on the smaller ones.

Solving problems even as we discover different problems is an inherent part of the human condition. Our creativity and innovation and struggle to overcome the hand nature has dealt us is what makes us unique as a species.

Perhaps this is, for Pope Francis, some sort of Green Original Sin: “Thou shalt just deal with it.” But to the rest of us, it is the means by which we live happier, more comfortable lives here under the firmament.

The Earth, our home, seems to turn more and more into a huge garbage dump. In many places on the planet, the elderly remember with nostalgia the landscapes of the past, which now appear to be submerged in junk.

If you get your understanding of waste management and the environment from the movie Wall-E, then you might have the impression that we’re burying our sister in garbage. But as the guys over at EconPop have pointed out, land used for waste management is also governed by laws of supply and demand — which means entrepreneurs and innovators are finding better and less expensive ways to reuse, reduce, recycle, and manage our waste.

The industrial waste as well as the chemicals used in cities and fields can produce an effect of bio-accumulation in the bodies of the inhabitants of neighboring areas, which occurs even when the amount of a toxic element in a given place is low. Many times one takes action only when these have produced irreversible effects on people’s health.

People, on net, are living longer and healthier than they ever have in the history of our species. What evidence does the Holy Father have that irreversible effects on people’s health rise to the level of an emergency that demands treatment in a papal encyclical? And why focus on the costs of “chemicals” without a single mention of their overwhelming human benefits? Indeed, which chemicals? This kind of sloppy thinking is rather unbecoming of someone who is (we are constantly reminded) a trained chemist.

Certain substances can have health effects, but so can failing to produce the life-enhancing goods in the first place. The answer is not to beg forgiveness for using soaps and plastics (or whatever), but to develop the institutions that prevent people and companies from imposing harmful costs onto others without taking responsibility for it.

The key is to consider the trade-offs that we will face no matter what, not to condemn and banish “impure” and unnatural substances from our lives.

These issues are intimately linked to the culture of waste, affecting so much the human beings left behind when the things turn quickly into trash.

Now we’re getting somewhere. This is where Pope Francis would like to add consumerism to production on the list of environmentally deadly sins.

Let us realize, for example, that most of the paper that is produced is thrown away and not recycled.

Heaven forfend! So would Pope Francis have us burn fossil fuels to go around and collect processed pulp? Is he unaware that demand for paper is what drives the supply of new trees? We aren’t running out of trees because we throw away paper. The Pope’s plan sounds like it could have been hatched in Berkeley, California, instead of Vatican City. And yet worlds have collided.

Michael Munger puts matters a little differently:

Mandatory recycling, by definition, takes material that would not be recycled voluntarily, diverts it from the waste stream, and handles it several times before using it again in a way that wastes resources.

The only explanation for this behavior that I can think of is a religious ceremony, a sacrifice of resources as a form of worship. I have no problem if people want to do that. As religions go, it is fairly benign. But requiring that religious sacrifice of resources is a violation of the constitutional separation of church and state.

Well, Professor Munger, this is the Pope we’re talking about.

We find it hard to admit that the operation of natural ecosystems is exemplary: plants synthesize nutrients that feed the herbivores; these in turn feed the carnivores, which provide a lot of organic waste, which give rise to a new generation of plants. In contrast, the industrial system, at the end of its cycle of production and consumption, has not developed the ability to absorb and reuse waste and slag.

Where is the evidence for this? These are matters of faith, indeed. All this time I thought the industrial system did have the ability to absorb and reuse waste: It’s called the system of prices, property, and profit/loss. The problem is not that such a “recycling” system doesn’t exist; it’s that corruption and government distort the system of property, prices, and profit/loss so that our economic ecosystem doesn’t operate as it should.

Indeed, when you have the Pope suggesting we burn gas to save glass, you have to wonder why the industrial system is so messed up. A system that “requires us to limit the use of non-renewable resources, to moderate consumption, to maximize the efficiency of the exploitation, to reuse and to recycle,” is called the market. And where it doesn’t exist is where you’ll find the worst instances of corruption and environmental degradation.

Then, of course, there’s climate change. In the interests of brevity I won’t quote the whole thing. But here’s the punchline, which might have been plucked straight from the IPCC Summary for Policymakers:

Climate change is a global problem with serious environmental, social, economic, distribution and policy implications, and make up one of the main current challenges for humanity. The heaviest impacts will probably fall in the coming decades upon developing countries.

This might be true. What the Holy Father fails to appreciate is that the heaviest impacts of policies designed to mitigate climate change will definitely fall upon developing countries. (That is, if the developing countries swear off cheap energy and embrace any sort of global climate treaty. If history is a guide, they most certainly will not.)

Meanwhile, the biggest benefits of burning more carbon-based fossil fuels will accrue to the poorest billions on earth. The Pope should mention that if he really has their interests at heart or in mind.

But many symptoms indicate that these effects could get worse if we continue the current patterns of production and consumption.

“Patterns of production and consumption”? This is a euphemism for wealth creation. What is wealth except production and consumption of resources to further human need and desire?

His suggested cure for our dangerous patterns of wealth creation, of course, is good ole demand-side management. Wiser, more enlightened minds (like his, he hopes) will let you know which light bulbs to buy, what sort of car to drive, and which insolvent solar company they’ll “invest” your money in. You can even buy papal indulgences in the form of carbon credits. As the late Alexander Cockburn wrote,

The modern trade is as fantastical as the medieval one. … Devoid of any sustaining scientific basis, carbon trafficking is powered by guilt, credulity, cynicism and greed, just like the old indulgences, though at least the latter produced beautiful monuments.

But the most important thing to realize here is that the “current” patterns of production and consumption are never current. The earthquakes of innovation and gales of creative destruction blow through any such observed patterns. The price system, with its lightning-quick information distribution mechanism, is far, far superior to any elites or energy cronies. And technological innovation, though we can’t predict just how, will likely someday take us as far from today’s energy status quo as we have already moved from tallow, whale oil, and horse-drawn carriages.

The Pope disagrees with our rose-tinted techno-optimism, saying “some maintain at all costs the myth of progress and say that the ecological problems will be solved simply by new technical applications.”

The Pope sits on his golden throne and looks over the vast expanse of time and space — from hunter-gatherers running mammoths off cliffs to Americans running Teslas off electric power, from the USA of 1776 to the USA of 2015, from England before and after the Industrial Revolution, from Hong Kong and Hiroshima in 1945 to their glorious present — and sneers: progress is a myth, environmental problems can’t be fixed through innovation, production is destroying the earth, consumption is original sin.

Innovation is the wellspring of all progress. Policies to stop or undo innovation in energy, chemistry, industry, farming, and genetics are a way to put humanity in a bell jar, at best. At worst they will put some of us in the dark and others in early graves. They are truly fatal conceits.

And yet, the Pope has faith in policymakers to know just which year we should have gotten off the train of innovation. William F. Buckley famously said conservatives “stand athwart history, yelling ‘Stop!’” Greens are similar, except they’re yelling “Go back!”

Therefore it has become urgent and compelling to develop policies so that in the coming years the emission of carbon dioxide and other highly polluting gases is reduced drastically, for instance by replacing fossil fuels and by developing renewable energy sources.

I reflect again on the notion that this effort might be just another way of the Church embracing and extending a competitor religion. Then again, Pope Francis so often shows that he is a true and faithful green planner. In an unholy alliance with those who see the strategic benefit in absorbing environmentalism, the Holy Father has found the perfect way to restore the power of the Church over politics, economics, culture, and the state to its former glory.


Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.


Daniel Bier

Daniel Bier is the editor of Anything Peaceful. He writes on issues relating to science, civil liberties, and economic freedom.

Against Eco-pessimism: Half a Century of False Bad News by Matt Ridley

Pope Francis’s new encyclical on the environment (Laudato Si’) warns of the coming environmental catastrophe (“unprecedented destruction of ecosystems, with serious consequences for all of us”). It’s the latest entry in a long literary tradition of environmental doomsday warnings.

In contrast, Matt Ridley, bestselling author of Genome, The Agile Gene, and The Rational Optimist, who also received the 2012 Julian Simon Memorial Award from the Competitive Enterprise Institute, says this outlook has proven wrong time and again. This is the full text of his acceptance speech. Video is embedded below.

It is now 32 years, nearly a third of a century, since Julian Simon nailed his theses to the door of the eco-pessimist church by publishing his famous article in Science magazine: “Resources, Population, Environment: An Oversupply of False Bad News.”

It is also 40 years since The Limits to Growth and 50 years since Silent Spring, plenty long enough to reflect on whether the world has conformed to Malthusian pessimism or Simonian optimism.

Before I go on, I want to remind you just how viciously Simon was attacked for saying that he thought the bad news was being exaggerated and the good news downplayed.

Verbally at least Simon’s treatment was every bit as rough as Martin Luther’s. Simon was called an imbecile, a moron, silly, ignorant, a flat-earther, a member of the far right, a Marxist.

“Could the editors have found someone to review Simon’s manuscript who had to take off his shoes to count to 20?” said Paul Ehrlich.

Ehrlich together with John Holdren then launched a blistering critique, accusing Simon of lying about electricity prices having fallen. It turned out they were basing their criticism on a typo in a table, as Simon discovered by calling the table’s author. To which Ehrlich replied: “what scientist would phone the author of a standard source to make sure there were no typos in a series of numbers?”

Answer: one who likes to get his facts right.

Yet for all the invective, his critics have never laid a glove on Julian Simon then or later. I cannot think of a single significant fact, data point or even prediction where he was eventually proved badly wrong. There may be a few trivia that went wrong, but the big things are all right. Read that 1980 article again today and you will see what I mean.

I want to draw a few lessons from Julian Simon’s battle with the Malthusian minotaur, and from my own foolhardy decision to follow in his footsteps – and those of Bjorn Lomborg, Ron Bailey, Indur Goklany, Ian Murray, Myron Ebell and others – into the labyrinth a couple of decades later.

Consider the words of the publisher’s summary of The Limits to Growth: “Will this be the world that your grandchildren will thank you for? A world where industrial production has sunk to zero. Where population has suffered a catastrophic decline. Where the air, sea, and land are polluted beyond redemption. Where civilization is a distant memory. This is the world that the computer forecasts.”

Again and again Simon was right and his critics were wrong.

Would it not be nice if just one of those people who called him names piped up and admitted it? We optimists have won every intellectual argument and yet we have made no difference at all. My daughter’s textbooks trot out the same old Malthusian dirge as mine did.

What makes it so hard to get the message across?

I think it boils down to five adjectives: ahistorical, finite, static, vested and complacent. The eco-pessimist view ignores history, misunderstands finiteness, thinks statically, has a vested interest in doom and is complacent about innovation.

People have very short memories. They are not just ignoring, but unaware of, the poor track record of eco-pessimists. For me, the fact that each of the scares I mentioned above was taken very seriously at the time, attracting the solemn endorsement of the great and the good, should prompt real skepticism about global warming claims today.

That’s what motivated me to start asking to see the actual evidence about climate change. When I did so I could not find one piece of data – as opposed to a model – that shows either unprecedented change or change that is anywhere close to causing real harm.

Yet when I made this point to a climate scientist recently, he promptly and cheerily said that “the fact that people have been wrong before does not make them wrong this time,” as if this somehow settled the matter for good.

Second, it is enormously hard for people to grasp Simon’s argument that “Incredible as it may seem at first, the term ‘finite’ is not only inappropriate but downright misleading in the context of natural resources.”

He went on: “Because we find new lodes, invent better production methods and discover new substitutes, the ultimate constraint upon our capacity to enjoy unlimited raw materials at acceptable prices is knowledge.” This is a profoundly counterintuitive point.

Yet was there ever a better demonstration of this truth than the shale gas revolution? Shale gas was always there; but what made it a resource, as opposed to not a resource, was knowledge – the practical know-how developed by George Mitchell in Texas. This has transformed the energy picture of the world.

Besides, as I have noted elsewhere, it’s the renewable – infinite – resources that have a habit of running out: whales, white pine forests, buffalo. It’s a startling fact, but no non-renewable resource has yet come close to exhaustion, whereas lots of renewable ones have.

And by the way, have you noticed something about fossil fuels – we are the only creatures that use them. What this means is that when you use oil, coal or gas, you are not competing with other species. When you use timber, or crops or tide, or hydro or even wind, you are.

There is absolutely no doubt that the world’s policy of encouraging the use of bio-energy, whether in the form of timber or ethanol, is bad for wildlife – it competes with wildlife for land, or wood or food.

Imagine a world in which we relied on crops and wood for all our energy and then along comes somebody and says here’s this stuff underground that we can use instead, so we don’t have to steal the biosphere’s lunch.

Imagine no more. That’s precisely what did happen in the industrial revolution.

Third, the Malthusian view is fundamentally static. Julian Simon’s view is fundamentally dynamic. Again and again when I argue with greens I find that they simply do not grasp the reflexive nature of the world, the way in which prices cause the substitution of resources or the dynamic properties of ecosystems – the word equilibrium has no place in ecology.

Take malaria. The eco-pessimists insisted until recently that malaria must get worse in a warming 21st century world. But, as Paul Reiter kept telling them to no avail, this is nonsense. Malaria disappeared from North America, Russia and Europe and retreated dramatically in South America, Asia and Africa in the twentieth century even as the world warmed.

That’s not because the world got less congenial to mosquitoes. It’s because we moved indoors and drained the swamps and used DDT and malaria medications and so on. Human beings are a moving target. They adapt.

But, my fourth point, another reason Simon’s argument fell on stony ground is that so many people had and have a vested interest in doom. Though they hate to admit it, the environmental movement and the scientific community are vigorous, healthy, competitive, cut-throat, free markets in which corporate leviathans compete for donations, grants, subsidies and publicity. The best way of getting all three is to sound the alarm. If it bleeds it leads. Good news is no news.

Imagine how much money you would get if you put out an advert saying: “we now think climate change will be mild and slow, none the less please donate”. The sums concerned are truly staggering. Greenpeace and WWF, the General Motors and Exxon of the green movement, between them raise and spend a billion dollars a year globally. WWF spends $68m alone on educational propaganda. Frankly, Julian, Bjorn, Ron, Indur, Ian, Myron and I are spitting in the wind.

Yet, fifth, ironically, a further problem is complacency. The eco-pessimists are the Panglossians these days, for it is they who think the world will be fine without developing new technologies. Let’s not adopt GM food – let’s stick with pesticides.

Was there ever a more complacent doctrine than the precautionary principle: don’t try anything new until you are sure it is safe? As if the world were perfect. It is we eco-optimists, ironically, who are acutely aware of how miserable this world still is and how much better we could make it – indeed how precariously dependent we are on still inventing ever more new technologies.

I had a good example of this recently debating a climate alarmist. He insisted that the risk from increasing carbon dioxide was acute and that therefore we needed to cut our emissions drastically, by 90 percent or so. In vain did I try to point out that cutting emissions by 90 percent might do more harm to the poor and the rain forest than anything the emissions themselves might do. That we are taking chemotherapy for a cold, putting a tourniquet round our neck to stop a nosebleed.

My old employer, the Economist, is fond of a version of Pascal’s wager – namely that however small the risk of catastrophic climate change, the impact could be so huge that almost any cost is worth bearing to avert it. I have been trying to persuade them that the very same logic applies to emissions reduction.

However small the risk that emissions reduction will lead to planetary devastation, almost any price is worth paying to prevent that, including running the tiny risk that carbon emissions will destabilize the climate. Just look at Haiti to understand that getting rid of fossil fuels is a huge environmental risk.

That’s what I mean by complacency: complacently assuming that we can decarbonize the economy without severe ecological harm, complacently assuming that we can shut down world trade without starving the poor, that we can grow organic crops for seven billion people without destroying the rain forest.

Having paid homage to Julian Simon’s ideas, let me end by disagreeing with him on one thing. At least I think I am disagreeing with him, but I may be wrong.

He made the argument, which was extraordinary and repulsive to me when I first heard it as a young and orthodox eco-pessimist, that the more people in the world, the more invention. That people were brains as well as mouths, solutions as well as problems. Or as somebody once put it: why is the birth of a baby a cause for concern, while the birth of a calf is a cause for hope?

Now there is a version of this argument that – for some peculiar reason – is very popular among academics, namely that the more people there are, the greater the chance that one of them will be a genius, a scientific or technological Messiah.

Occasionally, Julian Simon sounds like he is in this camp. And if he were here today — and by Zeus, I wish he were — I would try to persuade him that this is not the point, that what counts is not how many people there are but how well they are communicating. I would tell him about the new evidence from Paleolithic Tasmania, from Mesolithic Europe, from the Neolithic Pacific, and from the internet today, that it’s trade and exchange that breeds innovation, through the meeting and mating of ideas.

That the lonely inspired genius is a myth, promulgated by Nobel prizes and the patent system. This means that stupid people are just as important as clever ones; that the collective intelligence that gives us incredible improvements in living standards depends on people’s ideas meeting and mating, more than on how many people there are. That’s why a little country like Athens or Genoa or Holland can suddenly lead the world. That’s why mobile telephony and the internet have no inventor, not even Al Gore.

Not surprisingly, academics don’t like this argument. They just can’t get their pointy heads around the idea that ordinary people drive innovation just by exchanging and specializing. I am sure Julian Simon got it, but I feel he was still flirting with the outlier theory instead.

The great human adventure has barely begun. The greenest thing we can do is innovate. The most sustainable thing we can do is change. The only limit is knowledge. Thank you Julian Simon for these insights.

2012 Julian L. Simon Memorial Award Dinner from CEI Video on Vimeo.

Anything Peaceful

Anything Peaceful is FEE’s new online ideas marketplace, hosting original and aggregate content from across the Web.

Create or Die: The World of Don Draper by Jeffrey A. Tucker

I was in an elevator and heard a conversation between a young worker in information technology and an old timer involved in heavy industry. They were talking professions. The young man told the old man that he was in digital marketing. You could feel the sense of incredulity in the small space, and (though he didn’t say it) I just knew what the old man was thinking: “Another fake job in the unsustainable Facebook economy.”

So it has always been. The belief that unless you are making stuff you are not really producing has been with us since the ancient world. Even Aristotle found retailing to be disgusting, and money lending even more so. After all, these people are not actually contributing to the physical store of wealth in society, so in what sense are they creating value? We read similar opinions every day.

Such views completely misconstrue the nature of wealth and the job of enterprise. “The characteristic feature of capitalism that distinguishes it from pre-capitalist methods of production,” writes Ludwig von Mises, “was its new principle of marketing.”

The consumers rule. The makers and sellers of products seek their approval. What influences the decision to buy are the ideas people hold. It thereby becomes incumbent on the sellers to explain, persuade, convince, and inspire. They can only do this with good ideas.

To contemplate the value of an idea, the potentially immense worth of a single product of the human mind, dreamed up from non-existence to the stage of realization, communicated in a way that causes people to change their minds even in the absence of any physical change to the world, is to come to terms with a realm in which matter and spirit meet.

Exploring this realm is where the television series Mad Men (2007-2015) truly excels. It is set in the early 1960s, a time when the modern advertising industry began to take on critical economic importance due to innovations in communications technology. This industry sought to move already-produced goods (and services) from warehouse shelves to become part of people’s lives.

The entire goal of the firm is to bring consumers to a decision to buy. The right messaging, well placed, can make the difference between a multi-million-dollar success and a complete flop. It’s all about entering and influencing the headspace of the consumers — the ultimate decision makers in a capitalist economy.

Time and again, an outsider asks what it is that these advertising executives actually make. They try to explain. They fail. It seems too elusive, or perhaps too fake.

Like any good drama, Mad Men avoids didacticism concerning its point of view. There is plenty of good and evil to go around in the firm and the industry.

But over time, the viewer begins to cheer on the success of these ad men (and women), particularly Don Draper, the series’ main protagonist. You can’t help but sympathize with him as he struggles to stay in a game in which the rules are always changing and the value systems of the mass of consumers are in constant flux.

For many people, Mad Men has been the first behind-the-scenes exposure to the world of advertising and the capitalistic machinery that manages it. The series is revealing in a historical sense, taking us through the systematic social, political, and economic upheaval of the 1960s. The characters are so well drawn that we actually come to believe we can psychologically deconstruct them one by one.

So let me try to deconstruct Don Draper, at least in a professional sense. He has one skill: creativity. And that creative skill has a test: profitability. In this sense, his creativity is different from a regular artist such as a painter or poet or musician. His one single goal is to generate ideas that sell product. There is a metric to reveal success or failure: It is the balance sheet. If the balance sheet responds, he has succeeded. If it does not, he has failed, and there are dozens of others ready to take his place to try their hands at idea creation.

Where do these marketing ideas come from? They are anything but automatic. If the answer to the question of marketing were obvious, his skills wouldn’t be needed. His job is to discover a message that rearranges the preference scales of possible consumers, which requires discerning the way that a mere physical product can most deeply meet human needs.

When he gets a new client, he extracts the most necessary known data: What is the product and what does it do? To whom is it most likely to be useful? Why would anyone want to obtain it through market exchange, giving up their property for someone else’s?

Once he has processed all known data, he turns his attention to what is unknown. How does this product directly benefit people in their daily lives? And how can it provide an even deeper benefit by making life better than it has been thus far?

The people who hire him do not know the answer. Draper does not know the answer either. He has to generate that answer from within his creative capacity. He will be tested on whether he gets it right, which is why he needs time to reflect. The answer for one product is not the same as another. Each case is unique. As he finds the best possible strategy, he also knows that there is no faking it: He either gets it right or he gets it wrong.

Every day, he faces this struggle to discover, see, codify, and pitch — to sell his idea to sell a product. He goes to bed each night with a profound sense of uncertainty. The answer is elusive. He looks through the glass darkly. Beyond the horizon of the present is the abyss of the future.

To cross it, he has to put himself in the shoes of countless people who know nothing about the product, peer into their hearts and souls, discern the inner workings of their minds, connect the results with a product, map out a memorable message, strategize on the right paths for conveying that message, and explain it all in a way that persuades those who have hired him that he is right, so that they become willing to take the risk.

When you consider the whole of the responsibility here, it is awesome. Most people can’t live in this constant state of not knowing today what is essential to know tomorrow. But Draper has learned to have confidence that his knowledge will be greater tomorrow than it is today. He has learned to put his faith and trust in an emergent process that operates within his mind.

Notice that there are two levels of challenge here, both within and without.

Externally, he must put himself in the mindset of a random and unknown consumer, potentially millions of them. He must be outward looking, one might even say public spirited. He has to discern the workings of the human spirit.

Internally, the challenge is just as great. He has this gray matter that has to generate something fresh, wonderful, and effective. He has to believe that the answer is in there somewhere; it just needs the right configuration of outside stimuli and careful reflection to shake it loose.

He must manage his life to maximize the chances that these external and internal forces will come together to reveal the answer he is seeking. He lives an edgy and sometimes horrible life. Why does he seem to disappoint and betray so many people? Why does he so often disappoint us with his antics, his insensitivities, his erratic wanderings? How can he appear to have such intense convictions in one setting and then blow them up again in a different setting later in the same day?

In the course of his life, Draper is cultivating his capacity for thinking and creating in the best way he knows how. The ideas have to emerge: where they come from and how they rise from the recesses of his brain is not completely known to him.

But this much he knows: living a static and ritualized existence does not do it. He must at all times be ready to destroy a previous mode of thought, no matter the cost that went into making it, and replace it with something completely new. In order to disrupt his own staid patterns of thought and open new ways of thinking, he seeks out change with new stimuli, risk, and even danger.

It’s the way the truly creative mind works — not through repeating what is known but by progressively discovering what has been unknown. In this task, past data is useful and interesting but also potentially distracting and even completely irrelevant in a world of ceaseless change. A plan based on known metrics alone is a recipe for total failure. The real source of value comes from understanding, anticipating, and acting on what is next. Even more value comes from actually creating what is next.

This method is not only Don’s own. It is also the source of progress in our world. Our lives are strictly divided into three experiences of time: the unchangeable data of the past, the tactile experiences of the present, and the darkened and mapless path of the future. The forward motion of time, from past to present to future, never stops. The job of the advertiser — or the creative artist or the entrepreneur or the manager of any firm — is to find the light switches that illuminate the best route to leave the unchangeable past, improve the unsatisfactory present, and pave the way to a more wonderful world of tomorrow.

Don Draper is seeking those switches. When he finds one, there is a moment of rejoicing but then the reality dawns. He must find another. Then another. He must create or die. And so it must be for as long as he pursues his career.

Draper is a very flawed figure. So are we all. So will always be the ideas and structures and institutions created by mortal beings. His drive to succeed seems to come at the expense of his own soul. His obsession with knowing the minds of others displaces the need to be honest with himself.

All of this is true, and well portrayed. But consider what is being criticized here. The problem with Draper, we are being told, is that he gives of himself too much. And perhaps he does. It’s a struggle everyone faces, no matter the institutional and professional setting.

Still, even given his flaws, we should not fail to observe the piety at work here. It is because of the daring and courageous will to think something new, to seek out the workings of the public mind, to live day-to-day with radical uncertainty, to dive into the crucible of profit-and-loss that we move ever further from the state of nature toward the promise and possibility of a flourishing society of prosperity and peace.

The advertisers and marketers are seeking to have a role in creating that future. Think of how you spend your time. Think of how you spend your money. There are infinite choices before us, but at any one moment, you can do only one thing.

Very often, as you reflect on your life and what you do, you will find that the goods and services that capture your attention have behind them a massive apparatus of genius, risk, and creativity, all constructed to convey an idea.

That is called marketing. It is devised by human beings, portrayed so beautifully, flaws and all, on Mad Men.

In a market economy, geniuses are gathered to care about the life and decisions of the common person. They regard us as valuable. This is a wonderful thing. We should be grateful for it. It all happens because of an idea — an idea that begins in one mind that can eventually teach the world to sing.

Is there value in that? Absolutely — even if it can’t be explained in an elevator pitch.


Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

EDITOR’S NOTE: This column originally appeared in Anything Peaceful.

AMC’s “Halt and Catch Fire” Is Capitalism’s Finest Hour by Keith Farrell

AMC’s Halt and Catch Fire is a brilliant achievement. The show is a vibrant look at the emerging personal computer industry in the early 1980s. But more than that, the show is about capitalism, creative destruction, and innovation.

While we all know the PC industry changed the world, the visionaries and creators who brought us into the information age faced uncertainty over what their efforts would yield. They risked everything to build new machines and to create shaky start-ups. Often they failed and lost all they had.

HCF has four main characters: Joe, a visionary and salesman; Cameron, an eccentric programming geek; Gordon, a misunderstood engineering genius; and Gordon’s wife, Donna, a brilliant but unappreciated housewife and engineer.

The show pits programmers, hardware engineers, investors, big businesses, corporate lawyers, venture capitalists, and competing start-ups against each other and, at times, shows them having to cooperate to overcome mutual problems. The result is innovation.

Lee Pace gives an award-worthy performance as Joe MacMillan. The son of a never-present IBM tycoon and a negligent, drug-addicted mother, Joe struggles with a host of mental and emotional problems. He’s a man with a brilliant mind and an amazing vision — but he has no computer knowledge or capabilities.

The series begins with his leaving a sales job at IBM in the hope of hijacking Cardiff Electric, a small Texas-based computer company, and launching it into the personal computing game.

As part of his scheme, he gets a low-level job at Cardiff where he recruits Gordon Clark, played by the equally talented Scoot McNairy. Enamored with Gordon’s prior writings on the potential for widespread personal computer use, Joe pleads with Gordon to reverse engineer an IBM-PC with him. The plot delves into the ethical ambiguities of intellectual property law as the two spend days reverse engineering the IBM BIOS.

While the show is fiction, it is inspired in part by the real-life events of Rod Canion, co-founder of Compaq. His book, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing, serves as a basis for many of the events in the show’s first season.

In 1981, when Canion and his cohorts set out to make a portable PC, the market was dominated by IBM. Because IBM had rushed their IBM-PC to market, the system was made up entirely of off-the-shelf components and other companies’ software.

As a result, it was possible to buy those same components and software and build what was known as an IBM “clone.” But these clones were only mostly compatible with IBM. While they could run DOS, they might or might not run other programs written for IBM-PCs.

Because IBM dominated the market, all the best software was being written for IBMs. Canion wanted to build a computer that was 100 percent IBM compatible but cheaper — and portable enough to move from desk to desk.

Canion said in an interview on the Internet History Podcast, “We didn’t want to copy their computer! We wanted to have access to the software that was written for their computer by other people.”

But in order to do that, he and his team had to reverse-engineer the IBM BIOS. They couldn’t just steal or copy the code because it was proprietary technology, but they could figure out what function the code executed and then write their own code to handle the same task.

Canion explains:

What our lawyers told us was that not only can you not use [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. … We had two software people. One guy read the code and generated the functional specifications.

So it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.

In Halt and Catch Fire, Joe uses this idea to push Cardiff into making its own PC by intentionally leaking to IBM that he and Gordon had indeed reverse engineered the BIOS. They recruit a young punk-rock programmer named Cameron Howe to write their own BIOS.

While Gordon, Cameron, and Joe all believe that they are the central piece of the plan, the truth is that they all need each other. They also need to get the bosses and investors at Cardiff on their side in order to succeed, which is hard to do after infuriating them. The show demonstrates that for an enterprise to succeed you need to have cooperation between people of varying skill sets and knowledge bases — and between capital and labor.

The series is an exploration of the chaos and creative destruction that goes into the process of innovation. The beginning of the first episode explains the show’s title:

HALT AND CATCH FIRE (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained.

The show takes this theme of racing for superiority to several levels: the characters, the industry, and finally the economy and the world as a whole.

As Gordon himself declares of the cut-throat environment in which computer innovation occurs, “It’s capitalism at its finest!” HCF depicts Randian heroes: businessmen, entrepreneurs, and creators fighting against all odds in a race to change the world.

Now into its second season, the show is exploring the beginnings of the internet, and Cameron is running her own start-up company, Mutiny. I could go on about the outstanding production quality, but the real novelty here is a show where capitalists, entrepreneurs, and titans of industry are regarded as heroic.

Halt and Catch Fire is a brilliant show, but it isn’t wildly popular. I fear it may soon be canceled, so be sure to check it out while it’s still around.


Keith Farrell

Keith Farrell is a freelance writer and political commentator.

Bed Bugs Are the New Plague by Jeffrey A. Tucker

It must have been pretty rotten to sleep in, say, 12th-century Europe. Your floor was dirt. Your mattress was made from hay or bean husks. The biggest drag of all must have been the bed bug problem. It’s not so fabulous to lie there asleep while thousands of ghastly critters gnaw on your flesh. You wake with rashes all over your body.

They heal gradually in the course of the day, but, at night, it starts all over again.

No, they don’t kill you. But they surely make life desperate and miserable. They know where you are. They sense the carbon dioxide. They are after your blood, so they can stay alive. No wonder some people have been driven to suicide.

It stands to reason that among the earliest priorities of civilized life was the total eradication of bed bugs. And we did it! Thanks to modern pesticides, most especially DDT, generations knew not the bed bug.

That is, at least in capitalist countries. I have a friend from Russia whose mother explained the difference between capitalism and socialism as summed up in bed bugs. In the 1950s, capitalist countries had eliminated them. The socialist world, by contrast, faced an epidemic.

But you know what? They are back with an amazing ferocity, right here in 21st century America.

There is a new book getting rave reviews and high sales: Infested: How the Bed Bug Infiltrated Our Bedrooms and Took Over the World.

You can attend Bed Bug University, which is “an intensive four day course that covers bed bug biology and behavior, treatment protocols and explores the unique legal challenges and business opportunities of bed bugs.”

You can browse the Bed Bug Registry, with dozens of reports coming in from around the country. You can call a local company that specializes in keeping them at bay.

Welcome to the post-DDT world in which fear of pesticides displaced fear of the thing that pesticides took away. Oh, how glorious it is to embrace nature and all its ways — until nature begins to feed on you in your sleep.

The various restrictions and bans from the 1970s have gradually brought back the nightmares that wonderful, effective, killer chemicals took away. Some people claim that today not even DDT works because the new strain of bed bug is stronger than ever.

Forget innovating with new pesticides: the restrictions are just too tight. There is not a single product at your local big-box hardware store that can deal with these bloodsuckers. And the products available online that more or less work, such as Malathion, are not approved for indoor use — and I know for sure that everyone obeys such rules!

In our current greeny ethos, people are suggesting “natural” methods such as: “take all of your laundry and bedding to the Laundromat and wash and dry it at high temperatures.”

Why not do it at home? Well, thanks to federal regulations, your water heater ships from the factory set to a maximum of 110 degrees, which is something like a luxurious bath for the bed bug. Add your detergent — which, by government decree, no longer has phosphates — and your wash turns into Mr. Bubble happy time for Mr. Bed Bug.

So you could stand over gigantic pots of boiling water in your kitchen, fishing bedding in and out, beating your mattresses outside with sticks, and otherwise sleeping in plastic bags, like they do in the new season of “Orange Is the New Black.” You know, like in prison. Or like in the 12th century.

No matter how modernized we become, no matter how many smartphones and tablets we acquire, we still have to deal with the whole problem of nature trying to eat us — in particular, its most wicked part, the man-eating insect. There is no app for that.

Google around on how many people die from mosquitoes, and you are immediately struck by the ghastly reality: these things are even more deadly than government. And that’s really saying something.

But somehow, starting in the late 1960s, we began to forget this. Capitalism achieved a wonderful thing, and we took it for granted. We banned the chemicals that saved us, and gradually came to prohibit the creation of more. We feared a “Silent Spring” but instead created a nation in which the noise we hear at night is of an army of bugs sinking their teeth into our flesh.

A little silence would be welcome.

So here we are: mystified, afraid to lie down and sleep, afraid to buy a sofa from Craigslist, boiling our sheets, living in fear of things we can’t see. It’s the Dark Ages again. It gets worse each year, especially during summer when the bed bugs leave their winter hibernation and gather en masse to become our true and living nightmare.

How bad does it have to get before we again unleash the creative forces of science and capitalism to restore a world that is livable for human beings?


Jeffrey A. Tucker


How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debates” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

A Shrine to a Socialist Demagogue by Lawrence W. Reed

MANAGUA, Nicaragua — It’s May 27, 2015. Driving south on First Avenue toward Masaya on a hot, late-spring day in the Nicaraguan capital, my eye caught an image in the distance. “That looks like Curly from The Three Stooges!” I thought. Nah, what would he be doing here? Nyuk. Nyuk.

As we approached, I suddenly realized it only resembled Curly. It was actually somebody considerably less funny. The statue was a garish, tasteless likeness of the late Venezuelan socialist strongman Hugo Chavez, surrounded by ugly, orange curlicues. I repressed the urge to gag as I stopped to take this photo:

Hugo Chavez shrine

This tribute to a man whose ceaseless demagoguery ruined his nation’s economy is the doing, of course, of Nicaraguan president Daniel Ortega and his party. Ortega, like Chavez, engineered constitutional changes that may make him effectively president for life. He has worshiped state power since the 1970s. He was a Cuban-trained Marxist and cofounder of the Frente Sandinista de Liberación Nacional, the Sandinistas. I visited the country five times in the 1980s to interview key political figures, and whenever I was there, Ortega was pushing government literacy programs; meanwhile, his government was harassing and shutting down the opposition press.

Back in the 1980s, Ortega relied heavily on subsidies from his Soviet and Cuban sponsors. But now that the Soviets are ancient history and the Cuban economy is on life support, he’s had to moderate. Nicaragua is a very poor country. Its per capita GDP is about a third of the world average, better than Yemen’s but not as deluxe as Uzbekistan’s. According to the 2015 Index of Economic Freedom, however, it’s ranked better than you might expect at 108th in the world. Seventy countries are actually less free.

Who do you think is ranked at the very bottom, at 176, 177, and 178?

None other than the workers’ paradises of Venezuela, Cuba, and North Korea.

If you want a glimpse of the current state of the Chavez/Maduro experiment in Venezuelan socialism, look no further than the relative scarcities of toilet paper (you’d better bring your own if you visit) and paper money (more abundant than ever at 510 percent inflation).

I asked my old friend Deroy Murdock, senior fellow with the Atlas Network, Fox News contributor, and keen observer of affairs in the Americas: How would you assess the legacy of the Venezuelan caudillo memorialized by Ortega’s regime in Nicaragua?

“Hugo Chavez arrived in Venezuela, determined to make his country a gleaming showcase of socialism, and renovate Cuba in the process,” Murdock said. “Now, Chavez is dead, Castro still lives, and both countries remain in dire straits. Chavez’s legacy is the enduring lesson that big government is bad, and huge government is even worse.”

Indeed. Seems pretty self-evident whether you look at the numbers from afar or walk the streets in person. Venezuela’s economy has been in free-fall for almost all of the past 15 years.

But there I was, gazing at a giant Hugo in Managua, a monument intended to say, “Way to go, man!” One wonders where an impoverished country gets the money or even the idea to construct such a hideous gargoyle.

Then I realized the answer: Ortega’s Nicaragua is run by socialists. And by typical socialist reasoning, you can be an architect of disaster but reckoned to be a “man of the people” just by claiming to be one.

If you produced the same results while advocating capitalism, you’d be reckoned a monster.


Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

A Simple Question for Minimum Wage Advocates by Donald J. Boudreaux

I will return in a later post to the topic of my previous post, namely, the validity or (as I see it) invalidity of the argument that proposes a tolerance of locally set minimum-wage rates if not of nationally or super-nationally set rates.

I state, however, here and again my conclusion: Legislating minimum wages – that is, enacting a policy of caging people who insist on entering voluntarily into employment contracts on terms that political elites find objectionable – is no more attractive or justified or likely to succeed at helping low-skilled workers if the particular caging policy in question is enacted locally than if it is enacted nationally or globally.

In this short post, I ask a simple question of all advocates of minimum wages:

If enforcement of minimum-wage policies were carried out in practice by policing low-skilled workers rather than employers – if these policies were enforced by police officers monitoring workers and fining those workers who agreed to work at hourly wages below the legislated minimum – would you still support minimum wages?

Would you be good with police officers arresting those workers who, preferring to remain employed at sub-minimum wages rather than risk losing their current jobs (or having to endure worsened employment conditions), refuse to abide by the wage terms dictated by the legislature?

Would you think it an acceptable price to pay for your minimum-wage policy that armed police officers confine in cages low-skilled workers whose only offense is their persistence at taking jobs at wages below those dictated by the government?

If a minimum-wage policy is both economically justified and morally acceptable, you should have no problem with this manner of enforcement.

(You might still prefer, for obvious aesthetic reasons, enforcement leveled mainly at employers. But if the policy is to unleash government force to raise wages above those that would otherwise be agreed to voluntarily on the market between employers and workers, then you should agree that, if for some reason enforcement aimed at employers were impossible or too costly, enforcement aimed at workers is morally and economically acceptable.)

If, however, you do have a problem with minimum-wage regulations being enforced by targeting workers who violate the legislature’s dictated wage terms, then you might wish to think a bit more realistically and deeply about just what it is you advocate in the name of economic improvement or “social justice.”

This post first appeared at Cafe Hayek, where Don Boudreaux blogs with Russ Roberts.

Donald Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

Microaggressions and Microwonders: Are mountains out of molehills proof the world’s getting better? by Steven Horwitz

A recurring theme of recent human history is that the less of something bad we see in the world around us, the more outrage we generate about the remaining bits.

For example, in the 19th century, outrage about child labor grew as the frequency of child labor was shrinking. Economic forces, not legislation, had raised adult wages to a level at which more and more families did not need additional income from children to survive, and children gradually withdrew from the labor force. As more families enjoyed having their children at home or in school longer, they became less tolerant of those families whose situations did not allow them that luxury, and the result was the various moral crusades, and then laws, against child labor.

We have seen the same process at work with cigarette smoking in the United States. As smoking has declined over the last generation or two, we have become ever less tolerant of those who continue to smoke. Today, that outrage continues in the form of new laws against vaping and e-cigarettes.

The ongoing debate over “rape culture” is another manifestation of this phenomenon. During the time that reasonably reliable statistics on rape in the United States have been collected, rape has never been less frequent than it is now, and it is certainly not as institutionalized as a practice in the Western world as it was in the past. Yet despite this decline — or in fact because of it — our outrage at the rape that remains has never been higher.

The talk of the problem of “microaggressions” seems to follow this same pattern. The term refers to the variety of verbal and nonverbal forms of communication that are said to constitute disrespect for particular groups, especially those who have been historically marginalized. So, for example, the use of exclusively masculine pronouns might be construed as a “microaggression” against women, or saying “ladies and gentlemen” might be seen as a microaggression against transsexuals. The way men take up more physical space on a train or bus, or the use of the phrase “walk-only zones” (which might offend the wheelchair-bound) to describe pedestrian crossways, are other examples.

Those who see themselves as the targets of microaggressions have often become very effective entrepreneurs of outrage in trying to parlay these perceived slights into indications of much more pervasive problems of sexism or racism and the like. Though each microaggression individually might not seem like much, they add up. So goes the argument.

I don’t want to totally dismiss the underlying point here, as it is certainly true that people say and do things (often unintentionally) that others will find demeaning, but I do want to note how this cultural phenomenon fits the pattern identified above. We live in a society in which the races and genders (and classes!) have never been more equal. Really profound racism and sexism is far less prominent today than it was 50 or 100 years ago. In a country where the president is a man of color and where one of our richest entertainers is a woman of color, it’s hard to argue that there hasn’t been significant progress.

But it is exactly that progress that leads to the outrage over microaggressions. Having steadily pushed back the more overt and damaging forms of inequality, and having stigmatized them as morally offensive, we have less tolerance for the smaller bits that remain. As a result, we take small behaviors that are often completely unintended as offenses and attempt to magnify them into the moral equivalent of past racism or sexism. Even the co-opting of the word “aggression” to describe what is, in almost all cases, behavior that is completely lacking in actual aggression is an attempt to magnify the moral significance of those behaviors.

Even if we admit that some of such behaviors may well reflect various forms of animus, there are two problems with the focus on microaggressions.

First, where do we draw the line? Once these sorts of behaviors are seen as slights with the moral weight of racism or sexism, we can expect to see anyone and everyone who feels slighted about anything someone else said or did declare it a “microaggression” and thereby try to capture the same moral high ground.

We are seeing this already, especially on college campuses, where even the mere discussion of controversial ideas that might make some groups uncomfortable is being declared to be a microaggression. In some cases this situation is leading faculty to stop teaching anything beyond the bland.

Second, moral equivalence arguments can easily backfire. For example, if we treat pornography as the equivalent of rape, as some feminists tried to do in the 1980s, hoping to make porn look worse, we might instead cause people to treat real physical rape less seriously, given that they consider porn largely harmless.

So it goes with microaggressions: if we try to raise men taking up too much room on a bus seat into a serious example of sexism, then we risk people reacting by saying, “Well, if that’s what sexism is, then why should I really worry too much about sexism?” The danger is that when far more troubling examples of sexism or racism appear (for example, the incarceration rates of African-American men), we might be inclined to treat them less seriously.

It is tempting to want to flip the script on the entrepreneurs of microaggression outrages and start to celebrate their outrages as evidence of how far we’ve come. If men who take the middle armrest on airplanes (as obnoxious as that might be) are a major example of gender inequality, we have come far indeed. But as real examples of sexism and racism and the like do still exist, I’d prefer another strategy to respond to the talk of microaggressions.

Let’s spend more time celebrating the “microwonders” of the modern world. Just as microaggression talk magnifies the small pockets of inequality left and seems to forget the larger story of social progress, so does our focus on large social and economic problems in general cause us to forget the larger story of progress that is often manifested in tiny ways.

We live in the future that prior generations only imagined. We have the libraries of the world in our pockets. We have ways of easily connecting with friends and strangers across the world. We can have goods and even services of higher quality and lower cost, often tailored to our particular desires, delivered to our door with a few clicks of a button. We have medical advances that make our lives better in all kinds of small ways. We have access to a variety of food year-round that no king in history had. The Internet brings us happiness every day through the ability to watch numerous moments of humor, human triumph, and joy.

Even as we recognize that the focus on microaggressions means we have not yet eliminated every last trace of inequality, we should also recognize that it means we’ve come very far. And we should not hesitate to celebrate the microwonders of progress that often get overlooked in our laudable desire to continue to repair an imperfect world.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

Capitalism Defused the Population Bomb by Chelsea German

Journalists know that alarmism attracts readers. An article in the British newspaper the Independent titled “Have we reached ‘peak food’? Shortages loom as global production rates slow” claimed humanity will soon face mass starvation.

Just as Paul Ehrlich’s 1968 bestseller The Population Bomb predicted that millions would die of food shortages in the 1970s and 1980s, the 2015 article tries to capture readers’ interest through unfounded fear. Let’s take a look at the actual state of global food production.

The alarmists cite statistics showing that while we continue to produce more and more food every year, the rate of growth is slowing slightly. The article then presumes that if food production grows more slowly, widespread starvation is inevitable.

This is misleading. Let us take a look at the global trend in net food production, per person, measured in 2004-2006 international dollars. Here you can see that even taking population growth into account, food production per person is actually increasing:
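The distinction between slowing growth and falling output is easy to check with a back-of-the-envelope calculation. The growth rates below are purely illustrative (not the FAO figures behind the chart), but they show how per-person food production keeps rising even as the growth rate decelerates, so long as food output grows faster than population:

```python
# Illustrative numbers only: food output grows at a decelerating rate
# while population grows at a steady 1% per year.
food, population = 100.0, 100.0
food_growth_rates = [0.030, 0.025, 0.020, 0.015]  # decelerating growth

for g in food_growth_rates:
    food *= 1 + g
    population *= 1.01
    print(f"food growth {g:.1%}: per-person output = {food / population:.4f}")
```

Every year in this sketch, the growth rate falls, yet per-person output rises, because 1.5 percent is still more than 1 percent. Only if food growth dropped below population growth would per-capita production decline.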

Food is becoming cheaper, too. As K.O. Fuglie and S. L. Wang showed in their 2012 article “New Evidence Points to Robust but Uneven Productivity Growth in Global Agriculture,” food prices have been declining for over a century, in spite of a recent uptick:

In fact, people are better nourished today than they ever have been, even in poor countries. Consider how caloric consumption in India increased despite population growth:

Given that food is more plentiful than ever, what perpetuates the mistaken idea that mass hunger is looming? The failure to realize that human innovation, through advancing technology and the free market, will continue to rise to meet the challenges of growing food demand.

In the words of HumanProgress.org Advisory Board member Matt Ridley, “If 6.7 billion people continue to keep specializing and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us.”

This idea first appeared at Cato.org.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings and real benefits to policyholders. Such plans would encourage thrift and consumer wisdom in health care planning while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade associated health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Reich Is Wrong on the Minimum Wage by DONALD BOUDREAUX

Watching Robert Reich’s new video in which he endorses raising the minimum wage by $7.75 per hour – to $15 per hour – is painful. It hurts to encounter such rapid-fire economic ignorance, even if the barrage lasts for only two minutes.

Perhaps the most remarkable flaw in this video is Reich’s manner of addressing the bedrock economic objection to the minimum wage – namely, that the minimum wage prices some low-skilled workers out of jobs.

Ignoring supply-and-demand analysis (which depicts the correct common-sense understanding that the higher the minimum wage, the lower is the quantity of unskilled workers that firms can profitably employ), Reich asserts that a higher minimum wage enables workers to spend more money on consumer goods which, in turn, prompts employers to hire more workers.

Reich apparently believes that his ability to describe and draw such a “virtuous circle” of increased spending and hiring is reason enough to dismiss the concerns of “scare-mongers” (his term) who worry that raising the price of unskilled labor makes such labor less attractive to employers.

Ignore (as Reich does) that any additional amounts paid in total to workers mean lower profits for firms or higher prices paid by consumers – and, thus, less spending elsewhere in the economy by people other than the higher-paid workers.

Ignore (as Reich does) the extraordinarily low probability that workers who are paid a higher minimum wage will spend all of their additional earnings on goods and services produced by minimum-wage workers.

Ignore (as Reich does) the impossibility of making people richer simply by having them circulate amongst themselves a larger quantity of money.

(If Reich is correct that raising the minimum wage by $7.75 per hour will do nothing but enrich all low-wage workers to the tune of $7.75 per hour because workers will spend all of their additional earnings in ways that make it profitable for their employers to pay them an additional $7.75 per hour, then it can legitimately be asked: Why not raise the minimum wage to $150 per hour? If higher minimum wages are fully returned to employers in the form of higher spending by workers as Reich theorizes, then there is no obvious limit to the amount by which government can hike the minimum wage before risking an increase in unemployment.)

Focus instead on Reich’s apparent complete ignorance of the important concept of the elasticity of demand for labor. This concept refers to the responsiveness of employers to changes in wage rates. It’s true that if employers’ demand for unskilled workers is “inelastic,” then a higher minimum wage would indeed put more money into the pockets of unskilled workers as a group: the increased pay of workers who keep their jobs more than offsets the lower pay of workers who lose their jobs. Workers as a group could then spend more in total.

But if employers’ demand for unskilled workers is “elastic,” then raising the minimum wage reduces, rather than increases, the amount of money in the pockets of unskilled workers as a group. When the demand for labor is elastic, the higher pay of those workers fortunate enough to keep their jobs is more than offset by the lower pay of workers who lose their jobs. So total spending by minimum-wage workers would likely fall, not rise.

By completely ignoring elasticity, Reich assumes his conclusion. That is, he simply assumes that raising the minimum wage raises the total pay of unskilled workers (and, thereby, raises the total spending of such workers).

Yet whether or not raising the minimum wage has this effect is among the core issues in the debate over the merits of minimum-wage legislation. Even if (contrary to fact) increased spending by unskilled workers were sufficient to bootstrap up the employment of such workers, raising the minimum wage might well reduce the total amount of money paid to unskilled workers and, thus, lower their spending.

So is employers’ demand for unskilled workers more likely to be elastic or inelastic? The answer depends on how much the minimum wage is raised. If it were raised by, say, only five percent, demand might be inelastic, causing relatively few workers to lose their jobs and, thus, the total take-home pay of unskilled workers as a group to rise.

But Reich calls for an increase in the minimum wage of 107 percent! It’s impossible to believe that more than doubling the minimum wage would not cause a huge negative response by employers.

Such an assumption – if it described reality – would mean that unskilled workers are today so underpaid (relative to their productivity) that their employers are reaping gigantic windfall profits off of such workers.

But the fact that we see increasing automation of low-skilled tasks, as well as continuing high rates of unemployment of teenagers and other unskilled workers, is solid evidence that the typical low-wage worker is not such a bountiful source of profit for his or her employer.

Reich’s video is infected, from start to finish, with too many other errors to count. I hope that other sensible people will take the time to expose them all.

Donald Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

EDITOR’S NOTE: Here’s how Reich cherry-picked his data to claim that the minimum wage is “historically low” right now; here’s why Reich is wrong about wages “decoupling” from productivity; here’s why Reich is wrong about welfare “subsidizing” low-wage employers; here’s why Reich is wrong that Walmart raising wages proves that the minimum wage “works”; Reich is wrong (again) about who makes minimum wage; and here’s a collection of recent news about the damage minimum wage hikes have caused.

This post first appeared at Cato.org, while Cafe Hayek was down for repairs. 

Are Markets Ruining Video Games? Or is intellectual property the real culprit? by MATTHEW MCCAFFREY

Capitalism is ruining video games. So says game producer Lorne Lanning, creator of the Oddworld series, who recently sparked controversy by blasting economic developments in the gaming industry.

Lanning blames “capitalism” for gaming’s recent financial and artistic troubles, especially its emphasis on commercial success over artistic creativity. His basic claim is the same one levied against the film industry: major studios have been squeezing out their smaller competitors, taking advantage of market dominance to produce an endless stream of big-budget, artistically uninspiring sequels and spin-off franchises.

It’s unclear what Lanning (or anyone, really) means by capitalism, but he seems to be condemning the largely corporate world of game design and marketing. For instance, he mentions bureaucratic corporate structure, the quest for constant growth, and the need to appeal to mass markets as problems undermining the industry.

Several criticisms have been raised against Lanning’s claims.

First, he mainly seems upset about declining demand for the kinds of games he likes, so his arguments may be little more than sour grapes.

Second, markets produce to satisfy consumer wants, so if the artistic quality of games is low, isn’t that the fault of consumers’ tastes, rather than the market itself?

Third, without markets, there wouldn’t be a gaming industry. Markets increase productivity and make leisure possible, which in turn allows for the production of leisure goods like video games.

While there’s some truth to these criticisms, it’s important not to dismiss Lanning’s views as run-of-the-mill anti-market bias. In particular, we shouldn’t assume the game industry is a poster child for consumer sovereignty and healthy economic competition. In fact, what Lanning objects to sounds more like corporatism in the game industry than unregulated commerce; if so, it’s misguided to respond by defending game developers as heroic entrepreneurs or appealing to the wonders of the free market.

Lanning’s complaints may be justified, though he has misdiagnosed their cause: it’s actually regulation and a lack of markets that are hurting the game industry.

As it happens, major game studios have developed in ways we expect from firms artificially protected from competition: they’ve become less innovative, more risk-averse, and more focused on short-term gains. As Lanning puts it: “In the gaming world, it’s not personalities and it’s not companies. It’s capitalism. So you get that [large] scale and now it gets more ruthless. These are public companies. This is Wall Street.”

The analogy to Wall Street is telling, because the finance industry is at the heart of our heavily regulated and monopoly-privileged economy, and is probably the best example of what happens when government helps to eliminate market competition.

But what kind of intervention could be hampering competition in the gaming world?

One culprit is intellectual property (IP) law, which produces exactly the kind of problems Lanning is complaining about. Major studios spend a lot of money developing their IP, which they guard jealously. A case in point: Nintendo takes 40 percent of the ad revenue from YouTube videos featuring its games, a tactic that drives some video creators away from covering them.

Without noticing the irony, Lanning mentions several times the importance of retaining and nursing his own IP, all while protesting the sad state of small and medium-sized developers.

He may even have fallen prey to the anti-innovation incentive provided by IP, given that his recent projects have involved re-releasing classic titles rather than working on more ambitious (and uncertain!) projects. While IP law tends to favor the largest competitors, smaller firms can rely on it as well.

Ultimately, if developers want to pursue more artistic projects that appeal to smaller audiences, they need to take a step away from the one-size-fits-all corporate development supported by government regulation and toward genuine entrepreneurship and innovation.

If Lanning thought more about free exchange, he’d realize that markets produce exactly the kind of high-quality product he wants:

As craftsmen, our opportunity lies in finding the niches where we know our audience, we focus on it, we listen to it, we respect it, we treat it with some grace and … if you can keep mobilizing that audience, keep informing that audience, then how much is that worth?

It’s worth a lot. And it’s markets that cater most effectively to diverse needs and niches, and entrepreneurs who nurture value for consumers. Their success depends on it. We’re all better off when we turn our controllers over to the invisible hand.


Matthew McCaffrey

Matthew McCaffrey is assistant professor of enterprise at the University of Manchester and editor of Libertarian Papers.

Is the “Austrian School” a Lie?

Is Austrian economics an American invention? by STEVEN HORWITZ and B.K. MARCUS.

Do those of us who use the word Austrian in its modern libertarian context misrepresent an intellectual tradition?

We trace our roots back through the 20th century’s F.A. Hayek and Ludwig von Mises (both served as advisors to FEE) to Carl Menger in late 19th-century Vienna, and even further back to such “proto-Austrians” as Frédéric Bastiat and Jean-Baptiste Say in the earlier 19th century and Richard Cantillon in the 18th. Sometimes we trace our heritage all the way back to the late-Scholastic School of Salamanca.

Nonsense, says Janek Wasserman in his article “Austrian Economics: Made in the USA”:

“Austrian Economics, as it is commonly understood today,” Wasserman claims, “was born seventy years ago this month.”

As his title implies, Wasserman is not talking about the publication of Principles of Economics by Carl Menger, the founder of the Austrian school. That occurred 144 years ago in Vienna. What happened 70 years ago in the United States was the publication of F.A. Hayek’s The Road to Serfdom.

What about everything that took place — most of it in Austria — in the 74 years before Hayek’s most famous book? According to Wasserman, the Austrian period of “Austrian Economics” produced a “robust intellectual heritage,” but the largely American period that followed was merely a “dogmatic political program,” one that “does a disservice to the eclectic intellectual history” of the true Austrian school.

Where modern Austrianism is “associated with laissez-faire economics and libertarianism,” the real representatives of the more politically diverse tradition — economists from the University of Vienna, such as Fritz Machlup, Joseph Schumpeter, and Oskar Morgenstern — were embarrassed by their association with Hayek’s bestseller and its capitalistic supporters.

These “native-born Austrians ceased to be ‘Austrian,'” writes Wasserman, “when Mises and a simplified Hayek captured the imagination of a small group of businessmen and radicals in the US.”

Wasserman describes the popular reception of The Road to Serfdom as “the birth of a movement — and the reduction of a tradition.”

Are we guilty of Wasserman’s charges? Do modern Austrians misunderstand our own tradition, or worse yet, misrepresent our history?

In fact, Wasserman himself is guilty of a profound misunderstanding of the Austrian label, as well as the tradition it refers to.

The “Austrian school” is not a name our school of thought took for itself. Rather it was an insult hurled against Carl Menger and his followers by the adherents of the dominant German Historical School.

The Methodenstreit was a more-than-decade-long debate in the late 19th century among German-speaking social scientists about the status of economic laws. The Germans advocated methodological collectivism, espoused the efficacy of government intervention to improve the economy, and, according to Jörg Guido Hülsmann, “rejected economic ‘theory’ altogether.”

The Mengerians, in contrast, argued for methodological individualism and the scientific validity of economic law. The collectivist Germans labeled their opponents the “Austrian school” as a put-down. It was like calling Menger and company the “backwater school” of economic thought.

“Austrian,” in our context, is a reclaimed word.

But more important, modern Austrian economics is not the dogmatic ideology that Wasserman describes. In his blog post, he provides no actual information about the work being done by the dozens of active Austrian economists in academia, with tenured positions at colleges and universities whose names are recognizable.

He tells his readers nothing about the books they have produced that have been published by top university presses. He does not mention that they have published in top peer-reviewed journals in the economics discipline, as well as in philosophy and political science, or that the Society for the Development of Austrian Economics consistently packs meeting rooms at the Southern Economic Association meetings.

Have all of these university presses, top journals, and long-standing professional societies, not to mention tenure committees at dozens of universities, simply lost their collective minds and allowed themselves to be snookered by an ideological sleeper cell?

Or perhaps in his zeal to score ideological points of his own, Wasserman chose to take his understanding of Austrian economics from those who consume it on the Internet and elsewhere rather than doing the hard work of finding out what professional economists associated with the school are producing. Full of confirmation bias, he found what he “knew” was out there, and he ends up offering a caricature of the robust intellectual movement that is the contemporary version of the school.

The modern Austrian school, which has now returned to the Continent and spread across the globe after decades in America, is not the dogmatic monolith Wasserman contends. The school is alive with both internal debates about its methodology and theoretical propositions and debates about its relationship to the rest of the economics discipline, not to mention the size of the state.

Modern Austrian economists are constantly finding new ideas to mix in with the work of Menger, Böhm-Bawerk, Mises, and Hayek. The most interesting work done by Austrians right now is bringing in insights from Nobelists like James Buchanan, Elinor Ostrom, and Vernon Smith, and letting those marinate with their long-standing intellectual tradition. That is hardly the behavior of a “dogmatic political program,” but is rather a sign of precisely the robust intellectual tradition that has been at the core of Austrian economics from Menger onward.

That said, Wasserman is right to suggest that economic science is not the same thing as political philosophy — and it’s true that many self-described Austrians aren’t always careful to communicate the distinction. Again, Wasserman could have seen this point made by more thoughtful Austrians if he had gone to a basic academic source like the Concise Encyclopedia of Economics and read the entry on the Austrian school of economics.

Even a little bit of actual research motivated by actual curiosity about what contemporary professional economists working in the Austrian tradition are doing would have given Wasserman a very different picture of modern Austrian economics. That more accurate picture is one very much consistent with our Viennese predecessors.

To suggest that we do a disservice to our tradition — or worse, that we have appropriated a history that doesn’t belong to us — is to malign not just modern Austrians but also the Austrian-born antecedents within our tradition.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

Are Markets Myopic? The illusion of government looking out for the long term by ROBERT P. MURPHY

We often hear that individual investors are myopic. They make decisions based on a relatively short time horizon, and so they neglect the long run. That’s why we need government officials to step in with regulations, as well as corrective taxes and subsidies, to guide the market toward long-term social goals. Or so the story goes.

Though this view of markets versus government is common, it has things exactly backwards: markets do contain sophisticated mechanisms for rewarding long-term planning, and democratic political institutions encourage extremely short-term thinking.

The fundamental institution for promoting proper planning is private property. The owner of a piece of property has an incentive to take actions that enhance its market value. For example, consider the owner of a giant tin deposit who must decide how rapidly to extract the resource.

Those who are naïve about the operations of a market economy might suppose that the greedy capitalist owner would “strip mine” the deposit as quickly as possible, channeling all of the accessible tin into projects serving the current generation while ignoring the needs of future generations. A moment’s reflection shows this is nonsense.

The greedy capitalist owner is at least vaguely familiar with the notion that tin deposits — unlike apples and wheat — do not naturally replenish themselves year after year. An extra pound of tin extracted and sold this year means exactly one fewer pound of tin that this deposit can yield in some future year. Once we realize that the greedy capitalist doesn’t want to maximize revenue but instead wants to maximize market value, it is obvious that he must take the future into account when making current decisions.

Specifically, to maximize the market value of his asset, the owner should extract additional pounds of tin in the present (putting the proceeds in a financial investment earning the market rate of interest), until the point at which he would earn a greater return by leaving the next pound of tin in the deposit, to be sold next year at the expected market price. For example, if tin is selling today at $8 per pound, and the interest rate on financial assets is 10 percent, then the owner would halt his operations if he ever came to confidently expect the price of tin next year to be $8.80 or higher. (I’m assuming the marginal costs of extraction and selling are the same, year to year, just to keep things simple. See this article for a more comprehensive explanation using oil.) Once he reaches this point, the best “investment” of his additional units of tin would be to leave them in the mine, “ripening” for another year.
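The break-even logic above can be sketched in a few lines of code. The $8 price, the 10 percent interest rate, and the $8.80 threshold are the essay’s own figures; the function name and the assumption of constant marginal costs (stated in the essay) are illustrative:

```python
def extract_another_pound(price_today, expected_price_next_year, interest_rate):
    """Decide whether extracting one more pound of tin today beats leaving it in the ground.

    Selling today yields price_today, which grows to price_today * (1 + interest_rate)
    in a financial asset by next year. Leaving the tin in the deposit yields
    expected_price_next_year instead. Marginal extraction costs are assumed
    constant year to year, as in the essay.
    """
    return price_today * (1 + interest_rate) > expected_price_next_year

# The essay's numbers: tin at $8/lb today, 10 percent interest,
# so the break-even expected price is $8 * 1.10 = $8.80.
print(extract_another_pound(8.00, 8.50, 0.10))  # True: $8.80 from selling now beats $8.50
print(extract_another_pound(8.00, 9.00, 0.10))  # False: the tin "ripens" better in the mine
```

The owner keeps extracting while the first call pattern holds and halts once the second does, which is exactly the stopping rule described above.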

Thus we see that a greedy capitalist would implicitly (and unwittingly) take into account the desires of consumers next year when making current production decisions. He would be guided not by altruistic concern, but instead by personal enrichment. We see the familiar pattern of market prices guiding even selfish individuals into promoting the general welfare. If for some reason tin were expected to be scarcer in the future, then its expected spot price in the future would be higher. This would lead owners to hold tin off the market in the present, thus driving up its price even today, in anticipation of the expected future price. Modern financial and commodities markets — with futures and forward contracts, as well as more exotic derivatives — refine things even more, drawing on the dispersed knowledge and different risk appetites of millions of people.

The critics of capitalism would probably complain again at this point, bemoaning the fact that the greedy owner was now “undersupplying tin” and gouging today’s consumers with artificially higher prices. But if so, the critics need to make up their minds: do we want the tin going to the present or to the future? There’s a finite amount of it to go around — that’s the whole (alleged) problem.

Notice that even if a particular owner of a tin deposit is diagnosed with terminal cancer, he still has an incentive to behave in this “efficient” manner. The reason is that he can sell the tin deposit outright. The market value of the entire deposit will reflect the (present discounted) future flow of net income derived from owning the deposit and operating it in the optimal manner indefinitely. If the owner ever thinks, Well, if I had 10 years left, I would run the operation in such-and-such a way, then that decision won’t change just because he only has one year left. Instead, he can sell the operation to the highest bidder, including people who do have 10 or more years left of expected life.

Thus, we see that contrary to the critics, a pure market economy contains sophisticated mechanisms to guide owners into acting as farsighted stewards of depletable natural resources. In complete contrast, political officials who control natural resources face no such incentives. Because they can’t personally pocket the revenues or bequeath the asset to their heirs, political officials have the incentive to maximize the current income from the natural resources under their temporary control, to the extent that they are guided by pecuniary motives.

Even here, it’s usually not the case that the government sells access to a resource in order to maximize current receipts. Rather, what often happens is that the government officials will give sweetheart deals to private interests (such as a logging company operating in a state-owned forest), allowing these officials to develop a business relationship that will benefit them after leaving government.

Private owners in a free-market economy have the incentive to maximize the long-term value of their property, which implicitly leads them to consider the desires of future generations. Democratically elected government officials, on the other hand, act as temporary custodians who will not personally benefit from maintaining the market value of the assets they control.

Rental car companies would be foolish to suppose that their customers will put the more expensive high-octane gas into their vehicles, even though the customers might do so if they personally owned the rental car. Yet, for some reason, millions of voters think that politicians with two-year terms will be more farsighted when it comes to economic resources than private shareholders will be.

ABOUT ROBERT P. MURPHY

Robert P. Murphy has a PhD in economics from NYU. He is the author of The Politically Incorrect Guide to Capitalism and The Politically Incorrect Guide to The Great Depression and the New Deal. He is also the Senior Economist with the Institute for Energy Research and a Research Fellow at the Independent Institute. You can find him at http://consultingbyrpm.com/.