Tag Archive for: free markets

Should We Fear the Era of Driverless Cars or Embrace the Coming Age of Autopilot? by Will Tippens

Driving kills more than 30,000 Americans every year. Wrecks cause billions of dollars in damages. The average commuter spends nearly 40 hours a year stuck in traffic and, over a lifetime, almost five years behind the wheel.

But there is light at the end of the traffic-jammed tunnel: the driverless car. Thanks to millions of dollars in driverless technology investment by tech giants like Google and Tesla, the era of road rage, drunk driving, and wasted hours behind the wheel could be left in a cloud of dust within the next two decades.

Despite the immense potential of self-driving vehicles, commentators are already dourly warning that such automation will produce undesirable effects. As political blogger Scott Santens warns,

Driverless vehicles are coming, and they are coming fast…. As close as 2025 — that is in a mere 10 years — our advancing state of technology will begin disrupting our economy in ways we can’t even yet imagine. Human labor is increasingly unnecessary and even economically unviable compared to machine labor.

The problem, Santens says, is that there are “over 10 million American workers and their families whose incomes depend entirely or at least partially on the incomes of truck drivers.” These professional drivers will face unemployment within the next two decades due to self-driving vehicles.

Does this argument sound familiar?

These same objections have sprung up at every major stage of technological innovation since the Industrial Revolution, from the textile-working Luddites destroying power looms in the 1810s to taxi drivers smashing Uber cars in 2015.

Many assume that any initial job loss accompanying new technology harms the economy and further impoverishes the most vulnerable, whether fast food workers or truck drivers. It’s true that losing a job can be an individual hardship, but are these same pundits ready to denounce the creation of the light bulb as an economic scourge because it put the candle makers out of business?

Just as blacksmithing dwindled with the decline of the horse-drawn buggy, economic demand for certain jobs waxes and wanes. Jobs arise and continue to exist for the sole reason of satisfying consumer demands, and the consumer’s demands are continuously evolving. Once gas heating devices became available, most people decided that indoor fires were dirtier, costlier, and less effective at heating and cooking, so they switched. While the change temporarily disadvantaged those in the chimney-sweeping business, the added value of the gas stove vastly improved the quality of life for everyone, chimney sweeps included.

There were no auto mechanics before the automobile and no web designers before the Internet. It is impossible to predict all the new employment opportunities a technology will create beforehand. Countless jobs exist today that were unthinkable in 1995 — and 20 years from now, people will be employed in ways we cannot yet begin to imagine, with the driverless car as a key catalyst.

The historical perspective doesn’t assuage the naysayers. If some jobs can go extinct, couldn’t all jobs go extinct?

Yes, every job we now know could someday disappear — but so what? Specific jobs may come and go, but that doesn’t mean we will ever see a day when labor is no longer demanded.

Economist David Ricardo demonstrated in 1817 that each person has a comparative advantage due to differing opportunity costs. Each person is useful: no matter how unskilled he or she may be, there is always something that person can produce at a lower relative cost than others. When this diversity of ability and interest is coupled with the infinite creativity of freely acting individuals, new opportunities will always arise, no matter how far technology advances.
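
To make the logic concrete, here is a minimal sketch with invented numbers (the workers, goods, and daily output rates are hypothetical, chosen purely for illustration): even when one person is absolutely better at every task, differing opportunity costs still leave each person with a comparative advantage in something.

```python
# Hypothetical daily output for two workers (numbers invented for illustration).
# Ann out-produces Bob at both goods, yet each still has a comparative advantage.
output = {
    "Ann": {"bread": 10, "cloth": 20},
    "Bob": {"bread": 4, "cloth": 2},
}

for worker, rates in output.items():
    # Opportunity cost of one loaf of bread: the cloth forgone to bake it.
    cost = rates["cloth"] / rates["bread"]
    print(f"{worker}: one bread costs {cost:.2f} cloth")

# Ann: one bread costs 2.00 cloth
# Bob: one bread costs 0.50 cloth
# Bread is cheaper, in forgone cloth, when Bob bakes it, so both gain if Bob
# specializes in bread and Ann in cloth -- despite Ann's absolute advantage.
```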

Neither jobs nor labor are ends in themselves — they are mere means to the goal of wealth production. This does not mean that every person is concerned only with getting rich, but as Henry Hazlitt wrote in Economics in One Lesson, real wealth consists in what is produced and consumed: the food we eat, the clothes we wear, the houses we live in. It is railways and roads and motor cars; ships and planes and factories; schools and churches and theaters; pianos, paintings and books.

In other words, wealth is the ability to fulfill subjective human desires, whether that means having fresh fruit at your local grocery or being able to easily get from point A to point B. Labor is simply a means to these ends. Technology, in turn, allows labor to become far more efficient, resulting in more wealth diffused throughout society.

Everyone knows that using a bulldozer to dig a ditch in an hour is preferable to having a whole team of workers spend all day digging it by hand. The “surplus” workers are now available to do something else in which they can produce more highly valued goods and services.  Over time, in an increasingly specialized economy, productivity rises and individuals are able to better serve one another through mutually beneficial exchanges in the market. This ongoing process of capital accumulation is the key to all meaningful prosperity and the reason all of humanity has seen an unprecedented rise in wealth, living standards, leisure, and health in the past two centuries.

The future course of technology is always uncertain, and dire predictions have a poor track record. Aldous Huxley warned in 1927 that jukeboxes would put live artists out of business. In the 1960s, Time magazine predicted the computer would wreak economic chaos.

Today, on the cusp of one of the biggest innovations since the Internet, there is, predictably, similar opposition. But those who wring their hands at the prospect of the driverless car fail to see that its greatest potential lies not in reducing pollution and road deaths, nor in lowering fuel costs and insurance rates, but rather in its ability to liberate billions of hours of human potential that truckers, taxi drivers, and commuters now devote to focusing on the road.

No one can know exactly what the future will look like, but we know where we have been, and we know the principles of human flourishing that have guided us here.

If society is a car, trade is the engine — and technology is the gas. It drives itself. Enjoy the ride.

Will Tippens

Will Tippens is a recent law school graduate living in Memphis.


Capitalist Theory Is Better Than Socialist Reality by Sandy Ikeda

Tell someone on the left that crony capitalism is not the same as the free market and they’ll often respond that capitalism as it really exists is crony capitalism. They will say that there has never been an instance of capitalism in which government-sponsored or government-abetted cronyism didn’t play a substantial role — either through war, taxation, or slavery — in a market economy. As a result, the failings of crony capitalism — corruption, privilege, oppression, business cycles — are simply the failings of capitalism itself.

One correct response is to show that the less intervention there has been, the less corrupt, privileged, oppressive, and unstable the socioeconomic order also has been. Many would simply reiterate that, historically, laissez-faire capitalism has never existed, nor could it exist, without interventionism. They simply will not or cannot distinguish the free market from state capitalism, corporate capitalism, or other forms of the mixed economy.

Which is perhaps why some on the left have adopted the term “neoliberalism,” a perfectly good word that has come to represent an imbroglio of vaguely market-cum-corporativist views. They can’t imagine how markets could work without some form of state intervention holding it all together. And that’s probably because they reject what economist Peter Boettke calls “mainline economics,” or economics in the tradition of Adam Smith, Frédéric Bastiat, and Carl Menger, among others.

It’s frustrating, but there are two points I’d like to make. The first is that in our libertarian critiques of collectivism, we often make an argument that sounds similar to the one people on the left make. But, second, if libertarians are careful, they may be more justified in doing so.

What Is the Turnabout?

Most socialists today have abandoned their earlier claim that socialism generates greater material prosperity, but many on the left still insist that under a pure collectivist system, greater justice and equality would prevail. Socialism, in other words, is a far more humane socioeconomic order than capitalism.

How do libertarians respond to such a claim?

Sometimes we react with contempt or with disbelief that anyone could be so stupid or so evil or both as to argue such a thing. I hope no reader of the Freeman would react that way, although I’m afraid some do. Sometimes we react with slightly more civility by aiming our dismissive contempt not at the person but at the leftist ideas she holds. I will only say that we should take to heart what John Stuart Mill wrote in On Liberty about so-called bad ideas and opinions:

Every opinion which embodies somewhat of the portion of truth which the common opinion omits, ought to be considered precious, with whatever amount of error and confusion that truth may be blended.

There are other responses to the claim that socialism is more just and humane than capitalism, but I would like to focus on the one that I’ve often used: socialism in practice has always and everywhere tended to lead, to the degree that it is consistently applied, not to freedom and material well-being, but to tyranny and want. In other words, while socialism in theory may be all good things to all good people, the more government has practiced collectivism and central planning to achieve its goals of justice and equality, the farther it has fallen short of those goals. (And if you think countries such as Sweden are the exception, you might read my March 2013 Freeman article, “The New Swedish Model.”)

How is that different from the left’s position that legal privilege, oppression, and other problems are part and parcel of capitalism in practice? Each side seems to be arguing that the historical failings we’ve witnessed in each system are necessary to that system and not exceptions — features, not bugs.

A Possible Resolution

Clearly, the die-hard socialist and the die-hard libertarian argue from different fundamental principles. While there are many varieties of socialism, all are, to a fairly high degree, suspicious of private property, prices, and profit as the central ordering forces of society. Libertarians, too, are diverse, but I believe we all take the opposite view: private property, prices, and profit are necessary (and for some libertarians, mistakenly I believe, sufficient) for a civil and prosperous society.

Socialists and indeed interventionists of all stripes also seem confident that the intentions of government authorities (especially those who have been elected) are virtuous enough, and their knowledge reliable and complete enough, to succeed in promoting the general welfare. Ultimately, I think, the disagreement boils down to the underlying economics.

As a rule, libertarians use mainline economic theory to reach their conclusions about socialism and the perverse dynamics of interventionism. (There are, of course, ethical and philosophical approaches, as well.) And while interventionists and perhaps even some collectivists may believe that mainline economic theory does an okay job of framing some questions and of finding some answers to those questions, they also believe that mainline economics is far too limited to address a significant proportion of economic issues.

But the problem with such a view is that there’s no principled way to say in what circumstances mainline economics has failed. Sure, no theory of the economic system, mainline or otherwise, gets it right in every instance. We then have to look to historical evidence to clarify when, under what circumstances, and to what extent mainline economics holds up. And the historical evidence is indeed on the side of the libertarian interpretation of what collectivism and various degrees of central planning are, and of what laissez-faire capitalism is.

Indeed, the historical evidence overwhelmingly shows that social mobility, innovation, prosperity, per capita income, and per capita wealth are all tightly and positively correlated with economic freedom. And contrariwise, to the extent that economic freedom is lacking, social and economic stagnation, want, and shrinking civil rights have followed. (See, for example, the most recent publication of FreetheWorld.com.)

Someone might retort that correlation is not causation, and they would be right if there weren’t a causal theory linking economic freedom with all those great things. But libertarians do have such a theory, and it’s called mainline economics.

Those on the left, however, don’t have a coherent theory of the mixed economy. Indeed, no such theory exists. There are several theories of so-called “market failure,” but they do not together constitute a coherent theory. What does exist is a critique of the mixed economy that is based on the realization that the ordering principle of the free market and the ordering principle of collectivist central planning are logically incompatible. One is based on open-ended entrepreneurial competition, the other on some form of constraining central planning. Interventionist approaches that attempt to combine them aren’t really systems at all. They are literally incoherent, and what makes them incoherent is the absence of a consistent ordering principle.

(My contribution to this volume [PDF] delves into this topic more deeply.)

Instead, what you’re left with, given the cognitive limits of the human mind and the spontaneous complexity of real-world systems, is expediency. Each problem is addressed not on the basis of principle, but in ad hoc fashion according to the prevailing interests of the moment. In the case of capitalism, while opportunism and cronyism do constantly pull in the direction of expediency, the force resisting that pull is entrepreneurial competition. That’s because cutting corners opens opportunities for one’s rivals to do a better job.  Moreover, that competition operates more effectively to resist and absorb all forms of intervention, crony or otherwise, the less interventionist the system is.

So while the form of the critiques of the left and of libertarians may sound similar, they are vastly different in substance.


Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.

Amazon Liberates Readers: The Digital Era Creates Gardens without Gardeners by Stewart Dompe

Science fiction author Ursula K. Le Guin thinks Amazon represents everything that’s wrong with capitalism:

If you want to sell cheap and fast, as Amazon does, you have to sell big. Books written to be best sellers can be written fast, sold cheap, dumped fast: the perfect commodity for growth capitalism.

The readability of many best sellers is much like the edibility of junk food. Agribusiness and the food packagers sell us sweetened fat to live on, so we come to think that’s what food is. Amazon uses the BS Machine to sell us sweetened fat to live on, so we begin to think that’s what literature is.

She blames the online retailer for perpetuating a system that encourages authors to produce “sweetened fat” instead of the literature that nourishes the soul. She attacks the marketing of best seller lists (“BS lists”), and it would not be a mistake to infer that she believes these lists are composed of an entirely different sort of “BS.” She writes:

Best Seller lists are generated by obscure processes, which I consider (perhaps wrongly) to consist largely of smoke, mirrors, hokum, and the profit motive. How truly the lists of Best Sellers reflect popularity is questionable.

If the literary world is a garden, then Amazon would be a gardener whose liberal use of fertilizer, Le Guin contends, has encouraged the growth of weeds. But her anger is misplaced. There is no gardener — and the garden is more beautiful than ever.

Spontaneous Order in the Book World

Amazon is a consequence, not the cause, of the digital revolution. More books are being published every year because it is now easier to become an author. Traditional publishers printed 316,480 new titles in 2010. That’s 100,000 more than they published in 2002, but this figure is dwarfed by the 2.7 million “nontraditional” titles that were published in 2010. The importance of publishing houses, bookstores, and critics has eroded because authors can now bypass these middlemen and sell ebooks directly to the public. All it takes is a website and some social-media savvy.

Some will argue that with this large increase in quantity, the weeds will start to outnumber the roses. The problem with this argument is that it misunderstands the market segmentation that is occurring. Simply put, what is a weed to one is a rose to another. Publishers need to sell a minimum number of books to recover the substantial fixed costs of printing. These financial pressures mean that even a well-written manuscript would be rejected if it were judged to appeal to too small an audience. As the cost of publishing has fallen, manuscripts that were previously rejected are now being published, and authors can now target smaller audiences. It is therefore unsurprising if readers find that most books conflict with their aesthetic preferences — they are not the intended audience.
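
The break-even arithmetic behind those rejections is simple; here is a minimal sketch with hypothetical figures (the costs and margins are invented for illustration) showing how the minimum viable audience collapses once fixed printing costs disappear.

```python
# Hypothetical figures for illustration: copies a title must sell to break even.
def breakeven_copies(fixed_cost: float, margin_per_copy: float) -> float:
    """Fixed cost divided by the margin earned on each copy sold."""
    return fixed_cost / margin_per_copy

# A traditional print run with heavy up-front costs vs. an ebook with almost none.
print(breakeven_copies(50_000, 5.0))  # 10000.0 copies: niche manuscripts get rejected
print(breakeven_copies(500, 5.0))     # 100.0 copies: niche manuscripts become viable
```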

Abraham Lincoln, Vampire Hunter will never sit on my parents’ nightstand. That is neither a tragedy nor unexpected, but to the people who love historical horror fiction, the world is a better place with that book in it. More writers can now pursue their dreams of becoming authors. The garden is growing larger and more diverse.

What Hath Marketing Wrought?

Le Guin is concerned about the influence of marketing in creating best seller lists. But even with budgets far larger than any book publisher’s, Hollywood seems incapable of guarding against $100 million bombs like Tomorrowland. Producers may broadly know what “the people” want, but that knowledge offers little guidance in ensuring a commercial success.

If you had told me a few years ago that one of the most popular book series in America, the Twilight saga, would be about a love triangle between a mopey teenage girl, a werewolf, and a centuries-old pedophile, I would have laughed in your face. Another best seller, Fifty Shades of Grey, started as Twilight fan fiction. In what smoke-filled room was it decided to sell erotica at Walmart?

Best sellers are an interesting phenomenon, because book consumption — once an intimate connection between reader and writer — has transformed into a widely shared social experience. These shared experiences create bonds between strangers. Art is a bridge that connects otherwise lonely islands of experience. When Mark Zuckerberg announced his book club, he was inviting countless strangers to join him in thinking and talking about the world.

Producing a best seller is harder than it looks. What sells or doesn’t sell — and what becomes the next breakout hit — is never the outcome of design. Writers and publishers experiment. Readers respond. Social media allows the cycle to accelerate, and sometimes the results can seem bewildering.

In this new era, more people are dedicating their lives to creating art. It is hard to find fault with either those pursuing their dreams or those paying them to do so. There are more books than we can read in a lifetime. If there is anything to regret, it is our pitifully short lives, not the literary bounty before us.

Le Guin is a brilliant novelist, but she fundamentally misunderstands the nature of the 21st-century market. The challenge now facing all readers is not to criticize the abundance of choices but to develop better filters for finding the literature that appeals to their interests. Luckily, Amazon has some recommendations you may be interested in viewing.

Stewart Dompe

Stewart Dompe is an instructor of economics at Johnson & Wales University. He has published articles in Econ Journal Watch and is a contributor to Homer Economicus: Using The Simpsons to Teach Economics.

Inequality: The Rhetoric and Reality by James A. Dorn

The publication of Thomas Piketty’s bestseller Capital in the Twenty-First Century has drawn widespread attention to the rising gap between rich and poor and has prompted populist calls for government to redistribute income and wealth.

Purveyors of that rhetoric, however, overlook the reality that when the state plays a major role in leveling differences in income and wealth, economic freedom is eroded. The problem is, economic freedom is the true engine of progress for all people.

Income and wealth are created in the process of discovering and expanding new markets. Innovation and entrepreneurship extend the range of choices open to people. And yet not everyone is equal in their contribution to this process. There are differences among people in their abilities, motivations, and entrepreneurial talent, not to mention their life circumstances.

Those differences are the basis of comparative advantage and the gains from voluntary exchanges on private free markets. Both rich and poor gain from free markets; trade is not a zero- or negative-sum game.

Attacking the rich, as if they are guilty of some crime, and calling for state action to bring about a “fairer” distribution of income and wealth leads to an ethos of envy — certainly not one that supports the foundations of abundance: private property, personal responsibility, and freedom.

In an open market system, people who create new products and services prosper, as do consumers. Entrepreneurs create wealth and choices. The role of the state should be to safeguard rights to property and let markets flourish. When state power trumps free markets, choices are narrowed and opportunities for wealth creation are lost.

Throughout history, governments have discriminated against the rich, ultimately harming the poor. Central planning should have taught us that replacing private entrepreneurs with government bureaucrats merely politicizes economic life and concentrates power; it does not widen choices or increase income mobility.

Peter Bauer, a pioneer in development economics, recognized early on that “in a modern open society, the accumulation of wealth, especially great wealth, normally results from activities which extend the choices of others.”

Government has the power to coerce, but private entrepreneurs must persuade consumers to buy their products and convince investors to support their vision. The process of “creative destruction,” as described by Joseph Schumpeter, means that dynastic wealth is often short-lived.

Bauer preferred to use the term “economic differences” rather than “economic inequality.” He did so because he thought the former would convey more meaning than the latter. The rhetoric of inequality fosters populism and even extremism in the quest for egalitarian outcomes. In contrast, speaking of differences recognizes reality and reminds us that “differences in readiness to utilize economic opportunities — willingness to innovate, to assume risk, to organize — are highly significant in explaining economic differences in open societies.”

What interested Bauer was how to increase the range of choices open to people, not how to use government to reduce differences in income and wealth. As Bauer reminded us,

Political power implies the ability of rulers forcibly to restrict the choices open to those they rule. Enforced reduction or removal of economic differences emerging from voluntary arrangements extends and intensifies the inequality of coercive power.

Equal freedom under a just rule of law and limited government doesn’t mean that everyone will be equal in their endowments, motivations, or aptitudes. Disallowing those differences, however, destroys the driving force behind wealth creation and poverty reduction. There is no better example than China.

Under Mao Zedong, private entrepreneurs were outlawed, as was private property, which is the foundation of free markets. Slogans such as “Strike hard against the slightest sign of private ownership” allowed little room for improving the plight of the poor. The establishment of communes during the “Great Leap Forward” (1958–1961) and the centralization of economic decision making led to the Great Famine, ended civil society, and imposed an iron fence around individualism while following a policy of forced egalitarianism.

In contrast, China’s paramount leader Deng Xiaoping allowed the resurgence of markets and opened China to the outside world. Now the largest trading nation in the world, China has demonstrated that economic liberalization is the surest way to broaden people’s choices, and it has allowed hundreds of millions of people to lift themselves out of poverty.

Deng’s slogan “To get rich is glorious” is in stark contrast to Mao’s leveling schemes. In 1978, and as recently as 2002, there were no Chinese billionaires; today there are 220. That change would not have been possible without the development of China as a trading nation.

There are now 536 billionaires in the United States and growing animosity against the “1 percent” — especially by those who were harmed by the Great Recession. Nevertheless, polls have shown that most Americans think economic growth is far more important than capping the incomes of the very rich or narrowing the income gap. Only 3 percent of those polled by CBS and the New York Times in January thought that economic inequality was the primary problem facing the nation. Most Americans are more concerned with income mobility — that is, moving up the income ladder — than with penalizing success.

Regardless, some politicians will use inflammatory rhetoric to make differences between rich and poor the focus of their campaigns in the presidential election season. In doing so, they should recognize the risks that government intervention in the creation and distribution of income and wealth poses for a free society and for all-around prosperity.

Government policies can widen the gap between rich and poor through corporate welfare, through unconventional monetary policy that penalizes savers while pumping up asset prices, and through minimum wage laws and other legislation that price low-skilled workers out of the market and thus impede income mobility.

A positive program designed to foster economic growth — and leave people free to choose — by lowering marginal tax rates on labor and capital, reducing costly regulations, slowing the growth of government, and normalizing monetary policy would be the best medicine to benefit both rich and poor.


James A. Dorn

James A. Dorn is vice president for monetary studies, editor of the Cato Journal, senior fellow, and director of Cato’s annual monetary conference.

AMC’s “Halt and Catch Fire” Is Capitalism’s Finest Hour by Keith Farrell

AMC’s Halt and Catch Fire is a brilliant achievement. The show is a vibrant look at the emerging personal computer industry in the early 1980s. But more than that, the show is about capitalism, creative destruction, and innovation.

While we all know the PC industry changed the world, the visionaries and creators who brought us into the information age faced uncertainty over what their efforts would yield. They risked everything to build new machines and to create shaky start-ups. Often they failed and lost all they had.

HCF has four main characters: Joe, a visionary and salesman; Cameron, an eccentric programming geek; Gordon, a misunderstood engineering genius; and Gordon’s wife, Donna, a brilliant but unappreciated housewife and engineer.

The show pits programmers, hardware engineers, investors, big businesses, corporate lawyers, venture capitalists, and competing start-ups against each other and, at times, shows them having to cooperate to overcome mutual problems. The result is innovation.

Lee Pace gives an award-worthy performance as Joe MacMillan. The son of a never-present IBM tycoon and a negligent, drug-addicted mother, Joe struggles with a host of mental and emotional problems. He’s a man with a brilliant mind and an amazing vision — but he has no computer knowledge or capabilities.

The series begins with his leaving a sales job at IBM in the hope of hijacking Cardiff Electric, a small Texas-based computer company, and launching it into the personal computing game.

As part of his scheme, he gets a low-level job at Cardiff where he recruits Gordon Clark, played by the equally talented Scoot McNairy. Enamored with Gordon’s prior writings on the potential for widespread personal computer use, Joe pleads with Gordon to reverse engineer an IBM-PC with him. The plot delves into the ethical ambiguities of intellectual property law as the two spend days reverse engineering the IBM BIOS.

While the show is fiction, it is inspired in part by the real-life events of Rod Canion, co-founder of Compaq. His book, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing, serves as a basis for many of the events in the show’s first season.

In 1981, when Canion and his cohorts set out to make a portable PC, the market was dominated by IBM. Because IBM had rushed their IBM-PC to market, the system was made up entirely of off-the-shelf components and other companies’ software.

As a result, it was possible to buy those same components and software and build what was known as an IBM “clone.” But these clones were only mostly compatible with IBM. While they could run DOS, they might or might not run other programs written for IBM-PCs.

Because IBM dominated the market, all the best software was being written for IBMs. Canion wanted to build a computer that was 100 percent IBM compatible but cheaper — and portable enough to move from desk to desk.

Canion said in an interview on the Internet History Podcast, “We didn’t want to copy their computer! We wanted to have access to the software that was written for their computer by other people.”

But in order to do that, he and his team had to reverse-engineer the IBM BIOS. They couldn’t just steal or copy the code because it was proprietary technology, but they could figure out what function the code executed and then write their own code to handle the same task.

Canion explains:

What our lawyers told us was that not only can you not use [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. … We had two software people. One guy read the code and generated the functional specifications.

So it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.
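
As a toy illustration of that two-person procedure (entirely hypothetical; this is not IBM’s or Compaq’s actual code or specification), the first engineer reduces the proprietary routine to a purely behavioral spec, and the second, who has never seen the original, implements it from the spec alone:

```python
# Toy sketch of the clean-room split; hypothetical, not Compaq's process or code.

# Stage 1: the engineer who read the original code writes a behavioral spec --
# what the routine does, never how the original source does it.
SPEC = """
checksum(data: bytes) -> int
Return the byte that, added to the sum of all bytes in `data`,
makes the low 8 bits of the total equal zero (a classic ROM checksum).
"""

# Stage 2: a second engineer, who has never seen the original code,
# writes a fresh implementation working only from the spec above.
def checksum(data: bytes) -> int:
    return (-sum(data)) & 0xFF

# The spec, not the original source, is the standard of correctness.
assert (sum(b"\x12\x34") + checksum(b"\x12\x34")) & 0xFF == 0
```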

In Halt and Catch Fire, Joe uses this idea to push Cardiff into making their own PC by intentionally leaking to IBM that he and Gordon had indeed reverse engineered the BIOS. They recruit a young punk-rock programmer named Cameron Howe to write the new BIOS.

While Gordon, Cameron, and Joe all believe that they are the central piece of the plan, the truth is that they all need each other. They also need to get the bosses and investors at Cardiff on their side in order to succeed, which is hard to do after infuriating them. The show demonstrates that for an enterprise to succeed you need to have cooperation between people of varying skill sets and knowledge bases — and between capital and labor.

The series is an exploration of the chaos and creative destruction that goes into the process of innovation. The beginning of the first episode explains the show’s title:

HALT AND CATCH FIRE (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained.

The show takes this theme of racing for superiority to several levels: the characters, the industry, and finally the economy and the world as a whole.

As Gordon himself declares of the cut-throat environment in which computer innovation occurs, “It’s capitalism at its finest!” HCF depicts Randian heroes: businessmen, entrepreneurs, and creators fighting against all odds in a race to change the world.

Now into its second season, the show is exploring the beginnings of the Internet, and Cameron is running her own start-up company, Mutiny. I could go on about the outstanding production quality, but the real novelty here is a show where capitalists, entrepreneurs, and titans of industry are regarded as heroic.

Halt and Catch Fire is a brilliant show, but it isn’t wildly popular. I fear it may soon be canceled, so be sure to check it out while it’s still around.


Keith Farrell

Keith Farrell is a freelance writer and political commentator.

Socialism Is War and War Is Socialism by Steven Horwitz

“[Economic] planning does not accidentally deteriorate into the militarization of the economy; it is the militarization of the economy.… When the story of the Left is seen in this light, the idea of economic planning begins to appear not only accidentally but inherently reactionary. The theory of planning was, from its inception, modeled after feudal and militaristic organizations. Elements of the Left tried to transform it into a radical program, to fit into a progressive revolutionary vision. But it doesn’t fit. Attempts to implement this theory invariably reveal its true nature. The practice of planning is nothing but the militarization of the economy.” — Don Lavoie, National Economic Planning: What Is Left?

Libertarians have long confounded our liberal and conservative friends by being both strongly in favor of free markets and strongly opposed to militarism and foreign intervention. In the conventional world of “right” and “left,” this combination makes no sense. Libertarians are often quick to point out the ways in which free trade, both within and across national borders, creates cooperative interdependencies among those who trade, thereby reducing the likelihood of war. The long classical liberal tradition is full of those who saw the connection between free trade and peace.

But there’s another side to the story, which is that socialism and economic planning have a long and close connection with war and militarization.

As Don Lavoie argues at length in his wonderful and underappreciated 1985 book National Economic Planning: What Is Left?, any attempt to substitute economic planning (whether comprehensive and central or piecemeal and decentralized) for markets inevitably ends up militarizing and regimenting the society. Lavoie points out that this outcome was not an accident. Much of the literature defending economic planning worked from a militaristic model. The “success” of economic planning associated with World War I provided early 20th century planners with a specific historical model from which to operate.

This connection should not surprise those who understand the idea of the market as a spontaneous order. As good economists from Adam Smith to F.A. Hayek and beyond have appreciated, markets are the products of human action but not human design. No one can consciously direct an economy. In fact, Hayek in particular argued that this is true not just of the economy, but of society in general: advanced commercial societies are spontaneous orders along many dimensions.

Market economies have no purpose of their own, or as Hayek put it, they are “ends-independent.” Markets are simply means by which people come together to pursue the various ends that each person or group has. You and I don’t have to agree on which goals are more or less important in order to participate in the market.

The same is true of other spontaneous orders. Consider language. We can both use English to construct sentences even if we wish to communicate different, or contradictory, things with the language.

One implication of seeing the economy as a spontaneous order is that it lacks a “collective purpose.” There is no single scale of values that guides us as a whole, and there is no process by which resources, including human resources, can be marshaled toward those collective purposes.

The absence of such a collective purpose or common scale of values is one factor that explains the connection between war and socialism. They share a desire to remake the spontaneous order of society into an organization with a single scale of values, or a specific purpose. In a war, the overarching goal of defeating the enemy obliterates the ends-independence of the market and requires that hierarchical control be exercised in order to direct resources toward the collective purpose of winning the war.

In socialism, the same holds true. To substitute economic planning for the market is to reorganize the economy to have a single set of ends that guides the planners as they allocate resources. Rather than being connected with each other by a shared set of means, as in private property, contracts, and market exchange, planning connects people by a shared set of ends. Inevitably, this will lead to hierarchy and militarization, because those ends require trying to force people to behave in ways that contribute to the ends’ realization. And as Hayek noted in The Road to Serfdom, it will also lead to government using propaganda to convince the public to share a set of values associated with some ends. We see this tactic in both war and socialism.

As Hayek also pointed out, this is an atavistic desire. It is a way for us to try to recapture the world of our evolutionary past, where we existed in small, homogeneous groups in which hierarchical organization with a common purpose was possible. Deep in our moral instincts is a desire to have the solidarity of a common purpose and to organize resources in a way that enables us to achieve it.

Socialism and war appeal to so many because they tap into an evolved desire to be part of a social order that looks like an extended family: the clan or tribe. Soldiers are not called “bands of brothers” and socialists don’t speak of “a brotherhood of man” by accident. Both groups use the same metaphor because it works. We are susceptible to it because most of our history as human beings was in bands of kin that were largely organized in this way.

Our desire for solidarity is also why calls for central planning on a smaller scale have often tried to claim their cause as the moral equivalent of war. This is true on both the left and right. We have had the War on Poverty, the War on Drugs, and the War on Terror, among others. And we are “fighting,” “combating,” and otherwise at war with our supposedly changing climate — not to mention those thought to be responsible for that change. The war metaphor is the siren song of those who would substitute hierarchy and militarism for decentralized power and peaceful interaction.

Both socialism and war are reactionary, not progressive. They are longings for an evolutionary past long gone, and one in which humans lived lives that were far worse than those we live today. Truly progressive thinking recognizes the limits of humanity’s ability to consciously construct and control the social world. It is humble in seeing how social norms, rules, and institutions that we did not consciously construct enable us to coordinate the actions of billions of anonymous actors in ways that enable them to create incredible complexity, prosperity, and peace.

The right and left do not realize that they are both making the same error. Libertarians understand that the shared processes of spontaneous orders like language and the market can enable all of us to achieve many of our individual desires without any of us dictating those values for others. By contrast, the right and left share a desire to impose their own sets of values on all of us and thereby fashion the world in their own images.

No wonder they don’t understand us.


Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debates” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

“Paid Family Leave” Is a Great Way to Hurt Women by Robert P. Murphy

In an article in the New Republic, Lauren Sandler argues that it’s about time the United States join the ranks of all other industrialized nations and provide legally guaranteed paid leave for pregnancy or illness.

Her arguments are similar to ones employed in the minimum wage debate. Opponents say that making particular workers more expensive will lead employers (on aggregate) to hire fewer of them. Supporters reject this tack as fearmongering, going so far as to claim such measures will boost profitability, and that only callous disregard for the disadvantaged can explain the opposition.

If paid leave (or higher pay for unskilled workers) helps workers and employers, then why do progressives need government power to force these great ideas on everyone?

The United States already has unpaid family leave, with the Family and Medical Leave Act (FMLA) signed into law by President Clinton in 1993. This legislation “entitles eligible employees … to take unpaid, job-protected leave for specified family and medical reasons with continuation of group health insurance coverage under the same terms and conditions as if the employee had not taken leave.” Specifically, the FMLA grants covered employees 12 workweeks of such protection in a 12-month period, to deal with a pregnancy, personal sickness, or the care of an immediate family member. (There is a provision for 26 workweeks if the injured family member is in the military.)

But “workers’ rights” advocates want to move beyond the FMLA and win legally guaranteed paid leave for such absences. Currently, California, New Jersey, and Rhode Island have such policies.

The basic libertarian argument against such legislation is simple enough: no worker has a right to any particular job, just as no employer has the right to compel a person to work for him or her. In a genuine market economy based on private property and consensual relations, employers and workers are legally treated as responsible adults to work out mutually beneficial arrangements. If it’s important to many women workers that they won’t forfeit their jobs in the event of a pregnancy, then in a free and wealthy society, many firms will provide such clauses in the employment contract in order to attract qualified applicants.

For example, if a 23-year-old woman with a fresh MBA is applying to several firms for a career in the financial sector, but she has a serious boyfriend and thinks they might one day start a family, then — other things equal — she is going to highly value a clause in the employment contract that guarantees she won’t lose her job if she takes off time to have a baby. Since female employment in the traditional workforce is now so prevalent, we can expect many employers to have such provisions in their employment contracts in order to attract qualified applicants. Women don’t have a right to such clauses, just as male hedge-fund VPs don’t have a right to year-end bonuses, but it’s standard for employment contracts to have such features.

Leaving aside philosophical and ethical considerations, let’s consider basic economics and the consequences of pregnancy- and illness-leave legislation. It is undeniable that providing even unpaid, let alone paid, leave is a constraint on employers. Other things equal, an employer does not want an employee to suddenly not show up for work for months at a time, and then expect to come back as if nothing had happened. The employer has to scramble to deal with the absence in the meantime, and furthermore doesn’t want to pour too much training into a temporary employee because the original one is legally guaranteed her (or his) old job. If the employer also has to pay out thousands of dollars to an employee who is not showing up for work, it is obviously an extra burden.

As always with such topics, the easiest way to see the trade-off is to exaggerate the proposed measure. Suppose instead of merely guaranteeing a few months of paid maternity leave, instead the state enforced a rule that said, “Any female employee who becomes pregnant can take off up to 15 years, earning half of her salary, in order to deliver and homeschool the new child.” If that were the rule, then young female employees would be ticking time bombs, and potential employers would come up with all sorts of tricks to deny hiring them or to pay them very low salaries compared to their ostensible on-the-job productivity.

Now, just because guaranteed leave, whether paid or unpaid, is an expensive constraint for employers, that doesn’t mean such policies (in moderation) are necessarily bad business practices, so long as they are adopted voluntarily. To repeat, it is entirely possible that in a genuinely free market economy, many employers would voluntarily provide such policies in order to attract the most productive workers. After all, employers allow their employees to take bathroom breaks, eat lunch, and go on vacation, even though the employees aren’t generating revenue for the firm when doing so.

However, if the state must force employers to enact such policies, then we can be pretty sure they don’t make economic sense for the firms in question. In her article, Sandler addresses this fear by writing, in reference to New Jersey’s paid leave legislation,

After then-Governor Jon Corzine signed the bill, Chris Christie promised to overturn it during his campaign against Corzine. But Christie never followed through. The reason why is quite plain: As with California, most everyone loves paid leave. A recent study from the CEPR found that businesses, many of which strenuously opposed the policy, now believe paid leave has improved productivity and employee retention, decreasing turnover costs. (emphasis added)

Well, that’s fantastic! Rather than engaging in divisive political battles, why doesn’t Sandler simply email that CEPR (Center for Economic and Policy Research) study to every employer in the 47 states that currently lack paid leave legislation? Once they see that they are flushing money down the toilet right now with high turnover costs, they will join the ranks of the truly civilized nations and offer paid leave.

The quotation from Sandler is quite telling. Certain arguments for progressive legislation rely on “externalities,” where the profit-and-loss incentives facing individual consumers or firms do not yield the “socially optimal” behavior. On this issue of family leave, the progressive argument is much weaker. Sandler and other supporters must maintain that they know better than the owners of thousands of firms how to structure their employment contracts in order to boost productivity and employee retention. What are the chances of that?

In reality, given our current level of wealth and the configuration of our labor force, it makes sense for some firms to have generous “family leave” clauses for some employees, but it is not necessarily a sensible approach in all cases. The way a free society deals with such nuanced situations is to allow employers and employees to reach mutually beneficial agreements. If the state mandates an approach that makes employment more generous to women in certain dimensions — since they are the prime beneficiaries of pregnancy leave, even if men can ostensibly use it, too — then we can expect employers to reduce the attractiveness of employment contracts offered to women in other dimensions. There is no such thing as a free lunch. Mandating paid leave will reduce hiring opportunities and base pay, especially for women. If this trade-off is something the vast majority of employees want, then that’s the outcome a free labor market would have provided without a state mandate.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

Labor Unions Create Unemployment: It’s a Feature, Not a Bug by Sarah Skwire

Did the labor unions goof, or did they get exactly what they want?

Los Angeles has approved a minimum wage hike to $15 an hour. Some of the biggest supporters of that increase were the labor unions. But now that the increase has been approved, the unions are fighting to exempt union labor from that wage hike.

Over at Anything Peaceful, Dan Bier has nicely explained why the unions would do something that seems, at first glance, so nonsensical. But what I want to point out is that such hijinks are not an invention of 21st-century organized labor. This is pretty much what labor was organized to do. It’s a feature, not a bug.

Part of the early reasoning for the minimum wage — which originated as a “family wage” or “living wage” — was its intent to allow a worker to “keep his wife and children out of competition with himself” and presumably to keep all other women out of the workforce as well.

Similarly, the labor movement, from the very beginning, meant to protect organized white male labor from competition against black labor, immigrant labor, female labor, and nonunion labor. There are subtleties to this generalization, of course, and labor historian Ruth Milkman identifies four historical waves of the labor movement that have differing commitments (and a lack thereof) to a more diverse vision of labor rights. But unions — like so many other institutions — work on the “get up and bar the door” principle. Get up as high as you can, and then bar the door behind you against any further entrants who might cut into the goodies you have grabbed for yourself.

Labor union expert Charles Baird notes,

Unions depend on capture. They try to capture employers by cutting them off from alternative sources of labor; they try to capture workers by eliminating union-free employment alternatives; and they try to capture customers by eliminating union-free producers. Successful capture generates monopoly gains for unions.

Protection is the name of the game.

Unsurprisingly, the unions made sure to be involved when, about 50 years before the 1970s push for an equal rights amendment, there was another push for an ERA in the United States. Written by suffragist leader Alice Paul, the amendment was an attempt to leverage the newly recognized voting power of women into a policy guaranteeing that “men and women shall have equal rights throughout the United States and every place under its jurisdiction.” This amendment would have prevented various gender-based inequities that the courts supported at the time — like hugely different hourly wages for male and female workers, limits on the number of hours women could work, limits on when women could work (night shifts were seen as particularly dangerous for women’s health and welfare), and limits on the kinds of work women could do.

Reporting on the debates over the ERA in 1924, Doris Stevens noted three main objections to the amendment:

First, there was the familiar plea for gradual, rather than sweeping change.

Second, there were concerns over lost pensions for widows and mothers.

And in Stevens’s words,

The final objection says: Grant political, social, and civil equality to women, but do not give equality to women in industry.… Here lies the heart of the whole controversy. It is not astonishing, but very intelligent indeed, that the battle should center on the point of woman’s right to sell her labor on the same terms as man. For unless she is able equally to compete, to earn, to control, and to invest her money, unless in short woman’s economic position is made more secure, certainly she cannot establish equality in fact. She will have won merely the shadow of power without essential and authentic substance.

Suffragist Rheta Childe Dorr (in Good Housekeeping, of all places. How the mighty have fallen!) pointed out again the logic behind labor’s opposition to the equal rights amendment:

The labor unions are most opposed to this law, for few unions want women to advance in skilled trades. The Women’s Trade Union League, controlled and to a large extent supported by the men’s unions, opposes it. Of course, the welfare organizations oppose it, for it frees women wage earners from the police power of the old laws. But I pray that public opinion, especially that of the club women, will support it. It’s the first law yet proposed that gives working women a man’s chance industrially. “No men’s labor unions, no leisure class women, no uniformed legislators have a right to govern our lives without our consent,” the women declare, and I think they are dead right about it.

Organized labor — founded to ensure the collective right to contract — refused to stand up for the right of individual women to contract. From their point of view, it was only sensible. And, perhaps most importantly, women in organized labor refused to stand up for the women outside the unions.

Organized male and female labor’s fight against the ERA was at least as much about protectionism as it was about sexism. Maybe more. Women’s rights and union activist Ethel M. Smith attended the ERA debates to report on them for the Life and Labor Bulletin and found that union workers did not even attempt to gloss over their protectionist agenda:

Miss Mary Goff of the International Ladies’ Garment Workers Union, emphasized the seriousness of the effect upon organized establishments were legal restrictions upon hours of labor removed from the unorganized. “The organized women workers,” she said, “need the labor laws to protect them from the competition of the unorganized. Where my union, for instance, may have secured for me a 44-hour week, how long could they maintain it if there were unlimited hours for other workers? Unfortunately, there are hundreds of thousands of unorganized working women in New York who would undoubtedly be working 10 hours a day but for the 9-hour law of New York.”

So labor unions excluded women as long as they could, then let in a privileged few and barred the doors behind them. And they continue to use the same tactics today in LA and elsewhere.

How long can they keep it up?


Sarah Skwire

Sarah Skwire is a senior fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Who Should Choose? Patients and Doctors or the FDA? by Doug Bandow

Good ideas in Congress rarely have a chance. Rep. Fred Upton (R-Mich.) is sponsoring legislation to speed drug approvals, but his initial plan was largely gutted before he introduced it last month.

Congress created the Food and Drug Administration in 1906, long before prescription drugs became such an important medical treatment. The agency grew into an omnibus regulator, controlling everything from food to cosmetics to vitamins to pharmaceuticals. Birth defects caused by the drug thalidomide led to the 1962 Kefauver-Harris Amendments, which vastly expanded the FDA’s powers. The new controls did little to improve patient safety but dramatically slowed pharmaceutical approvals.

Those who benefit the most from drugs often complain about the cost, since pills aren’t expensive to make. However, drug discovery is an uncertain process. Companies consider between 5,000 and 10,000 substances for every one that ends up in the pharmacy. Of the drugs that do reach the pharmacy, only about one-fifth make money, and those winners must pay for the entire development, testing, and marketing process.

As a result, the average cost per drug exceeds $1 billion and is most often put between $1.2 billion and $1.5 billion. Some estimates run higher.
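A quick back-of-the-envelope calculation shows why those odds are so punishing. The Python sketch below uses only the candidate counts and the one-in-five profitability figure quoted above; it is an illustration of the arithmetic, not a cost model.

  # Rough arithmetic behind the figures above; illustrative only.
  candidates_per_approval = (5_000, 10_000)  # substances screened per marketed drug
  profitable_share = 1 / 5                   # share of marketed drugs that make money

  for n in candidates_per_approval:
      # Each profitable drug stands behind n / profitable_share screened substances.
      print(f"{n:,} candidates per approval -> {int(n / profitable_share):,} per profitable drug")

On those assumptions, every money-making drug carries 25,000 to 50,000 dead-end candidates behind it, which is why the lone winner must recoup a development bill that runs into the billions.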

Naturally, the FDA insists that its expensive regulations are worth it. While the agency undoubtedly prevents some bad pharmaceuticals from getting to market, it delays or blocks far more good products.

Unfortunately, the political process encourages the agency to kill with kindness. Let through a drug that causes the slightest problem, and you can expect television special reports, awful newspaper headlines, and congressional hearings. Stop a good drug, and virtually no one notices.

It took the onset of AIDS, then a death sentence, to force the FDA to speed up its glacial approval process. No one has generated equivalent pressure since. Admitted Richard Merrill, the agency’s former chief counsel:  “No FDA official has ever been publicly criticized for refusing to allow the marketing of a drug.”

After the passage of Kefauver-Harris, the average delay in winning approval of a new drug rose from seven months to 30 by 1967. The approval process now is estimated to take as long as 20 years.

While economist Sam Peltzman figured that the number of new drugs approved dropped by half after Kefauver-Harris, there was no equivalent fall in the introduction of ineffective or unsafe pharmaceuticals. All Congress managed to do was strain out potentially life-saving products.

After all, a company won’t make money selling a medicine that doesn’t work. And putting out something dangerous is a fiscal disaster. Observed Peltzman:  the “penalties imposed by the marketplace on sellers of ineffective drugs prior to 1962 seem to have been enough of a deterrent to have left little room for improvement by a regulatory agency.”

Alas, the FDA increases the cost of all medicines, delays the introduction of most pharmaceuticals, and prevents some from reaching the market. That means patients suffer and even die needlessly.

The bureaucracy’s unduly restrictive approach plays out in other bizarre ways. Once a drug is approved, doctors may prescribe it for any purpose, but companies often refuse to go through the entire process again to win official okay for another use. Thus, it is common for AIDS, cancer, and pediatric patients to receive off-label prescriptions. However, companies cannot advertise these safe, effective, beneficial uses.

Congress has applied a few bandages over the years. One was to create a process of user fees through the Prescription Drug User Fee Act. Four economists, Tomas Philipson, Ernst Berndt, Adrian Gottschalk, and Matthew Strobeck, figured that drugmakers gained between $11 billion and $13 billion and consumers between $5 billion and $19 billion. Total life years saved ranged between 180,000 and 310,000. But lives continue to be lost because the approval process has not been accelerated further.

Criticism and pressure did lead to creation of a special FDA procedure for “Accelerated Approval” of drugs aimed at life-threatening conditions. This change, too, remains inadequate. Nature Biotechnology noted that few medicines qualified and “in recent years, FDA has been ratcheting up the requirements.”

The gravely ill seek “compassionate access” to experimental drugs. Some patients head overseas, where unapproved treatments are available. The Wall Street Journal reported on those suffering from Lou Gehrig’s disease who, “frustrated by the slow pace of clinical drug trials or unable to qualify, are trying to brew their own version of an experimental compound at home and testing it on themselves.”

Overall, far more people die from no drugs than from bad drugs. Most pharmaceutical problems involve doctors misprescribing or patients misusing medicines. The deadliest pre-1962 episode involved Elixir Sulfanilamide and killed 107 people. (Thalidomide caused some 10,000 birth defects, but no deaths.) Around 3,500 users died from isoproterenol, an asthma inhaler. Vioxx was blamed for a similar number of deaths, though the claim was disputed. Most of the more recent incidents would not have been prevented by a stricter approval process.

The death toll from agency delays is much greater. Drug analyst Dale Gieringer explained:  “The benefits of FDA regulation relative to that in foreign countries could reasonably be put at some 5,000 casualties per decade or 10,000 per decade for worst-case scenarios.  In comparison … the cost of FDA delay can be estimated at anywhere from 21,000 to 120,000 lives per decade.”

According to the Competitive Enterprise Institute, among the important medicines delayed were ancrod, beta-blockers, citicoline, ethyol, femara, glucophage, interleukin-2, navelbine, lamictal, omnicath, panorex, photofrin, prostar, rilutek, taxotere, transform, and vasoseal.

Fundamental reform is necessary. The FDA should be limited to assessing safety, with the judgment as to efficacy left to the marketplace. Moreover, the agency should be stripped of its approval monopoly. As a start, drugs approved by other industrialized states should be available in America.

The FDA’s opinion also should be made advisory. Patients and their health care providers could look to private certification organizations, which today are involved in everything from building codes to electrical products to kosher food. Medical organizations already maintain pharmaceutical databases and set standards for treatments with drugs. They could move into drug testing and assessment.

No doubt, some people would make mistakes. But they do so today. With more options more people’s needs would be better met. Often there is no single correct treatment decision. Ultimately the patient’s preference should control.

Congress is arguing over regulatory minutiae when it should be debating the much more basic question: Who should decide who gets treated how? Today the answer is Uncle Sam. Tomorrow the answer should be all of us.

Doug Bandow

Doug Bandow is a senior fellow at the Cato Institute and the author of a number of books on economics and politics. He writes regularly on military non-interventionism.

Capitalism Defused the Population Bomb by Chelsea German

Journalists know that alarmism attracts readers. An article in the British newspaper the Independent titled “Have we reached ‘peak food’? Shortages loom as global production rates slow” claimed that humanity will soon face mass starvation.

Just as Paul Ehrlich’s 1968 bestseller The Population Bomb predicted that millions would die due to food shortages in the 1970s and 1980s, the 2015 article tries to capture readers’ interest through unfounded fear. Let’s take a look at the actual state of global food production.

The alarmists cite statistics showing that while we continue to produce more and more food every year, the rate of growth is slowing slightly. The article then presumes that if the growth of food production slows, widespread starvation is inevitable.

This is misleading. Let us take a look at the global trend in net food production, per person, measured in 2004-2006 international dollars. Even taking population growth into account, food production per person is increasing.
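The distinction between a slowing growth rate and a falling level is easy to check with a toy calculation. The growth rates below are invented for illustration; they are not the actual series behind the production data.

  # Illustrative only: made-up growth rates, not actual production data.
  food_growth = [0.025, 0.020, 0.015]  # annual food output growth, slowing each decade
  pop_growth = 0.011                   # annual population growth, held constant

  food, pop = 1.0, 1.0
  for decade, g in enumerate(food_growth, start=1):
      food *= (1 + g) ** 10
      pop *= (1 + pop_growth) ** 10
      print(f"decade {decade}: food per person index = {food / pop:.2f}")

Output growth “decelerates” from 2.5 to 1.5 percent a year, yet food per person keeps climbing, because what matters is whether production growth outpaces population growth, not whether it is speeding up.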

Food is becoming cheaper, too. As K.O. Fuglie and S. L. Wang showed in their 2012 article “New Evidence Points to Robust but Uneven Productivity Growth in Global Agriculture,” food prices have been declining for over a century, in spite of a recent uptick.

In fact, people are better nourished today than they ever have been, even in poor countries. Consider how caloric consumption in India increased despite population growth.

Given that food is more plentiful than ever, what perpetuates the mistaken idea that mass hunger is looming? The failure to realize that human innovation, through advancing technology and the free market, will continue to rise to meet the challenges of growing food demand.

In the words of HumanProgress.org Advisory Board member Matt Ridley, “If 6.7 billion people continue to keep specializing and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us.”

This idea first appeared at Cato.org.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings to ordinary people — and genuine benefits to policyholders. Such plans would encourage thrift and consumer wisdom in health care planning, while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade associated health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Paul Krugman: Three Wrongs Don’t Make a Right by Robert P. Murphy

One of the running themes throughout Paul Krugman’s public commentary since 2009 is that his Keynesian model — specifically, the old IS-LM framework — has done “spectacularly well” in predicting the major trends in the economy. Krugman actually claimed at one point that he and his allies had been “right about everything.” In contrast, Krugman claims, his opponents have been “wrong about everything.”

As I’ll show, Krugman’s macro predictions have been wrong in three key areas. So, by his own criterion of academic truth, Krugman’s framework has been a failure, and he should consider it a shame that people still seek out his opinion.

Modeling interest rates: the zero lower bound

Krugman’s entire case for fiscal stimulus rests on the premise that central banks can get stuck in a “liquidity trap” when interest rates hit the “zero lower bound” (ZLB). As long as nominal interest rates are positive, Krugman argued, the central bank could always stimulate more spending by loosening monetary policy and cutting rates further. These actions would boost aggregate demand and help restore full employment. In such a situation, there was no case for Keynesian deficit spending as a means to create jobs.

However, Krugman said that this conventional monetary policy lost traction early in the Great Recession once nominal short-term rates hit (basically) 0 percent. At that point, central banks couldn’t stimulate demand through open-market operations, and thus the government had to step in with a large fiscal stimulus in the form of huge budget deficits.

As is par for the course, Krugman didn’t express his views with civility or humility. No, Krugman wrote things like this in response to Gary Becker:

Urp. Gack. Glug. If even Nobel laureates misunderstand the issue this badly, what hope is there for the general public? It’s not about the size of the multiplier; it’s about the zero lower bound….

And the reason we’re all turning to fiscal policy is that the standard rule, which is that monetary policy plus automatic stabilizers should do the work of smoothing the business cycle, can’t be applied when we’re hard up against the zero lower bound.

I really don’t know why this is so hard to understand. (emphasis added)

But then, in 2015, things changed: various bonds in Europe began exhibiting negative nominal yields. Here’s how liberal writer Matt Yglesias — no right-wing ideologue — described this development in late February:

Indeed, the interest rate situation in Europe is so strange that until quite recently, it was thought to be entirely impossible. There was a lot of economic theory built around the problem of the Zero Lower Bound — the impossibility of sustained negative interest rates…. Paul Krugman wrote a lot of columns about it. One of them said “the zero lower bound isn’t a theory, it’s a fact, and it’s a fact that we’ve been facing for five years now.”

And yet it seems the impossible has happened. (emphasis added)

Now this is quite astonishing, the macroeconomic analog of physicists accelerating particles beyond the speed of light. If it turns out that the central banks of the world had more “ammunition” in terms of conventional monetary policy, then even on its own terms, the case for Keynesian fiscal stimulus becomes weaker.

So what happened with this revelation? Once he realized he had been wrong to declare so confidently that 0 percent was a lower bound on rates, did Krugman come out and profusely apologize for putting so much of his efforts into pushing fiscal stimulus rather than further rate cuts, since the former were a harder sell politically?

Of course not. This is how Krugman first dealt with the subject in early March when it became apparent that the “ZLB” was a misnomer:

We now know that interest rates can, in fact, go negative; those of us who dismissed the possibility by saying that people could simply hold currency were clearly too casual about it. But how low?

Then, after running through other people’s estimates, Krugman wrapped up his post by saying, “And I am pinching myself at the realization that this seemingly whimsical and arcane discussion is turning out to have real policy significance.”

Isn’t that cute? The foundation for the Keynesian case for fiscal stimulus rests on an assumption that interest rates can’t go negative. Then they do go negative, and Krugman is pinching himself that he gets to live in such exciting times. I wonder, is that the reaction Krugman wanted from conservative economists when interest rates failed to spike despite massive deficits — namely, that they would just pinch themselves to see that their wrong statements about interest rates were actually relevant to policy?

I realize some readers may think I’m nitpicking here, because (thus far) it seems that maybe central banks can push interest rates only 50 basis points or so beneath the zero bound. Yet, in practice, that result would still be quite significant, if we are operating in the Keynesian framework. It’s hard to come up with a precise estimate, but using the Taylor Principle in reverse, and then invoking Okun’s Law, a typical Keynesian might agree that the Fed pushing rates down to –0.5 percent, rather than stopping at 0 percent, would have reduced unemployment during the height of the recession by 0.5 percentage points.

That might not sound like a lot, but it corresponds to about 780,000 workers. For some perspective, in February 2013, Krugman estimated that the budget sequester would cost about 700,000 jobs, and classified it as a “fiscal doomsday machine” and “one of the worst policy ideas in our nation’s history.” So if my estimate is in the right ballpark, then on his own terms, Krugman should admit that his blunder — in thinking the Fed couldn’t push nominal interest rates below 0 percent — is one of the worst mistakes by an economist in US history. If he believes his own model and rhetoric, Krugman should be doing a lot more than pinching himself.
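For readers who want to check the 780,000 figure, here is the chain of arithmetic in Python. The coefficients are stock textbook values (0.5 on the output gap in the Taylor rule, an Okun coefficient of 2), and the labor force number is roughly the US civilian labor force at the time; all three are assumptions for illustration, not estimates.

  # Back-of-the-envelope version of the estimate in the text; all inputs assumed.
  extra_rate_cut = 0.5   # percentage points: cutting to -0.5% instead of stopping at 0%
  taylor_coeff = 0.5     # Taylor rule: pp of policy rate per pp of output gap
  okun_coeff = 2.0       # Okun's law: pp of output gap per pp of unemployment
  labor_force = 156e6    # assumed US civilian labor force, circa 2009

  output_gap_boost = extra_rate_cut / taylor_coeff   # ~1 pp more output
  unemployment_drop = output_gap_boost / okun_coeff  # ~0.5 pp less unemployment
  print(f"{unemployment_drop:.1f} pp of unemployment ~ {labor_force * unemployment_drop / 100:,.0f} workers")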

Modeling growth: fiscal stimulus and budget austerity

Talk of the so-called “sequester” leads into the next sorry episode in Krugman’s track record: he totally botched his forecasts of US economic growth (and employment) after the turn to (relative) US fiscal restraint. Specifically, in April 2013, Krugman threw down the gauntlet, arguing that we were being treated to a test between the Keynesian emphasis on fiscal policy and the market monetarist emphasis on monetary policy. Guys like Mercatus Center monetary economist Scott Sumner had been arguing that the Fed could offset Congress’s spending cuts, while Krugman — since he was still locked into the “zero lower bound” and “liquidity trap” mentality — said that this was wishful thinking. That’s why Krugman had labeled the sequester a “fiscal doomsday machine,” after all.

As it turned out, the rest of 2013 delivered much better economic news than Krugman had been expecting. Naturally, the market monetarists were running victory laps by the end of the year. Then, in a move that would embarrass anybody else, in January 2014 Krugman had the audacity to wag his finger at Sumner for thinking that the previous year’s economy was somehow a test of Keynesian fiscal stimulus versus market monetarist monetary stimulus. Yes, you read that right: back in April 2013 when the economy was doing poorly, Krugman said 2013 would be a good test of the two viewpoints. Then, when he failed the test he himself had set up, Krugman complained that it obviously wasn’t a fair test, because all sorts of other things can occur to offset the theoretical impacts. (I found the episode so inspiring that I wrote a play about it.)

Things became even more comical by the end of 2014, when it was clear that the US economy — at least according to conventional metrics like the official unemployment rate and GDP growth — was doing much better than Krugman’s doomsday rhetoric would have anticipated. At this point, rather than acknowledging how wrong his warnings about US “austerity” had been, Krugman inconceivably tried to claim victory — by arguing that all of the conservative Republican warnings about Obamacare had been wrong.

This rhetorical move was so shameless that not just anti-Keynesians like Sumner but even progressives had to cry foul. Specifically, Jeffrey Sachs wrote a scathing article showcasing Krugman’s revisionism:

For several years…Paul Krugman has delivered one main message to his loyal readers: deficit-cutting “austerians” (as he calls advocates of fiscal austerity) are deluded. Fiscal retrenchment amid weak private demand would lead to chronically high unemployment. Indeed, deficit cuts would court a reprise of 1937, when Franklin D. Roosevelt prematurely reduced the New Deal stimulus and thereby threw the United States back into recession.

Well, Congress and the White House did indeed play the austerian card from mid-2011 onward. The federal budget deficit has declined from 8.4% of GDP in 2011 to a predicted 2.9% of GDP for all of 2014.…

Krugman has vigorously protested that deficit reduction has prolonged and even intensified what he repeatedly calls a “depression” (or sometimes a “low-grade depression”). Only fools like the United Kingdom’s leaders (who reminded him of the Three Stooges) could believe otherwise.

Yet, rather than a new recession, or an ongoing depression, the US unemployment rate has fallen from 8.6% in November 2011 to 5.8% in November 2014. Real economic growth in 2011 stood at 1.6%, and the IMF expects it to be 2.2% for 2014 as a whole. GDP in the third quarter of 2014 grew at a vigorous 5% annual rate, suggesting that aggregate growth for all of 2015 will be above 3%.

So much for Krugman’s predictions. Not one of his New York Times commentaries in the first half of 2013, when “austerian” deficit cutting was taking effect, forecast a major reduction in unemployment or that economic growth would recover to brisk rates. On the contrary, “the disastrous turn toward austerity has destroyed millions of jobs and ruined many lives,” he argued, with the US Congress exposing Americans to “the imminent threat of severe economic damage from short-term spending cuts.” As a result, “Full recovery still looks a very long way off,” he warned. “And I’m beginning to worry that it may never happen.”

I raise all of this because Krugman took a victory lap in his end-of-2014 column on “The Obama Recovery.” The recovery, according to Krugman, has come not despite the austerity he railed against for years, but because we “seem to have stopped tightening the screws….”

That is an incredible claim. The budget deficit has been brought down sharply, and unemployment has declined. Yet Krugman now says that everything has turned out just as he predicted. (emphasis added)

In the face of such withering and irrefutable criticism, Krugman retreated to the position that his wonderful model had been vindicated by the bulk of the sample, with scatterplots of European countries and their respective fiscal stance and growth rates. He went so far as to say that Sachs “really should know better” than to have expected Krugman’s predictions about austerity to actually hold for any given country (such as the United States).

Besides the audacity of downplaying the confidence with which he had warned of the “fiscal doomsday machine” that would strike the United States, Krugman’s response to Sachs also drips with hypocrisy. Krugman has been merciless in pointing to specific economists (including yours truly) who were wrong in their predictions about consumer price inflation in the United States. When we botched a specific call about the US economy for a specific time period, that was enough in Krugman’s book for us to quit our day jobs and start delivering pizza. There was no question that getting things wrong about one specific country was enough to discredit our model of the economy. The fact that guys like me clung to our policy views after being wrong about our predictions on the United States showed that not only were we bad economists, but we were evil (and possibly racist), too.

Modeling consumer price inflation

I’ve saved the best for last. The casual reader of Krugman’s columns would think that the one area where he surely wiped the floor with his foes was on predictions of consumer price inflation. After all, plenty of anti-Keynesians like me predicted that the consumer price index (among other prices) would rise rapidly, and we were wrong. So Krugman’s model did great on this criterion, right?

Actually, no, it didn’t; his model was totally wrong as well. You see, coming into the Great Recession, Krugman’s framework of “the inflation-adjusted Phillips curve predict[ed] not just deflation, but accelerating deflation in the face of a really prolonged economic slump” (emphasis Krugman’s). And it wasn’t merely the academic model predicting (price) deflation; Krugman himself warned in February 2010 that the United States could experience price deflation in the near future. He ended with, “Japan, here we come” — a reference to that country’s long bout with actual consumer price deflation.

Well, that’s not what happened. About seven months after he warned of continuing price disinflation and the possibility of outright deflation, Krugman’s preferred measures of CPI turned around sharply, more than doubling in a short period, returning almost to pre-recession levels.

Conclusion

Krugman, armed with his Keynesian model, came into the Great Recession thinking that (a) nominal interest rates can’t go below 0 percent, (b) total government spending reductions in the United States amid a weak recovery would lead to a double dip, and (c) persistently high unemployment would go hand in hand with accelerating price deflation. Because of these macroeconomic views, Krugman recommended aggressive federal deficit spending.

As things turned out, Krugman was wrong on each of the above points: we learned (and this surprised me, too) that nominal rates could go persistently negative, that the US budget “austerity” from 2011 onward coincided with a strengthening recovery, and that consumer prices rose modestly even as unemployment remained high. Krugman was wrong on all of these points, and yet his policy recommendations didn’t budge an iota over the years.

Far from changing his policy conclusions in light of his model’s botched predictions, Krugman kept running victory laps, claiming his model had been “right about everything.” He further speculated that the only explanation for his opponents’ unwillingness to concede defeat was that they were evil or stupid.

What a guy. What a scientist.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

Reich Is Wrong on the Minimum Wage by Donald Boudreaux

Watching Robert Reich’s new video in which he endorses raising the minimum wage by $7.75 per hour – to $15 per hour – is painful. It hurts to encounter such rapid-fire economic ignorance, even if the barrage lasts for only two minutes.

Perhaps the most remarkable flaw in this video is Reich’s manner of addressing the bedrock economic objection to the minimum wage – namely, that the minimum wage prices some low-skilled workers out of jobs.

Ignoring supply-and-demand analysis (which depicts the correct common-sense understanding that the higher the minimum wage, the lower the quantity of unskilled workers that firms can profitably employ), Reich asserts that a higher minimum wage enables workers to spend more money on consumer goods, which, in turn, prompts employers to hire more workers.

Reich apparently believes that his ability to describe and draw such a “virtuous circle” of increased spending and hiring is reason enough to dismiss the concerns of “scare-mongers” (his term) who worry that raising the price of unskilled labor makes such labor less attractive to employers.

Ignore (as Reich does) that any additional amounts paid in total to workers mean lower profits for firms or higher prices paid by consumers – and, thus, less spending elsewhere in the economy by people other than the higher-paid workers.

Ignore (as Reich does) the extraordinarily low probability that workers who are paid a higher minimum wage will spend all of their additional earnings on goods and services produced by minimum-wage workers.

Ignore (as Reich does) the impossibility of making people richer simply by having them circulate amongst themselves a larger quantity of money.

(If Reich is correct that raising the minimum wage by $7.75 per hour will do nothing but enrich all low-wage workers to the tune of $7.75 per hour because workers will spend all of their additional earnings in ways that make it profitable for their employers to pay them an additional $7.75 per hour, then it can legitimately be asked: Why not raise the minimum wage to $150 per hour? If higher minimum wages are fully returned to employers in the form of higher spending by workers as Reich theorizes, then there is no obvious limit to the amount by which government can hike the minimum wage before risking an increase in unemployment.)

Focus instead on Reich’s apparently complete ignorance of the important concept of the elasticity of demand for labor. This concept refers to the responsiveness of employers to changes in wage rates. It’s true that if employers’ demand for unskilled workers is “inelastic,” then a higher minimum wage would indeed put more money into the pockets of unskilled workers as a group. The increased pay of workers who keep their jobs more than offsets the lower pay of workers who lose their jobs. Workers as a group could then spend more in total.

But if employers’ demand for unskilled workers is “elastic,” then raising the minimum wage reduces, rather than increases, the amount of money in the pockets of unskilled workers as a group. When the demand for labor is elastic, the higher pay of those workers fortunate enough to keep their jobs is more than offset by the lower pay of workers who lose their jobs. So total spending by minimum-wage workers would likely fall, not rise.
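A stylized calculation makes the dividing line concrete. In the sketch below, the wage move is the one Reich proposes; the two elasticities and the workforce size are invented purely to illustrate the cases, not estimates of the actual labor market.

  # Illustrative only: elasticities and workforce size are assumed.
  w0, w1 = 7.25, 15.00          # Reich's proposed hike, roughly a 107% increase
  L0 = 1_000_000                # hypothetical number of minimum-wage workers

  for label, elasticity in [("inelastic demand (-0.5)", -0.5), ("elastic demand (-1.5)", -1.5)]:
      # Constant-elasticity demand: employment scales with (w1/w0) ** elasticity.
      L1 = L0 * (w1 / w0) ** elasticity
      bill_change = (w1 * L1) / (w0 * L0) - 1
      print(f"{label}: employment {L1 / L0 - 1:+.0%}, group's total pay {bill_change:+.0%}")

On these assumed numbers, the inelastic case raises the group’s total pay by roughly 44 percent despite the job losses, while the elastic case cuts it by roughly 30 percent, taking down with it the extra spending on which Reich’s “virtuous circle” depends.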

By completely ignoring elasticity, Reich assumes his conclusion. That is, he simply assumes that raising the minimum wage raises the total pay of unskilled workers (and, thereby, raises the total spending of such workers).

Yet whether or not raising the minimum wage has this effect is among the core issues in the debate over the merits of minimum-wage legislation. Even if (contrary to fact) increased spending by unskilled workers were sufficient to bootstrap up the employment of such workers, raising the minimum wage might well reduce the total amount of money paid to unskilled workers and, thus, lower their spending.

So is employers’ demand for unskilled workers more likely to be elastic or inelastic? The answer depends on how much the minimum wage is raised. If it were raised by, say, only five percent, demand might be inelastic, causing relatively few workers to lose their jobs and, thus, the total take-home pay of unskilled workers as a group to rise.

But Reich calls for an increase in the minimum wage of 107 percent! It’s impossible to believe that more than doubling the minimum wage would not cause a huge negative response by employers.

Such an assumption – if it described reality – would mean that unskilled workers are today so underpaid (relative to their productivity) that their employers are reaping gigantic windfall profits off of such workers.

But the fact that we see increasing automation of low-skilled tasks, as well as continuing high rates of unemployment of teenagers and other unskilled workers, is solid evidence that the typical low-wage worker is not such a bountiful source of profit for his or her employer.

Reich’s video is infected, from start to finish, with too many other errors to count.  I hope that other sensible people will take the time to expose them all.

Donald Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

EDITOR’S NOTE: Here’s how Reich cherry-picked his data to claim that the minimum wage is “historically low” right now; here’s why Reich is wrong about wages “decoupling” from productivity; here’s why Reich is wrong about welfare “subsidizing” low-wage employers; here’s why Reich is wrong that Walmart raising wages proves that the minimum wage “works”; Reich is wrong (again) about who makes minimum wage; and here’s a collection of recent news about the damage minimum wage hikes have caused.

This post first appeared at Cato.org, while Cafe Hayek was down for repairs. 

Real Heroes: Ludwig Erhard — The Man Who Made Lemonade from Lemons by Lawrence W. Reed

How rare and refreshing it is for the powerful to understand the limitations of power, to actually repudiate its use and, in effect, give it back to the myriad individuals who make up society. George Washington was such a person. Cicero was another. So was Ludwig Erhard, who did more than any other man or woman to denazify the German economy after World War II. By doing so, he gave birth to a miraculous economic recovery.

“In my eyes,” Erhard confided in January 1962, “power is always dull, it is dangerous, it is brutal and ultimately even dumb.”

By every measure, Germany was a disaster in 1945 — defeated, devastated, divided, and demoralized — and not only because of the war. The Nazis, of course, were socialist (the name derives from National Socialist German Workers Party), so for more than a decade, the economy had been “planned” from the top. It was tormented with price controls, rationing, bureaucracy, inflation, cronyism, cartels, misdirection of resources, and government command of important industries. Producers made what the planners ordered them to. Service to the state was the highest value.

Thirty years earlier, a teenage Ludwig Erhard heard his father argue for classical-liberal values in discussions with fellow businessmen. A Bavarian clothing and dry goods entrepreneur, the elder Wilhelm actively opposed the kaiser’s increasing cartelization of the German economy. Erhard biographer Alfred C. Mierzejewski writes of Ludwig’s father,

While by no means wealthy, he became a member of the solid middle class that made its living through hard work and satisfying the burgeoning consumer demand of the period, rather than by lobbying for government subsidies or protection as many Junkers did to preserve their farms and many industrialists did to fend off foreign competition.

Young Ludwig resented the burdens that government imposed on honest and independent businessmen like his father. He developed a lifelong passion for free market competition because he understood what F.A. Hayek would express so well in the 1940s: “The more the state plans, the more difficult planning becomes for the individual.”

Severely wounded by an Allied artillery shell in Belgium in 1918, Ludwig came away from the bloody and futile First World War with his liberal values strengthened. After the tumultuous hyperinflation that gripped Germany in the years after the war, he earned a PhD in economics, took charge of the family business, and eventually headed a marketing research institute, which gave him opportunities to write and speak about economic issues.

Hitler’s rise to power in the 1930s deeply disturbed Erhard. He refused to have anything to do with Nazism or the Nazi Party, even quietly supporting resistance to the regime as the years wore on. The Nazis saw to it that he lost his job in 1942, when he wrote a paper outlining his ideas for a free, postwar economy. He spent the next few years as a business consultant.

In 1947, Erhard was named chairman of an important monetary commission. It proved to be a vital stepping stone to the position of director of economics for the Bizonal Economic Council, a creation of the American and British occupying authorities. It was there that he could finally put his views into policy and transform his country in the process.

Erhard’s beliefs had by this time solidified into unalterable convictions. Currency must be sound and stable. Collectivism was deadly nonsense that choked the creative individual. Central planning was a ruse and a delusion. State enterprises could never be an acceptable substitute for the dynamism of competitive, entrepreneurial markets. Envy and wealth redistribution were evils.

“It is much easier to give everyone a bigger piece from an ever growing cake,” he said, “than to gain more from a struggle over the division of a small cake, because in such a process every advantage for one is a disadvantage for another.”

Erhard advocated a fair field and no favors. His prescription for recovery? The state would set the rules of the game and otherwise leave people alone to wrench the German economy out of its doldrums. The late economist William H. Peterson reveals what happened next:

In 1948, on a June Sunday, without the knowledge or approval of the Allied military occupation authorities (who were of course away from their offices), West German Economics Minister Ludwig Erhard unilaterally and bravely issued a decree wiping out rationing and wage-price controls and introducing a new hard currency, the Deutsche-mark. The decree was effective immediately. Said Erhard to the stunned German people: “Now your only ration coupon is the mark.”

The American, British, and French authorities, who had appointed Erhard to his post, were aghast. Some charged that he had exceeded his defined powers, that he should be removed. But the deed was done. Said U.S. Commanding General Lucius Clay: “Herr Erhard, my advisers tell me you’re making a terrible mistake.” “Don’t listen to them, General,” Erhard replied, “my advisers tell me the same thing.”

General Clay protested that Erhard had “altered” the Allied price-control program, but Erhard insisted he hadn’t altered price controls at all. He had simply “abolished” them. In the weeks and months to follow, he issued a blizzard of deregulatory orders. He slashed tariffs. He raised consumption taxes, but more than offset them with a 15 percent cut in income taxes. By removing disincentives to save, he prompted one of the highest saving rates of any Western industrialized country. West Germany was awash in capital and growth, while communist East Germany languished. Economist David Henderson writes that Erhard’s motto could have been: “Don’t just sit there; undo something.”

The results were stunning. As Robert A. Peterson writes,

Almost immediately, the German economy sprang to life. The unemployed went back to work, food reappeared on store shelves, and the legendary productivity of the German people was unleashed. Within two years, industrial output tripled. By the early 1960s, Germany was the third greatest economic power in the world. And all of this occurred while West Germany was assimilating hundreds of thousands of East German refugees.

It was a pace of growth that dwarfed that of European countries that received far more Marshall Plan aid than Germany ever did.

The term “German economic miracle” was widely used and understood as it happened in the 1950s before the eyes of the world, but Erhard himself never thought of it as such. In his 1958 book, Prosperity through Competition, he opined, “What has taken place in Germany … is anything but a miracle. It is the result of the honest efforts of a whole people who, in keeping with the principles of liberty, were given the opportunity of using personal initiative and human energy.”

The temptations of the welfare state in the 1960s derailed some of Erhard’s reforms. His three years as chancellor (1963–66) were less successful than his tenure as an economics minister. But his legacy was forged in that decade and a half after the war’s end. He forever answered the question, “What do you do with an economy in ruins?” with the simple, proven and definitive recipe: “Free it.”

For additional information, see:

David R. Henderson on the “German Economic Miracle”
Alfred C. Mierzejewski’s Ludwig Erhard: A Biography
Robert A. Peterson on “Origins of the German Economic Miracle”
Richard Ebeling on “The German Economic Miracle and the Social Market Economy”
William H. Peterson on “Will More Dollars Save the World?”

Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

EDITOR’S NOTE: Each week, Mr. Reed will relate the stories of people whose choices and actions make them heroes. See the table of contents for previous installments.