
No, Slavery Did Not Make America Rich

The historical record of the post-war economy demonstrates that slavery was neither a central driving force of, nor economically necessary for, American economic dominance.


In 1847, Karl Marx wrote that

Without slavery you have no cotton; without cotton you have no modern industry… Cause slavery to disappear and you will have wiped America off the map of nations.

As with most of his postulations concerning economics, Marx was proven wrong.

Following the Civil War and the abolition of slavery in 1865, historical data show there was a recession, but after that, post-war economic growth rates rivaled or surpassed the pre-war growth rates, and America continued on its path to becoming the number one political and economic superpower, ultimately superseding Great Britain (see Appendix Figure 1).

The historical record of the post-war economy, one would think, plainly demonstrates that slavery was neither a central driving force of, nor economically necessary for, American economic dominance, as Marx thought it was. And yet, somehow, even with the benefit of hindsight, many academics and media pundits are still echoing Marx today.

For instance, in his essay published by The New York Times’ 1619 Project, Princeton sociologist Matthew Desmond claims the institution of slavery “helped turn a poor, fledgling nation into a financial colossus.”

“The industrial revolution was based on cotton, produced primarily in the slave labor camps of the United States,” Noam Chomsky similarly stated in an interview with the Times. Both claims give the impression that slavery was essential for industrialization and/or American economic hegemony, which is untrue.

The Industrial Revolution paved the way for modern economic development and is widely regarded to have occurred between 1760 and 1830, starting in Great Britain and subsequently spreading to Europe and the US.

As depicted in Figure 1., raw cotton produced by African-American slaves did not become a significant import in the British economy until 1800, decades after the Industrial Revolution had already begun.

Although the British later imported large quantities of American cotton, economic historians Alan L. Olmstead and Paul W. Rhode note that “the American South was a late-comer to world cotton markets,” and “US cotton played no role in kick-starting the Industrial Revolution.”

Nor was the revolution sparked by Britain’s involvement with slavery more broadly, as David Eltis and Stanley L. Engerman assessed that the contribution of British 18th-century slave systems to industrial growth was “not particularly large.”

There is also the theory that the cotton industry, dependent on slavery, triggered industrialization in the northern United States by facilitating the growth of textile industries. But as demonstrated by Kenneth L. Sokoloff, the Northern manufacturing sector was incredibly dynamic, and productivity growth was broad-based and in no way exclusive to cotton textiles.

Eric Holt has further elaborated, pointing out that

the vast literature on the industrial revolution that economic historians have produced shows that it originated in the creation and adoption of a wide range of technologies, such as the steam engine and coke blast furnace, which were not directly connected to textile trading networks.

The bodies of the enslaved served as America’s largest financial asset, and they were forced to maintain America’s most exported commodity… the profits from cotton propelled the US into a position as one of the leading economies in the world and made the South its most prosperous region.

This is the argument made by P.R. Lockhart of Vox.

While slavery was an important part of the antebellum economy, claims about its central role in the Industrial Revolution and in America’s rise to power via export-led growth are exaggerated.

Olmstead and Rhode have observed that although cotton exports comprised a tremendous share of total exports prior to the Civil War, they accounted for only around 5 percent of the nation’s overall gross domestic product, an important contribution but not the backbone of American economic development (see Appendix Figure 2).

One can certainly argue that slavery made the slaveholders and those connected to the cotton trade extremely wealthy in the short run, but the long-run impact of slavery on overall American economic development, particularly in the South, is undeniably and unequivocally negative.

As David Meyer of Brown University explains, in the pre-war South, “investments were heavily concentrated in slaves,” resulting in the failure “to build a deep and broad industrial infrastructure,” such as railroads, public education, and a centralized financial system.

Economic historians have repeatedly emphasized that slavery delayed Southern industrialization, giving the North a tremendous advantage in the Civil War.

Harvard economist Nathan Nunn has shown that across the Americas, the more dependent on slavery a nation was in 1750, the poorer it was in 2000 (see Appendix Figure 3). He found the same relationship in the US: in 2000, states with more slaves in 1860 were poorer than states with fewer slaves, and much poorer than the free Northern states (see Appendix Figure 4).

According to Nunn,

looking either across countries within the Americas, or across states and counties within the U.S., one finds a strong significant negative relationship between past slave use and current income.

Slavery was an important part of the American economy for some time, but the reality is that it was completely unnecessary, that it stunted economic development, and that it left Americans poorer more than 150 years later.

The historical and empirical evidence is in accordance with the conclusion of Olmstead and Rhode—that slavery was

a national tragedy that…inhibited economic growth over the long run and created social and racial divisions that still haunt the nation.

Figure 1. US share of British Cotton Imports over time

Figure 2. Cotton Exports and Gross Domestic Product

Figure 3. Partial correlation plot between the slave population as a share of the total population in 1750 and national income per capita in 2000 of countries of the Americas

Figure 4. Bivariate plot showing the relationship between the slave population as a share of the total population in 1860 and state incomes per capita in 2000

AUTHOR

Corey Iacono

Corey Iacono is a Master of Business graduate student at the University of Rhode Island with a bachelor’s degree in Pharmaceutical Science and a minor in Economics.

EDITOR’S NOTE: This FEE column is republished with permission. © All rights reserved.

Capitalism Is Good for the Poor by Steven Horwitz

Critics frequently accuse markets and capitalism of making life worse for the poor. This refrain is certainly common in the halls of left-leaning academia as well as in broader intellectual circles. But like so many other criticisms of capitalism, this one ignores the very real, and very available, facts of history.

Nothing has done more to lift humanity out of poverty than the market economy. This claim is true whether we are looking at a time span of decades or of centuries. The number of people worldwide living on less than about two dollars per day today is less than half of what it was in 1990. The biggest gains in the fight against poverty have occurred in countries that have opened up their markets, such as China and India.

If we look over the longer historical period, we can see that the trends today are just the continuation of capitalism’s victories in beating back poverty. For most of human history, we lived in a world of a few haves and lots of have-nots. That slowly began to change with the advent of capitalism and the Industrial Revolution. As economic growth took off and spread throughout the population, it created our own world in the West in which there are a whole bunch of haves and a few have-more-and-betters.

For example, the percentage of American households below the poverty line who have basic appliances has grown steadily over the last few decades, with poor families in 2005 being more likely to own things like a clothes dryer, dishwasher, refrigerator, or air conditioner than the average household was in 1971. And consumer items that didn’t even exist back then, such as cell phones, were owned by half of poor households in 2005 and are owned by a substantial majority of them today.

Capitalism has also made poor people’s lives far better by reducing infant and child mortality rates, not to mention maternal death rates during childbirth, and by extending life expectancies by decades.

Consider, too, the way capitalism’s engine of growth has enabled the planet to sustain almost 7 billion people, compared to 1 billion in 1800. As Deirdre McCloskey has noted, if you multiply the gains in consumption to the average human by the gain in life expectancy worldwide by 7 (for 7 billion as compared to 1 billion people), humanity as a whole is better off by a factor of around 120. That’s not 120 percent better off, but 120 times better off since 1800.
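
For readers who want to see how that multiplication works, here is a minimal back-of-the-envelope sketch in Python. Only the sevenfold population ratio comes from the paragraph above; the consumption and life-expectancy gains are illustrative placeholders, not McCloskey's actual figures:

    # Back-of-the-envelope sketch of the "better off by a factor of about 120" arithmetic.
    # The first two inputs are illustrative placeholders, not McCloskey's published figures.
    consumption_gain = 8.5      # assumed: growth factor in real consumption per person since 1800
    life_expectancy_gain = 2.0  # assumed: life expectancy roughly doubled
    population_ratio = 7.0      # from the text: ~7 billion people today vs ~1 billion in 1800

    total_gain = consumption_gain * life_expectancy_gain * population_ratio
    print(f"Humanity as a whole is better off by a factor of about {total_gain:.0f}")  # ~119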

The competitive market process has also made education, art, and culture available to more and more people. Even the poorest of Americans, not to mention many of the global poor, have access through the Internet and TV to concerts, books, and works of art that were exclusively the province of the wealthy for centuries.

And in the wealthiest countries, the dynamics of capitalism have begun to change the very nature of work. Where once humans toiled for 14 hours per day at backbreaking outdoor labor, now an increasing number of us work inside in climate-controlled comfort. Our workday and workweek have shrunk thanks to the much higher value of labor that comes from working with productive capital. We spend a much smaller percentage of our lives working for pay, whether we’re rich or poor. And even with economic change, the incomes of the poor are much less variable, as they are not linked to the unpredictable changes in weather that are part and parcel of a predominantly agricultural economy long since disappeared.

Think of it this way: the fabulously wealthy kings of old had servants attending to their every need, but an impacted tooth would likely kill them. The poor in largely capitalist countries have access to a quality of medical care and a variety and quality of food that the ancient kings could only dream of.

Consider, too, that the working poor of London 100 years ago were, at best, able to split a pound of meat per week among all of their children, who were greater in number than the two or three of today. In addition, the whole family ate meat once a week on Sunday, the one day the man of the household was home for dinner. That was meat for a week.

Compare that to today, when we worry that poor Americans are too easily able to afford a meal with a quarter pound of meat in it every single day for less than an hour’s labor. Even if you think that capitalism has made poor people overweight, that’s a major accomplishment compared to the precapitalist norm of constant malnutrition and the struggle even 100 years ago for the working poor to get enough calories.

The reality is that the rich have always lived well historically, as for centuries they could commandeer human labor to attend to their every need. In a precapitalist world, the poor had no hope of upward mobility or of relief from the endless physical drudgery that barely kept them alive.

Today, the poor in capitalist countries live like kings, thanks mostly to the freeing of labor and the ability to accumulate capital that makes that labor more productive and enriches even the poorest. The falling cost of what were once luxuries and are now necessities, driven by the competitive market and its profit and loss signals, has brought labor-saving machines to the masses. When profit-seeking and innovation became acceptable behavior for the bourgeoisie, the horn of plenty brought forth its bounty, and even the poorest shared in that wealth.

Once people no longer needed permission to innovate, and once the value of new inventions was judged by the improvements they made to the lives of the masses in the form of profit and loss, the poor began to live lives of comfort and dignity.

These changes are not, as some would say, about technology. After all, the Soviets had great scientists but could not channel that knowledge into material comfort for their poor. And it’s not about natural resources, which is obvious today as resource-poor Hong Kong is among the richest countries in the world thanks to capitalism, while Venezuelan socialism has destroyed that resource-rich country.

Inventions only become innovations when the right institutions exist to make them improve the lives of the masses. That is what capitalism did and continues to do every single day. And that’s why capitalism has been so good for the poor.

Consider, finally, what happened when the Soviets decided to show the film version of The Grapes of Wrath as anticapitalist propaganda. In the novel and film, a poor American family is driven from their Depression-era home by the Dust Bowl. They get in their old car and make a horrifying journey in search of a better life in California. The Soviets had to stop showing the film after a short period because the Russian audiences were astonished that poor Americans were able to own a car.

Even anticapitalist propaganda can’t help but provide evidence that contradicts its own argument. The historical truth is clear: nothing has done more for the poor than capitalism.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Hayek’s Modern Family: Classical Liberalism and the Evolution of Social Institutions.

He is a member of the FEE Faculty Network.

Ideas, Not ‘Capital,’ Enriched the World by Deirdre N. McCloskey

Why are we so rich? Who are “we”? Have our riches corrupted us?

“The Bourgeois Era,” a series of three l-o-n-g books just completed — thank God — answers:

  • first, in The Bourgeois Virtues: Ethics for an Age of Commerce (2006), that the commercial bourgeoisie — the middle class of traders, dealers, inventors, and managers — is on the whole, contrary to the conviction of the “clerisy” of artists and intellectuals after 1848, pretty good. Not bad.
  • second, in Bourgeois Dignity: Why Economics Can’t Explain the Modern World (2010), that the modern world was made not by the usual material causes, such as coal or thrift or capital or exports or imperialism or good property rights or even good science, all of which have been widespread in other cultures and at other times. It was caused by very many technical and some few institutional ideas among a uniquely revalued bourgeoisie — on a large scale at first peculiar to northwestern Europe, and indeed peculiar from the sixteenth century to the Low Countries;
  • and third, in Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World (2016), that a novel way of looking at the virtues and at bettering ideas arose in northwestern Europe from a novel liberty and dignity enjoyed by all commoners, among them the bourgeoisie, and from a startling revaluation starting in Holland by the society as a whole of the trading and betterment in which the bourgeoisie specialized. The revaluation, called “liberalism,” in turn derived not from some ancient superiority of the Europeans but from egalitarian accidents in their politics from 1517–1789. That is, what mattered were two levels of ideas — the ideas in the heads of entrepreneurs for the betterments themselves (the electric motor, the airplane, the stock market); and the ideas in the society at large about the businesspeople and their betterments (in a word, that liberalism). What were not causal were the conventional factors of accumulated capital and institutional change. They happened, but they were largely dependent on betterment and liberalism.

The upshot since 1800 has been a gigantic improvement for the poor, yielding equality of real comfort in health and housing, such as for many of your ancestors and mine, and a promise now being fulfilled of the same result worldwide — a Great Enrichment for even the poorest among us.

These are controversial claims. They are, you see, optimistic. Many on the left, such as my friend the economist and former finance minister of Greece, Yanis Varoufakis, or the French economist Thomas Piketty, and some on the right, such as my friend the American economist Tyler Cowen, believe we are doomed.

Yanis thinks that wealth is caused by imperial sums of capital sloshing around the world economy, and thinks in a Marxist and Keynesian way that the economy is like a balloon, puffed up by consumption, and about to leak. I think that the economy is like a machine making sausage, and if Greece or Europe want to get more wealth they need to make the machine work better — honoring enterprise, for example, and letting people work when they want to.

Piketty thinks that the rich get richer, always, and that the rest of us stagnate. I think it’s not true, even in his own statistics, and certainly not in the long run, and that what has mainly happened in the past two centuries is that the sausage machine has got tremendously more productive, benefiting mainly the poor.

Tyler thinks that improvements in the sausage machine are over. I think that if Tyler were so smart (and he is very smart), he would be rich, and anyway there is little evidence of technological stagnation, and anyway for at least the next century, the poor of the non-Western world will be catching up, enriching us all with their own betterments of the sausage machine.

In other words, I do not think we are doomed. I see over the next century a world enrichment both materially and spiritually that will give the wretched of the earth the lives of a present-day, bourgeois Dutch person.

For reasons I do not entirely understand, the clerisy after 1848 turned toward nationalism and socialism, and against liberalism. It came also to delight in an ever expanding list of pessimisms about the way we live now in our approximately liberal societies, from the lack of temperance among the poor to an excess of carbon dioxide in the atmosphere. Anti-liberal utopias believed to offset the pessimisms have been popular among the clerisy. Its pessimistic and utopian books have sold millions.

But the twentieth-century experiments of nationalism and socialism, of syndicalism in factories and central planning for investment, of proliferating regulation for imagined but not factually documented imperfections in the market, did not work. And most of the pessimisms about how we live now have proven to be mistaken.

It is a puzzle. Perhaps you yourself still believe in nationalism or socialism or proliferating regulation. And perhaps you are in the grip of pessimism about growth or consumerism or the environment or inequality. Please, for the good of the wretched of the earth, reconsider. The trilogy chronicles, explains, and defends what made us rich — the system we have had since 1800 or 1848, usually but misleadingly called modern “capitalism.”

The system should rather be called “technological and institutional betterment at a frenetic pace, tested by unforced exchange among all the parties involved.” Or “fantastically successful liberalism, in the old European sense, applied to trade and politics, as it was applied also to science and music and painting and literature.” The simplest version is “trade-tested progress.”

Many humans, in short, are now stunningly better off than their ancestors were in 1800. And the rest of humanity shows every sign of joining the enrichment. A crucial point is that the greatly enriched world cannot be explained in any deep way by the accumulation of capital, as economists from Adam Smith through Karl Marx to Varoufakis, Piketty, and Cowen have on the contrary believed, and as the very word “capitalism” seems to imply.

The word embodies a scientific mistake. Our riches did not come from piling brick on brick, or piling university degree on university degree, or bank balance on bank balance, but from piling idea on idea. The bricks, degrees, and bank balances — the capital accumulations — were of course necessary. But so were a labor force and liquid water and the arrow of time.

Oxygen is necessary for a fire. But it would be at least unhelpful to explain the Chicago Fire of October 8-10, 1871, by the presence of oxygen in the earth’s atmosphere. Better: a long dry spell, the city’s wooden buildings, a strong wind from the southwest, and, if you disdain Irish immigrants, Mrs. O’Leary’s cow.

The modern world cannot be explained, I show in the second volume, Bourgeois Dignity, by routine brick-piling, such as the Indian Ocean trade, English banking, canals, the British savings rate, the Atlantic slave trade, natural resources, the enclosure movement, the exploitation of workers in satanic mills, or the accumulation in European cities of capital, whether physical or human. Such routines are too common in world history and too feeble in quantitative oomph to explain the thirty- or hundredfold enrichment per person unique to the past two centuries.

Hear again that last, crucial, astonishing fact, discovered by economic historians over the past few decades. It is: in the two centuries after 1800 the trade-tested goods and services available to the average person in Sweden or Taiwan rose by a factor of 30 or 100. Not 100 percent, understand — a mere doubling — but in its highest estimate a factor of 100, nearly 10,000 percent, and at least a factor of 30, or 2,900 percent.
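
To keep the factors and the percentages straight, here is a quick sketch of the conversion used in that paragraph (a growth factor of k is an increase of (k - 1) * 100 percent):

    # A growth factor of k corresponds to an increase of (k - 1) * 100 percent.
    for factor in (2, 30, 100):
        print(f"factor of {factor:>3} = {(factor - 1) * 100:,} percent increase")
    # factor of   2 =   100 percent increase (the "mere doubling")
    # factor of  30 = 2,900 percent increase
    # factor of 100 = 9,900 percent increase (i.e., "nearly 10,000 percent")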

The Great Enrichment of the past two centuries has dwarfed any of the previous and temporary enrichments. Explaining it is the central scientific task of economics and economic history, and it matters for any other sort of social science or recent history. What explains it? The causes were not (to pick from the apparently inexhaustible list of materialist factors promoted by this or that economist or economic historian) coal, thrift, transport, high male wages, low female and child wages, surplus value, human capital, geography, railways, institutions, infrastructure, nationalism, the quickening of commerce, the late medieval run-up, Renaissance individualism, the First Divergence, the Black Death, American silver, the original accumulation of capital, piracy, empire, eugenic improvement, the mathematization of celestial mechanics, technical education, or a perfection of property rights.

Such conditions had been routine in a dozen of the leading organized societies of Eurasia, from ancient Egypt and China down to Tokugawa Japan and the Ottoman Empire, and not unknown in Meso-America and the Andes. Routines cannot account for the strangest secular event in human history, which began with bourgeois dignity in Holland after 1600, gathered up its tools for betterment in England after 1700, and burst on northwestern Europe and then the world after 1800.

The modern world was made by a slow-motion revolution in ethical convictions about virtues and vices, in particular by a much higher level than in earlier times of toleration for trade-tested progress — letting people make mutually advantageous deals, and even admiring them for doing so, and especially admiring them when Steve Jobs-like they imagine betterments.

Note: the crux was not psychology — Max Weber had claimed in 1905 that it was — but sociology. Toleration for free trade and honored betterment was advocated first by the bourgeoisie itself, then more consequentially by the clerisy, which for a century before 1848 admired economic liberty and bourgeois dignity, and in aid of the project was willing to pledge its life, fortune, and sacred honor.

After 1848, in places like the United States and Holland and Japan, the bulk of ordinary people came slowly to agree. By then, however, much of the avant-garde of the clerisy worldwide had turned decisively against the bourgeoisie, on the road to twentieth-century fascism and communism.

Yet in the luckier countries, such as Norway or Australia, the bourgeoisie was for the first time judged by many people to be acceptably honest, and was in fact acceptably honest, under new social and familial pressures. By 1900, and more so by 2000, the Bourgeois Revaluation had made most people in quite a few places, from Syracuse to Singapore, very rich and pretty good.

I have to admit that “my” explanation is embarrassingly, pathetically unoriginal. It is merely the economic and historical realization in actual economies and actual economic histories of eighteenth-century liberal thought. But that, after all, is just what the clerisy after 1848 so sadly mislaid, and what the subsequent history proved to be profoundly correct. Liberty and dignity for ordinary people made us rich, in every meaning of the word.

The change, the Bourgeois Revaluation, was the coming of a business-respecting civilization, an acceptance of the Bourgeois Deal: “Let me make money in the first act, and by the third act I will make you all rich.”

Much of the elite, and then also much of the non-elite of northwestern Europe and its offshoots, came to accept or even admire the values of trade and betterment. Or at the least the polity did not attempt to block such values, as it had done energetically in earlier times. Especially it did not do so in the new United States. Then likewise, the elites and then the common people in more of the world followed, including now, startlingly, China and India. They undertook to respect—or at least not to utterly despise and overtax and stupidly regulate—the bourgeoisie.

Why, then, the Bourgeois Revaluation that afterward made for trade-tested betterment, the Great Enrichment? The answer is the surprising, black-swan luck of northwestern Europe’s reaction to the turmoil of the early modern period — the coincidence in northwestern Europe of successful Reading, Reformation, Revolt, and Revolution: “the Four Rs,” if you please. The dice were rolled by Gutenberg, Luther, Willem van Oranje, and Oliver Cromwell. By a lucky chance for England their payoffs were deposited in that formerly inconsequential nation in a pile late in the seventeenth century.

None of the Four Rs had deep English or European causes. All could have rolled the other way. They were bizarre and unpredictable. In 1400 or even in 1600 a canny observer would have bet on an industrial revolution and a great enrichment — if she could have imagined such freakish events — in technologically advanced China, or in the vigorous Ottoman Empire. Not in backward, quarrelsome Europe.

A result of Reading, Reformation, Revolt, and Revolution was a fifth R, a crucial Revaluation of the bourgeoisie, first in Holland and then in Britain. The Revaluation was part of an R-caused, egalitarian reappraisal of ordinary people. I retail here the evidence that hierarchy — as, for instance, in St. Paul’s and Martin Luther’s conviction that the political authorities that exist have been instituted by God — began slowly and partially to break down.

The cause of the bourgeois betterments, that is, was an economic liberation and a sociological dignifying of, say, a barber and wig-maker of Bolton, son of a tailor, messing about with spinning machines, who died in 1792 as Sir Richard Arkwright, possessed of one of the largest bourgeois fortunes in England. The Industrial Revolution and especially the Great Enrichment came from liberating commoners from compelled service to a hereditary elite, such as the noble lord in the castle, or compelled obedience to a state functionary, such as the economic planner in the city hall. And it came from according honor to the formerly despised of Bolton — or of Ōsaka, or of Lake Wobegon — commoners exercising their liberty to relocate a factory or invent airbrakes.

Not everyone accepted the Bourgeois Deal, even in the United States. There’s the worry: it’s not complete, and can be undermined by hostile attitudes and clumsy regulations. In Chicago you need a $300 business license to start a little repair service for sewing machines, but you can’t do it in your home because of zoning, arranged politically by big retailers. Likewise in Rotterdam, worse.

Antibourgeois attitudes survive even in bourgeois cities like London and New York and Milan, expressed around neo-aristocratic dinner tables and in neo-priestly editorial meetings. A journalist in Sweden noted recently that when the Swedish government recommended two centimeters of toothpaste on one’s brush no journalist complained:

[The] journalists . . . take great professional pride in treating with the utmost skepticism a press release or some new report from any commercial entity. And rightly so. But the big mystery is why similar output is treated differently just because it is from a government organization. It’s not hard to imagine the media’s response if Colgate put out a press release telling the general public to use at least two centimeters of toothpaste twice every day.

The bourgeoisie is far from ethically blameless. The newly tolerated bourgeoisie has regularly tried to set itself up as a new aristocracy to be protected by the state, as Adam Smith and Karl Marx predicted it would. And anyway even in the embourgeoisfying lands on the shores of the North Sea, the old hierarchy based on birth or clerical rank did not simply disappear on January 1, 1700.

Tales of pre- or antibourgeois life strangely dominated the high and low art of the Bourgeois Era. Flaubert’s and Hemingway’s novels, D’Annunzio’s and Eliot’s poetry, Eisenstein’s and Pasolini’s films, not to speak of a rich undergrowth of cowboy movies and spy novels, all celebrate peasant/proletariat or aristocratic values.

A hard coming we bourgeois have had of it. A unique liberalism was what freed the betterment of equals, starting in Holland in 1585, and in England and New England a century later. Betterment came largely out of a change in the ethical rhetoric of the economy, especially about the bourgeoisie and its projects.

You can see that “bourgeois” does not have to mean what conservatives and progressives mean by it, namely, “having a thoroughly corrupted human spirit.” The typical bourgeois was viewed by the Romantic, Scottish conservative Thomas Carlyle in 1843 as an atheist with “a deadened soul, seared with the brute Idolatry of Sense, to whom going to Hell is equivalent to not making money.”

Or from the other side, in 1996 Charles Sellers, the influential leftist historian of the United States, viewed the new respect for the bourgeoisie in America as a plague that would, between 1815 and 1846, “wrench a commodified humanity to relentless competitive effort and poison the more affective and altruistic relations of social reproduction that outweigh material accumulation for most human beings.”

Contrary to Carlyle and Sellers, however, bourgeois life is in fact mainly cooperative and altruistic, and when competitive it is good for the poorest among us. We should have more of it. The Bourgeois Deal does not imply, however, that one needs to be fond of the vice of greed, or needs to think that greed suffices for an economic ethic. Such a Machiavellian theory, “greed is good,” has undermined ethical thinking about the Bourgeois Era. It has especially done so during the past three decades in smart-aleck hangouts such as Wall Street or the Department of Economics.

Prudence is a great virtue among the seven principal virtues. But greed is the sin of prudence only — namely, the admitted virtue of prudence when it is not balanced by the other six, becoming therefore a vice. That is the central point of Deirdre McCloskey, The Bourgeois Virtues, of 2006, or for that matter of Adam Smith, The Theory of Moral Sentiments, of 1759 (so original and up-to-date is McCloskey).

Nor has the Bourgeois Era led in fact to a poisoning of the virtues. In a collection of mini-essays asking “Does the Free Market Corrode Moral Character?” the political theorist Michael Walzer replied “Of course it does.” But then he wisely adds that any social system corrodes one or another virtue. That the Bourgeois Era surely has tempted people into thinking that greed is good, wrote Walzer, “isn’t itself an argument against the free market. Think about the ways democratic politics also corrodes moral character. Competition for political power puts people under great pressure . . . to shout lies at public meeting, to make promises they can’t keep.”

Or think about the ways even a mild socialism puts people under great pressure to commit the sins of envy or state-enforced greed or violence or environmental imprudence. Or think about the ways the alleged affective and altruistic relations of social reproduction in America before the alleged commercial revolution put people under great pressure to obey their husbands in all things and to hang troublesome Quakers and Anabaptists.

That is to say, any social system, if it is not to dissolve into a war of all against all, needs ethics internalized by its participants. It must have some device — preaching, movies, the press, child raising, the state — to slow down the corrosion of moral character, at any rate by the standard the society sets. The Bourgeois Era has set a higher social standard than others, abolishing slavery and giving votes to women and the poor.

For further progress Walzer the communitarian puts his trust in an old conservative argument, an ethical education arising from good-intentioned laws. One might doubt that a state strong enough to enforce such laws would remain uncorrupted for long, at any rate outside of northern Europe. In any case, contrary to a common opinion since 1848 the arrival of a bourgeois, business-respecting civilization did not corrupt the human spirit, despite temptations. Mostly in fact it elevated the human spirit.

Walzer is right to complain that “the arrogance of the economic elite these last few decades has been astonishing.” So it has. But the arrogance comes from the smart-aleck theory that greed is good, not from the moralized economy of trade and betterment that Smith and Mill and later economists saw around them, and which continues even now to spread.

The Bourgeois Era did not thrust aside, as Sellers the historian elsewhere claims in rhapsodizing about the world we have lost, lives “of enduring human values of family, trust, cooperation, love, and equality.” Good lives such as these can be and actually are lived on a gigantic scale in the modern, bourgeois town. In Alan Paton’s Cry, the Beloved Country, John Kumalo, from a village in Natal, and now a big man in Johannesburg, says, “I do not say we are free here.” A black man under apartheid in South Africa in 1948 could hardly say so. “But at least I am free of the chief. At least I am free of an old and ignorant man.”

The Revaluation, in short, came out of a rhetoric — what the Dutch economist Arjo Klamer calls the “conversation” — that would, and will, enrich the world. We are not doomed. If we have a sensible and fact-based conversation about economics and economic history and politics we will do pretty well, for Rio and Rotterdam and the rest.

Deirdre N. McCloskey

Deirdre Nansen McCloskey taught at the University of Illinois at Chicago from 2000 to 2015 in economics, history, English, and communication. A well-known economist and historian and rhetorician, she has written 17 books and around 400 scholarly pieces on topics ranging from technical economics and statistical theory to transgender advocacy and the ethics of the bourgeois virtues. Her latest book, out in January 2016 from the University of Chicago Press—Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World—argues for an “ideational” explanation for the Great Enrichment 1800 to the present.

A Scientific Consensus on What Now? by Robert P. Murphy

Authority versus Science in the Climate Change Debate.

When it comes to the climate change debate, many of the loudest voices are confidently making assertions that are not backed up by the actual evidence — and in this respect, they are behaving very unscientifically.

One obvious sign that many people in the climate change debate are appealing to emotions rather than facts is their reliance on pejorative terminology. For example, rather than make an informative statement that they support subsidies for wind and solar, and taxes on coal and oil, they may instead say they support “clean energy” while their opponents favor “dirty energy.”

The coup de grâce, of course, occurs when partisans in the debate refer to their opponents as “climate deniers.” This is a nonsensical slur that would have impressed Orwell. Obviously, nobody denies climate. Furthermore, nobody denies that the climate is changing. And, when it comes to the serious debate among published climate scientists, people on both sides agree that human activities are contributing to warmer temperatures; the dispute is simply over how much. (Those who think the change is mild have embraced the label “lukewarmers.”)

To label critics of a carbon tax or EPA regulations on power plants as “climate deniers” is utterly destructive of rational inquiry and tries to link legitimate skepticism to Holocaust denial. Those who use this term without irony demonstrate that they have no interest in scientific discovery.

Related to this lack of nuance, and the appeal to an exaggerated consensus, is the oft-repeated claim that “97 percent of climate scientists agree” on the state of human-generated climate change. Physicist-turned-economist David Friedman (among others) has investigated the methods used to generate such claims, and finds that they are seriously lacking.

Using the very data (on abstracts from published papers) that forms the basis of these headline announcements, Friedman reckons that more like 1.6 percent of the surveyed papers explicitly endorse humans as the main cause of global warming since the 1800s. Friedman further argues that this confusion — where the actual findings of the paper ended up being misinterpreted by the media — appears to have been deliberately produced by the survey’s authors.
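
A sketch of the arithmetic behind the two competing figures may help. The counts below are the commonly cited totals from the Cook et al. (2013) abstract-rating dataset on which the “97 percent” headline rests; they are not given in the article above, so treat them as approximate:

    # Rough arithmetic behind the "97 percent" headline versus Friedman's 1.6 percent.
    # Counts are the commonly cited Cook et al. (2013) totals; treat them as approximate,
    # since the article above does not reproduce them.
    endorsed_any_human_role = 3896   # abstracts endorsing some human contribution to warming
    rejected_or_uncertain = 118      # abstracts rejecting it or calling it uncertain
    explicit_main_cause = 64         # category 1: humans explicitly named as the MAIN cause

    taking_a_position = endorsed_any_human_role + rejected_or_uncertain  # 4,014 abstracts
    print(f"headline share: {endorsed_any_human_role / taking_a_position:.1%}")  # ~97.1%
    print(f"'main cause' share: {explicit_main_cause / taking_a_position:.1%}")  # ~1.6%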

“Hottest Year on Record” and “the Pause”

A January 2016 New York Times article epitomizes the advocacy disguised as reporting in the climate change debate. The very title lets you know that a serious case of scientism is coming, for it announces, “2015 Was Hottest Year in Historical Record, Scientists Say.”

Now, we must inquire, what is the purpose of adding “Scientists Say” at the end? Does any reader think that the Times would be quoting plumbers or accountants on whether 2015 was the hottest year on record? The obvious purpose is to contrast what scientists say about global warming with what those nonscientist deniers are saying. The article goes on to let us know exactly what “the scientists” think about global warming and manmade activities:

Scientists started predicting a global temperature record months ago, in part because an El Niño weather pattern, one of the largest in a century, is releasing an immense amount of heat from the Pacific Ocean into the atmosphere. But the bulk of the record-setting heat, they say, is a consequence of the long-term planetary warming caused by human emissions of greenhouse gases.

“The whole system is warming up, relentlessly,” said Gerald A. Meehl, a scientist at the National Center for Atmospheric Research in Boulder, Colo.

It will take a few more years to know for certain, but the back-to-back records of 2014 and 2015 may have put the world back onto a trajectory of rapid global warming, after a period of relatively slow warming dating to the last powerful El Niño, in 1998.

Politicians attempting to claim that greenhouse gases are not a problem seized on that slow period to argue that “global warming stopped in 1998,” with these claims and similar statements reappearing recently on the Republican presidential campaign trail.

Statistical analysis suggested all along that the claims were false, and that the slowdown was, at most, a minor blip in an inexorable trend, perhaps caused by a temporary increase in the absorption of heat by the Pacific Ocean.

This excerpt is quite fascinating. We have something reported as undeniable fact when it actually relies on assumptions of what might happen in the future (“may have put the world back onto a trajectory of rapid global warming”) and offers conjectures to explain why the measured warming suddenly slowed down (“perhaps caused by a temporary increase in the absorption of heat”).

The “statistical analysis” did not establish that the critics’ claims were false. It is undeniably true that the official NASA GISS records showed, for example, that the average annual global temperature in 2008 was lower than the annual temperature in 1998, and that’s why people at the time were saying, “There has been no global warming in the last ten years.”

Here is a NASA-affiliated scientist arguing that such claims are misleading, and perhaps they were, but it is similarly misleading to turn around and claim that the pause didn’t exist.

If you asked a bunch of Americans whether they gained weight over the last 10 years, their natural interpretation of that question would be, “Do I weigh more now than I weighed 10 years ago?” They wouldn’t think it involved construction of moving averages since birth. In that sense, the people referring to the pause were not acting dishonestly; they were pointing out to the public a fact about the temperature record that would definitely be news to them, in light of the rhetoric of runaway climate change.

However, the more substantive point here is that the popular climate models predicted much more warming than has in fact occurred. In other words, the question isn’t whether the 2000s were warmer than the 1990s. Rather, the issue is this: given how much concentrations of greenhouse gases have risen, is the actual temperature trend consistent with the predicted temperature trend?

To answer this, consider a December 2015 Cato Institute working paper from two climate scientists, Pat Michaels and Paul Knappenberger: “Climate Models and Climate Reality: A Closer Look at a Lukewarming World.” They avoid the accusation of cherry-picking by running through trend lengths of varying durations, and they compare 108 model runs with the various data sets on observed temperatures. They conclude, “During all periods from 10 years (2006–2015) to 65 (1951–2015) years in length, the observed temperature trend lies in the lower half of the collection of climate model simulations, and for several periods it lies very close (or even below) the 2.5th percentile of all the model runs.”

Thus we see that the critics arguing about the model projections aren’t simply picking the very warm 1998 as a starting point in order to game the results. The standard models produced warming projections well above what has happened in reality, and for some periods the observed warming was so low (relative to the prediction) that there is less than a 2.5 percent chance that this could be explained by natural volatility. This is the sense in which the current suite of climate models is on the verge of being “rejected” in the statistician’s sense.
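
For concreteness, here is a minimal sketch of the kind of comparison being described, with made-up numbers standing in for the 108 model-run trends and the observed trend; the question is simply where the observation falls within the spread of the model ensemble:

    import numpy as np

    # Made-up numbers standing in for 108 model-run warming trends and one observed trend.
    rng = np.random.default_rng(0)
    model_trends = rng.normal(loc=0.25, scale=0.08, size=108)  # assumed: degrees C per decade
    observed_trend = 0.11                                      # assumed: observed trend

    # Empirical percentile of the observation within the model ensemble.
    percentile = 100 * np.mean(model_trends < observed_trend)
    print(f"Observed trend sits near the {percentile:.1f}th percentile of the model runs")
    # A value at or below ~2.5 is what the paper means by lying "very close (or even
    # below) the 2.5th percentile of all the model runs."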

To be sure, I am not a climate scientist, and others would no doubt dispute the interpretation of the data that Michaels and Knappenberger give. My point is to show how utterly misleading the New York Times piece is when it leads readers to believe that “scientists” were never troubled by lackluster warming and that only politicians were trying to confuse the public on the matter.

Climate Economists Don’t Believe Their Models?

Finally, consider a December 2015 Vox piece with the title, “Economists Agree: Economic Models Underestimate Climate Change.” Furthermore, the URL for this piece contains the phrase “economists-climate-consensus.” We see the same appeal to authority here as in the natural sciences when it comes to climate policy.

The Vox article refers to a survey of 365 economists who had published in the field of climate economics. Here is the takeaway: “Like scientists, economists agree that climate change is a serious threat and that immediate action is needed to address it” (emphasis added).

Yet, in several respects, the survey reveals facts at odds with the alarmist rhetoric the public hears on the issue. For example, one question asked, “During what time period do you believe the net effects of climate change will first have a negative impact on the global economy?” With President Obama and other important officials discussing the ravages of climate change (allegedly) before our very eyes, one might have expected the vast majority of the survey respondents to say that climate change is having a negative impact right now.

In fact, only 41 percent said that. Twenty-two percent thought the negative impact would be felt by 2025, while an additional 26 percent would only say climate change would have net negative economic effects by 2050. Would anyone have expected that result when reading Vox’s summary that immediate action is needed to address climate change?

To be clear, the Vox statement is not a lie; it can be justified by the responses on two of the other questions. Yet the actual views of these economists are much more nuanced than the pithy summary statements suggest.

Authority versus Science

On this particular survey, I personally encountered the height of absurdity in the context of scientism and appeal to authority. For years, in my capacity as an economist for the Institute for Energy Research, I have pointed out that the published results in the United Nations’ official “consensus” documents do not justify even a standard goal of limiting global warming to 2 degrees Celsius, let alone the over-the-top rhetoric of people like Paul Krugman.

In order to push back against my claim, economist Noah Smith pointed to the survey discussed earlier, proudly declaring, “Apparently most climate economists don’t believe their own models.” Thus we have reached the point where partisans on one side of a policy debate rely on surveys of what “the experts say,” in order to knock down the other side who rely on the published results of those very experts.

This is the epitome of elevating appeals to scientific authority over the underlying science itself.

In the climate change debate, legitimate disputes are transformed into a battle between Noble Seekers of Truth versus Unscientific Liars Who Hate Humanity. Time and again, references to “the consensus” are greatly exaggerated, while people pointing out enormous problems with the case for policy action are dismissed as “deniers.”

Robert P. Murphy

Robert P. Murphy is research assistant professor with the Free Market Institute at Texas Tech University.


Why Is Economics “the Dismal Science”? The Reason May Surprise You! by David R. Henderson

In an otherwise excellent post responding to Noah Smith about economic growth, my Hoover colleague and friend John Cochrane makes a mistake in the history of economic thought.

John writes:

They do not call us the “dismal science” because we think the current world is close to the best of all possible ones, and all there is to do is haggle over technical amendments to rule 134.532 subparagraph a and hope to squeeze out 0.001% more growth.

Usually, the role of economists is to see the great possibilities that every day experience does not reveal. (“Dismal” only refers to the fact that good economics respects budget constraints.)

Actually, that’s not what dismal refers to. David M. Levy and Sandra J. Peart write:

Everyone knows that economics is the dismal science. And almost everyone knows that it was given this description by Thomas Carlyle, who was inspired to coin the phrase by T. R. Malthus’s gloomy prediction that population would always grow faster than food, dooming mankind to unending poverty and hardship.

While this story is well-known, it is also wrong, so wrong that it is hard to imagine a story that is farther from the truth. At the most trivial level, Carlyle’s target was not Malthus, but economists such as John Stuart Mill, who argued that it was institutions, not race, that explained why some nations were rich and others poor.

Carlyle attacked Mill, not for supporting Malthus’s predictions about the dire consequences of population growth, but for supporting the emancipation of slaves. It was this fact–that economics assumed that people were basically all the same, and thus all entitled to liberty–that led Carlyle to label economics “the dismal science.”

They go on to write:

Carlyle disagreed with the conclusion that slavery was wrong because he disagreed with the assumption that under the skin, people are all the same. He argued that blacks were subhumans (“two-legged cattle”), who needed the tutelage of whites wielding the “beneficent whip” if they were to contribute to the good of society.

In a speech at Susquehanna University earlier this year, I quoted this and pointed out that it was the classical economists, John Stuart Mill, et al, who believed that black lives matter.

This post first appeared at Econlog, the blog of the Library of Economics and Liberty. © Liberty Fund, Inc., reprinted with permission.


David Henderson

David Henderson is a research fellow with the Hoover Institution and an economics professor at the Graduate School of Business and Public Policy, Naval Postgraduate School, Monterey, California. He is editor of The Concise Encyclopedia of Economics (Liberty Fund) and blogs at econlib.org.

Could Hillary Really “Restore” the Middle Class? by Donald J. Boudreaux

Eduardo Porter opens his column today by asking “Could President Hillary Clinton restore the American middle class?” (“Sizing Up Hillary Clinton’s Plans to Help the Middle Class”).

Mr. Porter illegitimately presents as an established fact a proposition that is anything but. It’s true that between 1967 and 2009 the percent of American families with annual incomes between $25,000 and $75,000 (in 2009 dollars) fell from 62 to 39 – a fact that, standing alone, might be interpreted as evidence that the middle class is disappearing.

Yet this fact does not stand alone, for it’s also true that the percent of families with annual incomes lower than $25,000 fell (from 22 to 18), while the percent of families with annual incomes of $75,000 and higher rose significantly, from 16 to 43.*

So given these Census Bureau data – which are strong evidence that America’s middle class, if disappearing, is doing so by moving into the upper classes – to ask if President Hillary Clinton could restore the American middle class is to ask if she will make the bulk of today’s prosperous families poorer rather than richer.
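
As a quick check on the arithmetic, the three Census Bureau shares cited above should sum to roughly 100 percent in each year, and the decline in the lower two brackets should match the rise at the top:

    # Shares of families by real annual income (2009 dollars), from the figures cited above.
    shares = {
        "under $25,000":    {1967: 22, 2009: 18},
        "$25,000-$75,000":  {1967: 62, 2009: 39},
        "$75,000 and over": {1967: 16, 2009: 43},
    }

    for year in (1967, 2009):
        total = sum(bracket[year] for bracket in shares.values())
        print(f"{year}: brackets sum to {total} percent")   # 100 in both years

    middle_change = shares["$25,000-$75,000"][2009] - shares["$25,000-$75,000"][1967]   # -23
    upper_change = shares["$75,000 and over"][2009] - shares["$75,000 and over"][1967]  # +27
    print(f"middle bracket: {middle_change:+d} points; upper bracket: {upper_change:+d} points")
    # The 23-point shrinkage of the middle (plus 4 points at the bottom) reappears
    # entirely as a 27-point gain at the top: the middle class moved up, not down.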

This post first appeared at CafeHayek.

Donald Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

Neoliberalism: Making a Boogeyman Out of a Buzzword by Max Borders

After Salon.com stopped being interesting, they needed a way to drive traffic. Competition for eyeballs is tough, after all. In the dog-eat-dog world of attracting eyeballs, you’ve got to find clever ways to pull in new readers.

One way to drive traffic is to poke people you know disagree with you. And by poking, I mean turning them into a Voodoo Doll.

This variation on beating up a Straw Man has the benefit of the Internet’s sharing magic. That is, if you pick on some group they will feel it. Then they will turn around and express their outrage by sharing your stuff! Voila: instant Internet gold.

In making Voodoo Dolls, you don’t always have to pick on a specific person. You can go for a worldview. Salon has given libertarianism a lot of flak, of course. But now they’re going for an even bigger boogeyman, because the idea is to paint as many people as you can with the same tarbrush.

What better place to go for a big, sweeping label than the academy?

Here’s UC-Berkeley political science professor Wendy Brown talking “neoliberalism” in a Salon interview.

And how do you define neoliberalism? It’s not uncommon for me to experience people I’d consider neoliberals telling me the term is meaningless.

I think most Salon readers would know neoliberalism as that radical free-marketeering that comes to us in the ‘70s and ‘80s, with the Reagan-Thatcher revolution being the real marker of that turn in Euro-Atlantic world. It means the dismantling of publicly owned industry and deregulation of capital, especially finance capital; the elimination of public provisions and the idea of public goods; and the most basic submission of everything to markets and to unregulated markets.

So free enterprise is its clarion call, and even though it requires a lot of state intervention and state support, the idea that goes with it is usually also minimal state intervention in markets. Even if states are needed to prop or support or sometimes bail out markets, they shouldn’t get into the middle of them and redistribute [wealth]. That’s all true. That’s certainly part of what neoliberalism is.

Okay, let’s see if we can make heads or tails of this magician’s patter.

Start with Professor Brown’s concern that people have criticized the term neoliberalism as being meaningless. This doctrine, Brown says, “requires a lot of state intervention and state support, the idea that goes with it is usually also minimal state intervention in markets.”

Huh? If neoliberalism isn’t exactly libertarianism or anarcho-capitalism — because these doctrines certainly do not include or require state intervention and support of markets — then we might say she’s talking about cronyism. And certainly if someone were to build a doctrine around cronyism, that would not be meaningless.

It turns out such a doctrine does exist. But it’s not neoliberalism; it’s corporatism — and it’s a progressivist ideology.

According to Nobel laureate Edmund S. Phelps, quoted in the Freeman:

The managerial state has assumed responsibility for looking after everything from the incomes of the middle class to the profitability of large corporations to industrial advancement. This system . . . is . . . an economic order that harks back to Bismarck in the late nineteenth century and Mussolini in the twentieth: corporatism.

Phelps says,

In various ways, corporatism chokes off the dynamism that makes for engaging work, faster economic growth, and greater opportunity and inclusiveness. It maintains lethargic, wasteful, unproductive, and well-connected firms at the expense of dynamic newcomers and outsiders, and favors declared goals such as industrialization, economic development, and national greatness over individuals’ economic freedom and responsibility.

Today, airlines, auto manufacturers, agricultural companies, media, investment banks, hedge funds, and much more has [sic] at some point been deemed too important to weather the free market on its own, receiving a helping hand from government in the name of the “public good.”

But where does this idea come from? Contra Brown, it’s not from the “free marketeers”. Economist Thayer Watkins says:

In the last half of the 19th century people of the working class in Europe were beginning to show interest in the ideas of socialism and syndicalism. Some members of the intelligentsia, particularly the Catholic intelligentsia, decided to formulate an alternative to socialism which would emphasize social justice without the radical solution of the abolition of private property.

The result was called Corporatism. The name had nothing to do with the notion of a business corporation except that both words are derived from the Latin word for body, corpus.

To be fair, Brown might protest, arguing that she would subsidize, cartelize, and manage the right industries, such as finance. At least she laments the liberalization of these industries, citing Thatcher as an example of neoliberal excess, despite what a basket case Britain had been under prior governments.

So which industries would she leave private and which “require a lot of state intervention”? And what sort of magic makes any such scheme immune to rent-seeking and capture?

It appears state support of business originated among certain less-communist advocates of social justice. But surely this is not something the more moderate progressives had in mind.

After all, says Brown, “What’s more, if those of us who oppose neoliberalism misinterpret it as simply another word for capitalism, we make the job of fighting it even more difficult. Franklin Delano Roosevelt was a capitalist, after all. But a neoliberal, he most certainly was not.”

Libertarian philosopher Jason Brennan says it’s time to point fingers and name names. In a rare polemic called “Dear Left: Corporatism is Your Fault” he writes,

America is suffering from rampant, run-away corporatism and crony capitalism. We are increasingly a plutocracy in which government serves the interests of elite financiers and CEOs at the expense of everyone else.

You know this and you complain loudly about it. But the problem is your fault. You caused this state of affairs. Stop it.

But the moderate left didn’t want radical socialism. They just wanted regulatory agencies to rein in the excesses of the market. They wanted the government to subsidize or own areas that ought to be considered public goods, like healthcare, transportation, education, and the environment. But good intentions are not enough, writes Brennan.

We told you this would happen, but you wouldn’t listen. You complain, rightly, that regulatory agencies are controlled by the very corporations they are supposed to constrain. Well, yeah, we told you that would happen. When you create power—and you people love to create power—the unscrupulous seek to capture that power for their personal benefit. Time and time again, they succeed. We told you that would happen, and we gave you an accurate account of how it would happen.

You complain, perhaps rightly, that corporations are just too big. Well, yeah, we told you that would happen. When you create complicated tax codes, complicated regulatory regimes, and complicated licensing rules, these regulations naturally select for larger and larger corporations. We told you that would happen. Of course, these increasingly large corporations then capture these rules, codes, and regulations to disadvantage their competitors and exploit the rest of us. We told you that would happen.

Brennan was probably a little upset when he wrote this, but fairly so. People like Wendy Brown have been trying to emblazon corporatism on the tunics of free marketeers and liberalizers for a while now. And they’re generally pontificating from the academy, rather than from the brothels of K St. in Washington, or Venezuela’s Ministry of Planning and Finance.

No one who calls herself a political science professor should have earned her letters without having read public choice theory. No, it’s time to admit that all progressive attempts to stitch together old scraps of socialism with markets will create perverse effects and corruption of one form or another.

Maybe Prof. Brown is okay with “corporatizing” some industries while leaving others in private hands, a la FDR. Hers seems to be an attempt to synthesize the heart of Marx with the will of the people. She says:

“Demos kratia” — “people rule” — is really the term that, however differently it’s been interpreted over different variations of democracy and different centuries, is one that we all cherish on some level. Demos is important because it’s the body, it’s the people, that we imagine are in control of the basic conditions and laws that govern our lives.

Ah, yes, “the body,” the corpus. Haven’t we heard that one before? We’re supposed to cherish democracy because, well, it’s as American as apple pie. Any more reflection would require admitting that the “demos” disagrees about stuff. And that’s a slippery slope to individualism and recognizing the need for tolerance and personal autonomy. This is the fact of pluralism that even the liberal philosopher John Rawls starts with.

Whenever you hear the word neoliberalism, be wary. It could be completely meaningless filler, but it’s always as squishy as silly putty. It’s a label that’s designed to demonize those who would never support it — a word to be accompanied by a sneer. It is a means of defining oneself as against something — preferably a nice soft Straw Man — rather than doing the hard business of coming out ideologically and defending your ideas.

When you realize that state intervention is a matter of degree and not of kind, it becomes clear the Wendy Browns have nowhere to run but to nebulous concepts like “demos.” That is because between corporatism and communism there is no magical third way, only shades of state coercion justified by a flimsy majoritarian facade. An industry is either nationalized or regulated, a binary choice, so the ideological choice set is really only between communism and corporatism. But communism screwed things up. Corporatism screws things up. All the variations screw things up, because each permutation involves power and business forming unholy alliances.

People like Wendy Brown and her Salon interviewer Elias Isquith aren’t stupid. And like most people, they have good intentions. They are committed to a particular theory of angels. Demos, that golden calf, is the tired old notion that if we could just blur the peculiarities, individuality, and desires of 300 million people into a single prayer and send it up through the voting booth, what will come out the other side — in Washington, D.C. — is a kind of secular salvation. But this sort of thinking turns on hypostatization, that timeless fallacy of ambiguity that seduces people into collectivism.

We have to look them squarely in the face and say: “You caused this state of affairs. Stop it.”


Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

What Can the Government Steal? Anything It Pays For! by Daniel Bier

“…Nor shall private property be taken for public use, without just compensation.” – Fifth Amendment to the U.S. Constitution 

On Monday, I wrote about the Supreme Court’s decision in the case of Horne v. USDA, in which the Court ruled almost unanimously against the government’s attempt to confiscate a third of California raisin farmers’ crops without paying them a dime for it.

The confiscation was part of an absurd FDR-era program meant to increase the price of food crops by restricting the supply; the government would then sell or give away the raisins to foreign countries or other groups.

Overall, this ruling was a big win for property rights (or, at least, not the huge loss it could have been).

But there’s one issue that’s been overlooked here, and it relates to the Court’s previous decision in Kelo v. City of New London, the eminent domain case that also just turned 10 horrible years old yesterday.

In Horne, eight justices concluded that physically taking the farmers’ raisins and carting them away in trucks was, in fact, a “taking” under the Fifth Amendment that requires “just compensation.”

That sounds like common sense, but the Ninth Circuit Court of Appeals had ruled that the seizure wasn’t a taking that required compensation because, in their view, the Fifth Amendment gives less protection to “personal property” (i.e., stuff, like raisins or cars) than to “real property” (i.e., land).

The Court thankfully rejected this dangerous and illogical premise.

But while eight justices agreed on the basic question of the taking, only five agreed on the matter of just compensation.

The majority concluded that the government had to pay the farmers the current market value of the crops they wanted to take, which is standard procedure in a takings case (like when the government wants to take your home to build a road).

Justice Breyer (joined by Justices Ginsburg and Kagan) wrote a partial dissent, endorsing the federal government’s position that the case should be sent back to the lower court to calculate what the farmers were owed.

Their curious reasoning was that, since the government’s program was distorting the market and pushing up the price of raisins, the government should be able to subtract the benefit the farmers received from that artificially inflated price from the value of the raisins that were taken. Under this calculation, the government argued, the farmers would actually end up getting more value than was taken from them.

Chief Justice Roberts, writing for the majority, derided this argument: “The best defense may be a good offense, but the Government cites no support for its hypothetical-based approach.”

But the most interesting part of this subplot came from Justice Thomas. Thomas fully agreed with Roberts’ majority opinion, but he wrote his own one-page concurrence on the question of how to calculate “just compensation,” and it went right at the heart of Kelo.

In Kelo, a bare majority of the Court ruled that the government could seize people’s homes and give them to private developers, on the grounds that the government expected more taxes from the new development.

Marc Scribner explains how the Court managed to dilute the Fifth Amendment’s “public use” requirement into a “public purpose” excuse that allows the government to take property for almost any reason it can dream up.

Thomas’s concurrence disputes Breyer’s argument about calculating “just compensation” by pointing out that, had Kelo been correctly decided, the government wouldn’t be allowed to take the farmers’ crops at all — even if it paid for them.

Thomas wrote (emphasis mine),

The Takings Clause prohibits the government from taking private property except “for public use,” even when it offers “just compensation.”

And quoting his dissent in Kelo:

That requirement, as originally understood, imposes a meaningful constraint on the power of the state — ”the government may take property only if it actually uses or gives the public a legal right to use the property.”

It is far from clear that the Raisin Administrative Committee’s conduct meets that standard. It takes the raisins of citizens and, among other things, gives them away or sells them to exporters, foreign importers, and foreign governments.

To the extent that the Committee is not taking the raisins “for public use,” having the Court of Appeals calculate “just compensation” in this case would be a fruitless exercise.

Unfortunately, Chief Justice Roberts already treats the “public use” requirement as a dead letter, writing at one point in his opinion: “The Government correctly points out that a taking does not violate the Fifth Amendment unless there is no just compensation.”

But that isn’t true. A taking violates the Fifth Amendment, first and foremost, if it is not taken for “public use.” And confiscating raisins and giving them to foreign governments in order to keep the price of raisins in the United States artificially high does not, in any sane world, meet that standard.

What Thomas didn’t say, but clearly implied, was that the Court should have struck down the raisin-stealing scheme entirely, rather than just forcing the government to pay for the crops it takes.

The Horne decision was good news, but it didn’t go far enough, because it imposed no meaningful limit on what counts as “public use.” The Court could have done that in this case by overturning Kelo, or at least by adding some limitations on what governments can lawfully take private property for.

Happily, Justice Thomas isn’t throwing in the towel on Kelo, and Justice Scalia has predicted that the decision will eventually be overturned.

So can the government still take your property for no good reason? Yes, for now. But at least they have to pay for it.

That’s not nothing. And for raisin farmers in California, it’s a whole lot.


Daniel Bier

Daniel Bier is the editor of Anything Peaceful. He writes on issues relating to science, civil liberties, and economic freedom.

Against Eco-pessimism: Half a Century of False Bad News by Matt Ridley

Pope Francis’s new encyclical on the environment (Laudato Si’) warns of the coming environmental catastrophe (“unprecedented destruction of ecosystems, with serious consequences for all of us”). It’s the latest entry in a long literary tradition of environmental doomsday warnings.

In contrast, Matt Ridley, bestselling author of Genome, The Agile Gene, and The Rational Optimist, and recipient of the 2012 Julian Simon Memorial Award from the Competitive Enterprise Institute, says this outlook has been proven wrong time and again. This is the full text of his acceptance speech. Video is embedded below.

It is now 32 years, nearly a third of a century, since Julian Simon nailed his theses to the door of the eco-pessimist church by publishing his famous article in Science magazine: “Resources, Population, Environment: An Oversupply of False Bad News.”

It is also 40 years since The Limits to Growth and 50 years since Silent Spring, plenty long enough to reflect on whether the world has conformed to Malthusian pessimism or Simonian optimism.

Before I go on, I want to remind you just how viciously Simon was attacked for saying that he thought the bad news was being exaggerated and the good news downplayed.

Verbally at least Simon’s treatment was every bit as rough as Martin Luther’s. Simon was called an imbecile, a moron, silly, ignorant, a flat-earther, a member of the far right, a Marxist.

“Could the editors have found someone to review Simon’s manuscript who had to take off his shoes to count to 20?” said Paul Ehrlich.

Ehrlich, together with John Holdren, then launched a blistering critique, accusing Simon of lying about electricity prices having fallen. It turned out they were basing their criticism on a typo in a table, as Simon discovered by calling the table’s author. To which Ehrlich replied: “what scientist would phone the author of a standard source to make sure there were no typos in a series of numbers?”

Answer: one who likes to get his facts right.

Yet for all the invective, his critics have never laid a glove on Julian Simon then or later. I cannot think of a single significant fact, data point or even prediction where he was eventually proved badly wrong. There may be a few trivia that went wrong, but the big things are all right. Read that 1980 article again today and you will see what I mean.

I want to draw a few lessons from Julian Simon’s battle with the Malthusian minotaur, and from my own foolhardy decision to follow in his footsteps – and those of Bjorn Lomborg, Ron Bailey, Indur Goklany, Ian Murray, Myron Ebell and others – into the labyrinth a couple of decades later.

Consider the words of the publisher’s summary of The Limits to Growth: “Will this be the world that your grandchildren will thank you for? A world where industrial production has sunk to zero. Where population has suffered a catastrophic decline. Where the air, sea, and land are polluted beyond redemption. Where civilization is a distant memory. This is the world that the computer forecasts.”

Again and again Simon was right and his critics were wrong.

Would it not be nice if just one of those people who called him names piped up and admitted it? We optimists have won every intellectual argument and yet we have made no difference at all. My daughter’s textbooks trot out the same old Malthusian dirge as mine did.

What makes it so hard to get the message across?

I think it boils down to five adjectives: ahistorical, finite, static, vested and complacent. The eco-pessimist view ignores history, misunderstands finiteness, thinks statically, has a vested interest in doom and is complacent about innovation.

People have very short memories. They are not just ignoring, but unaware of, the poor track record of eco-pessimists. For me, the fact that each of the scares I mentioned above was taken very seriously at the time, attracting the solemn endorsement of the great and the good, should prompt real skepticism about global warming claims today.

That’s what motivated me to start asking to see the actual evidence about climate change. When I did so, I could not find one piece of data – as opposed to a model – that shows either unprecedented change or change that is anywhere close to causing real harm.

Yet when I made this point to a climate scientist recently, he promptly and cheerily said that “the fact that people have been wrong before does not make them wrong this time,” as if this somehow settled the matter for good.

Second, it is enormously hard for people to grasp Simon’s argument that “Incredible as it may seem at first, the term ‘finite’ is not only inappropriate but downright misleading in the context of natural resources.”

He went on: “Because we find new lodes, invent better production methods and discover new substitutes, the ultimate constraint upon our capacity to enjoy unlimited raw materials at acceptable prices is knowledge.” This is a profoundly counterintuitive point.

Yet was there ever a better demonstration of this truth than the shale gas revolution? Shale gas was always there; but what made it a resource, as opposed to not a resource, was knowledge – the practical know-how developed by George Mitchell in Texas. This has transformed the energy picture of the world.

Besides, as I have noted elsewhere, it’s the renewable – infinite – resources that have a habit of running out: whales, white pine forests, buffalo. It’s a startling fact, but no non-renewable resource has yet come close to exhaustion, whereas lots of renewable ones have.

And by the way, have you noticed something about fossil fuels – we are the only creatures that use them. What this means is that when you use oil, coal or gas, you are not competing with other species. When you use timber, or crops or tide, or hydro or even wind, you are.

There is absolutely no doubt that the world’s policy of encouraging the use of bio-energy, whether in the form of timber or ethanol, is bad for wildlife – it competes with wildlife for land, or wood or food.

Imagine a world in which we relied on crops and wood for all our energy and then along comes somebody and says here’s this stuff underground that we can use instead, so we don’t have to steal the biosphere’s lunch.

Imagine no more. That’s precisely what did happen in the industrial revolution.

Third, the Malthusian view is fundamentally static. Julian Simon’s view is fundamentally dynamic. Again and again when I argue with greens I find that they simply do not grasp the reflexive nature of the world, the way in which prices cause the substitution of resources or the dynamic properties of ecosystems – the word equilibrium has no place in ecology.

Take malaria. The eco-pessimists insisted until recently that malaria must get worse in a warming 21st century world. But, as Paul Reiter kept telling them to no avail, this is nonsense. Malaria disappeared from North America, Russia and Europe and retreated dramatically in South America, Asia and Africa in the twentieth century even as the world warmed.

That’s not because the world got less congenial to mosquitoes. It’s because we moved indoors and drained the swamps and used DDT and malaria medications and so on. Human beings are a moving target. They adapt.

But, my fourth point, another reason Simon’s argument fell on stony ground is that so many people had and have a vested interest in doom. Though they hate to admit it, the environmental movement and the scientific community are vigorous, healthy, competitive, cut-throat, free markets in which corporate leviathans compete for donations, grants, subsidies and publicity. The best way of getting all three is to sound the alarm. If it bleeds it leads. Good news is no news.

Imagine how much money you would get if you put out an advert saying: “we now think climate change will be mild and slow, none the less please donate”. The sums concerned are truly staggering. Greenpeace and WWF, the General Motors and Exxon of the green movement, between them raise and spend a billion dollars a year globally. WWF spends $68m alone on educational propaganda. Frankly, Julian, Bjorn, Ron, Indur, Ian, Myron and I are spitting in the wind.

Yet, fifth, ironically, a further problem is complacency. The eco-pessimists are the Panglossians these days, for it is they who think the world will be fine without developing new technologies. Let’s not adopt GM food – let’s stick with pesticides.

Was there ever a more complacent doctrine than the precautionary principle: don’t try anything new until you are sure it is safe? As if the world were perfect. It is we eco-optimists, ironically, who are acutely aware of how miserable this world still is and how much better we could make it – indeed how precariously dependent we are on still inventing ever more new technologies.

I had a good example of this recently debating a climate alarmist. He insisted that the risk from increasing carbon dioxide was acute and that we therefore needed to cut our emissions drastically, by 90 percent or so. In vain did I try to point out that cutting emissions by 90 percent might do more harm to the poor and the rain forest than anything the emissions themselves might do. That we are taking chemotherapy for a cold, putting a tourniquet round our neck to stop a nosebleed.

My old employer, the Economist, is fond of a version of Pascal’s wager – namely that however small the risk of catastrophic climate change, the impact could be so huge that almost any cost is worth bearing to avert it. I have been trying to persuade them that the very same logic applies to emissions reduction.

However small the risk that emissions reduction will lead to planetary devastation, by the same logic almost any price is worth paying to avert it, including bearing the tiny risk that carbon emissions will destabilize the climate. Just look at Haiti to understand that getting rid of fossil fuels is a huge environmental risk.

That’s what I mean by complacency: complacently assuming that we can decarbonize the economy without severe ecological harm, complacently assuming that we can shut down world trade without starving the poor, that we can grow organic crops for seven billion people without destroying the rain forest.

Having paid homage to Julian Simon’s ideas, let me end by disagreeing with him on one thing. At least I think I am disagreeing with him, but I may be wrong.

He made the argument, which was extraordinary and repulsive to me when I first heard it as a young and orthodox eco-pessimist, that the more people in the world, the more invention. That people were brains as well as mouths, solutions as well as problems. Or as somebody once put it: why is the birth of a baby a cause for concern, while the birth of a calf is a cause for hope?

Now there is a version of this argument that – for some peculiar reason – is very popular among academics, namely that the more people there are, the greater the chance that one of them will be a genius, a scientific or technological Messiah.

Occasionally, Julian Simon sounds like he is in this camp. And if he were here today (and by Zeus, I wish he were), I would try to persuade him that this is not the point: what counts is not how many people there are but how well they are communicating. I would tell him about the new evidence from Paleolithic Tasmania, from Mesolithic Europe, from the Neolithic Pacific, and from the internet today, that it’s trade and exchange that breeds innovation, through the meeting and mating of ideas.

That the lonely inspired genius is a myth, promulgated by Nobel prizes and the patent system. This means that stupid people are just as important as clever ones; that the collective intelligence that gives us incredible improvements in living standards depends on people’s ideas meeting and mating, more than on how many people there are. That’s why a little country like Athens or Genoa or Holland can suddenly lead the world. That’s why mobile telephony and the internet have no inventor, not even Al Gore.

Not surprisingly, academics don’t like this argument. They just can’t get their pointy heads around the idea that ordinary people drive innovation just by exchanging and specializing. I am sure Julian Simon got it, but I feel he was still flirting with the outlier theory instead.

The great human adventure has barely begun. The greenest thing we can do is innovate. The most sustainable thing we can do is change. The only limit is knowledge. Thank you Julian Simon for these insights.

2012 Julian L. Simon Memorial Award Dinner from CEI Video on Vimeo.

Anything Peaceful

Anything Peaceful is FEE’s new online ideas marketplace, hosting original and aggregate content from across the Web.

Bed Bugs Are the New Plague by Jeffrey A. Tucker

It must have been pretty rotten to sleep in, say, 12th-century Europe. Your floor was dirt. Your mattress was made from hay or bean husks. The biggest drag of all must have been the bed bug problem. It’s not so fabulous to lie there asleep while thousands of ghastly critters gnaw on your flesh. You wake with rashes all over your body.

They heal gradually in the course of the day, but, at night, it starts all over again.

No, they don’t kill you. But they surely make life desperate and miserable. They know where you are. They sense the carbon dioxide. They are after your blood, so they can stay alive. No wonder some people have been driven to suicide.

It stands to reason that among the earliest priorities of civilized life was the total eradication of bed bugs. And we did it! Thanks to modern pesticides, most especially DDT, generations knew not the bed bug.

That is, at least in capitalist countries. I have a friend from Russia whose mother explained the difference between capitalism and socialism as summed up in bed bugs. In the 1950s, capitalist countries had eliminated them. The socialist world, by contrast, faced an epidemic.

But you know what? They are back with an amazing ferocity, right here in 21st century America.

There is a new book getting rave reviews and high sales: Infested: How the Bed Bug Infiltrated Our Bedrooms and Took Over the World.

You can attend Bed Bug University, which is “an intensive four day course that covers bed bug biology and behavior, treatment protocols and explores the unique legal challenges and business opportunities of bed bugs.”

You can browse the Bed Bug Registry, with dozens of reports coming in from around the country. You can call a local company that specializes in keeping them at bay.

Welcome to the post-DDT world in which fear of pesticides displaced fear of the thing that pesticides took away. Oh, how glorious it is to embrace nature and all its ways — until nature begins to feed on you in your sleep.

The various restrictions and bans from the 1970s have gradually brought back the nightmares that wonderful, effective, killer chemicals took away. Some people claim that today not even DDT works because the new strain of bed bug is stronger than ever.

Forget innovating with new pesticides: the restrictions are just too tight. There is not a single product at your local big box hardware store that can deal with these blood suckers. And the more-or-less effective products that are available online, such as Malathion, are not approved for indoor use — and I know for sure that everyone obeys such rules!

In our current greeny ethos, people are suggesting “natural” methods such as: “take all of your laundry and bedding to the Laundromat and wash and dry it at high temperatures.”

Why not do it at home? Well, thanks to federal regulations, your water heater is shipped with a maximum temperature setting of 110 degrees, which is something like a luxurious bath for the bed bug. Add your detergent — which, by government decree, no longer has phosphates — and your wash turns into Mr. Bubble happy time for Mr. Bed Bug.

So you could stand over gigantic pots of boiling water in your kitchen, fishing bedding in and out, beating your mattresses outside with sticks, and otherwise sleeping in plastic bags, like they do in the new season of “Orange Is the New Black.” You know, like in prison. Or like in the 12th century.

No matter how modernized we become, no matter how many smartphones and tablets we acquire, we still have to deal with the whole problem of nature trying to eat us — in particular, its most wicked part, the man-eating insect. There is no app for that.

Google around on how many people die from mosquitos, and you are immediately struck by the ghastly reality: These things are even more deadly than government. And that’s really saying something.

But somehow, starting in the late 1960s, we began to forget this. Capitalism achieved a wonderful thing, and we took it for granted. We banned the chemicals that saved us, and gradually came to prohibit the creation of more. We feared a “Silent Spring” but instead created a nation in which the noise we hear at night is of an army of bugs sinking their teeth into our flesh.

A little silence would be welcome.

So here we are: mystified, afraid to lie down and sleep, afraid to buy a sofa from Craigslist, boiling our sheets, living in fear of things we can’t see. It’s the Dark Ages again. It gets worse each year, especially during summer when the bed bugs leave their winter hibernation and gather en masse to become our true and living nightmare.

How bad does it have to get before we again unleash the creative forces of science and capitalism to restore a world that is livable for human beings?


Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

The New Republic: The Dumb Libertarian Era Is Here by Max Borders

As civilized human beings, we are the inheritors, neither of an inquiry about ourselves and the world, nor of an accumulating body of information, but of a conversation, begun in the primeval forests and extended and made more articulate in the course of centuries. It is a conversation which goes on both in public and within each of ourselves. – Michael Oakeshott

What do academics see when they stare down upon the rest of America? Columbia’s Mark Lilla, at least, thinks he sees a “libertarian age.”

Writing in the New Republic, Lilla wraps his punchline in a shroud of obscurity, concluding,

The libertarian age is an illegible age. It has given birth to a new kind of hubris unlike that of the old master thinkers.

Our hubris is to think that we no longer have to think hard or pay attention or look for connections, that all we have to do is stick to our “democratic values” and economic models and faith in the individual and all will be well.

Having witnessed unpleasant scenes of intellectual drunkenness, we have become self-satisfied abstainers removed from history and unprepared for the challenges it is already bringing.

Lilla suggests the old master thinkers knew better how to understand the great arc of history because they had an ideology. But we don’t.

“Our libertarianism operates differently,” writes Lilla, “it is supremely dogmatic, and like every dogma it sanctions ignorance about the world, and therefore blinds adherents to its effects in that world. It begins with basic liberal principles — the sanctity of the individual, the priority of freedom, distrust of public authority, tolerance — and advances no further.”

Now that’s strange. The normal line is that libertarians are too ideological. Of course it’s true that a form of libertarianism that advances no further than a few platitudes or axioms would be an anemic sort of libertarianism.

But the point of libertarianism is not to fill our lives with specific virtues and values; rather, it is to provide a superstructure for various moral communities to coexist peacefully.

A Libertarian Age?

Even if one agrees a libertarian age is upon us, the cock has only just crowed. According to Lilla, though, because this age is not rooted in an ideology, it is marked by an errant attitude that somehow washed over us after the fall of communism in place of all ideology. If that’s the case, why call it “libertarian”?

To describe this age as Lilla does is to fundamentally misunderstand the word libertarian, or at least to use it haphazardly as a convenient, if denigrating, label. To misunderstand the word is also a failure to appreciate a living tradition that is only now beginning to flower in the digital era.

When I think about that rich, expanding tradition, I think of economic historian Deirdre McCloskey. She offers the kinds of connections Lilla might like to see, especially in her excellent The Bourgeois Virtues. I doubt, however, those connections are the ones Lilla would like us to draw.

Here’s McCloskey choosing not to abstain:

The master narrative of High Liberalism [modern, left-liberalism] is mistaken factually.

Externalities do not imply that a government can do better. Publicity does better than inspectors in restraining the alleged desire of businesspeople to poison their customers. Efficiency is not the chief merit of a market economy: innovation is. Rules arose in merchant courts and Quakers fixed prices long before governments started enforcing them.

I know such replies will be met with indignation. But think it possible you may be mistaken, and that merely because an historical or economic premise is embedded in front page stories in the New York Times [or The New Republic] does not make them sound as social science.

It seems to me that a political philosophy based on fairy tales about what happened in history or what humans are like is going to be less than useless. It is going to be mischievous.

It’s true. There is no ideology here, just the sum of facts.

A Narrative, an Ideology

But Lilla thinks he has a different and better narrative about history — one that is not so devoid of ideology. It’s difficult to say what that narrative is, because Lilla is so vague in his critique — so much so that one wonders if he’s simply dissatisfied with the want of ideology and hopes to put a sticker on it. He reaches for a sticker. “Libertarian” will do.

The closest we get to any proposed counternarrative comes in who Lilla would award for attempting to fix the Middle East: “The next Nobel Peace Prize should not go to a human rights activist or an NGO founder. It should go to the thinker or leader who develops a model of constitutional theocracy giving Muslim countries a coherent way of recognizing yet limiting the authority of religious law and making it compatible with good governance.”

Notice he did not say a working model, nor a successfully implemented model. Just a model. Despite the nod to a people’s history and culture, he wants to see more intellectuals with models.

Political philosopher Michael Oakeshott once said, “Like Midas, the Rationalist is always in the unfortunate position of not being able to touch anything, without transforming it into an abstraction; he can never get a square meal of experience.”

But that’s just the problem with models and planning, says Deirdre McCloskey:

How do I know that my narrative is better than yours? The experiments of the 20th century told me so. It would have been hard to know the wisdom of Friedrich Hayek or Milton Friedman or Matt Ridley or Deirdre McCloskey in August of 1914, before the experiments in large government were well begun.

But anyone who after the 20th century still thinks that thoroughgoing socialism, nationalism, imperialism, mobilization, central planning, regulation, zoning, price controls, tax policy, labor unions, business cartels, government spending, intrusive policing, adventurism in foreign policy, faith in entangling religion and politics, or most of the other thoroughgoing 19th-century proposals for governmental action are still neat, harmless ideas for improving our lives is not paying attention.

Or perhaps they’re failing to “look for connections.”

No Good Reason

But there’s more. Lilla writes:

Libertarianism’s dogmatic simplicity explains why people who otherwise share little can subscribe to it: small-government fundamentalists on the American right, anarchists on the European and Latin American left, democratization prophets, civil liberties absolutists, human rights crusaders, neoliberal growth evangelists, rogue hackers, gun fanatics, porn manufacturers, and Chicago School economists the world over.

The dogma that unites them is implicit and does not require explication; it is a mentality, a mood, a presumption — what used to be called, non-pejoratively, a prejudice.

Got that? A mood. A dogma. A prejudice.

Let’s assume that we all agree about what the words dogma and prejudice mean. A dogma is not an ideology because it offers no reasons for anyone’s commitments. A prejudice is simply a disposition to believe something, perhaps also for no good reason at all.

That means libertarians have no good reason to be suspicious of power (such as police power excesses in Baltimore or Ferguson), no good reason to commit to smaller government (like bank bailouts or military adventurism), no reason to believe that open trade helps the world develop (despite all the evidence), no reason to protect expression, no reason to acknowledge the social benefits of emergent order, and no reason to create a digital currency (Argentine inflation is fine).

Voluntary cooperation or the free flow of ideas, people, capital, and goods? These are all just byproducts of our dumb post-ideological age. Why? Because, according to Lilla, libertarianism is just a dogma.

To understand history through the lens of people with power screwing things up more than helping is not an abstention, and it is not illegible. The relationship between people with coercive power and the rest is our historical-ideological filter, and that’s just for starters.

Rational Irrationality

Lilla’s mischief does not just extend to history. That failure to understand libertarianism hangs about his thesis, too.

For example, a libertarian does not admire “democratic values,” as Lilla suggests. These are the values of those who would trade the one-headed master for the many-headed one. Libertarians don’t find much value in masters at all.

Majoritarian elections don’t harness the wisdom of crowds, as Bryan Caplan reminds us in The Myth of the Rational Voter. Such wisdom can only be gained by people who are more directly accountable for their actions, who have more skin in the game, or who feel the invisible threads of community animating them in common missions. That’s not electoral politics, though.

Voters, as such, are hopelessly biased, because they don’t pay directly for what they pray for in the voting booth. So yeah, democracy is overrated. It’s certainly not something most libertarians wish to export or impose on people with twelfth-century cultures and mores. Nor is it a twenty-first-century social operating system for a free people.

Libertarians prefer organizations, markets, and community groups that compete for mindshare and marketshare. But organizations, markets, and community groups only emerge in the fertile soil of free institutions. That’s why libertarians like voluntary systems with rule of law, porous borders, and rights of exit.

Individuals coordinate either in support of organizational goals, or they participate in an order no individual could have planned. Both forms of order are beautiful — at least to the libertarian. But we certainly don’t expect to find such orders everywhere.

The Problem of Power

What about acquiescence to “public authority”? Yes, we are skeptical. And it’s true we are more interested in shedding authority, because power interferes with people’s life projects and communities. We don’t have this skepticism due to habit or breeding. We have it because we want to live the kind of happy and fulfilled lives that come from a decentralized discovery process, one that doesn’t figure into any planner’s plans. Yet planners are constantly trying to plan despite those life projects. You might say we’re not living in a “libertarian age,” but in a regulated age.

But Lilla insists our libertarian age is one marked by people failing to “think hard, or pay attention, or look for connections.” This is the sort of thing that might make progressives in the New York salon nod in vigorous assent, but it’s the nodding of those who have no idea what they’re talking about, the affectations and social signals of the salon.

The libertarian worldview is not based on technocratic dreams, government largess, or “communitarian” fancies in which elites concoct statutory schemes to blanket the land with unitary control. If this were really a libertarian age, we would not be arguing over whether or not we are “self-satisfied abstainers.”

We would have a lot more opt-in systems — not everywhere, but in enough places, including the U.S. We would be a nation of joiners again. We could, as Paul Emile de Puydt suggested, “move from republic to monarchy, from representative government to autocracy, from oligarchy to democracy, or even to Mr. Proudhon’s anarchy — without even the necessity of removing [our] dressing gown or slippers.”

But this is not the age we live in.

The Coming Libertarian Age

The coming libertarian age will be marked not by a failure to think about the meaning of history. It will be marked by people participating in the creation of new communities, governance structures, businesses, and networks — building them up like coral reefs.

“Everyday forms of resistance make no headlines,” says James C. Scott in Two Cheers for Anarchism.

Just as millions of anthozoan polyps create, willy-nilly, a coral reef, so do thousands upon thousands of individual acts of insubordination and evasion create a political or economic barrier reef of their own. There is rarely any dramatic confrontation, any moment that is particularly newsworthy.

And whenever, to pursue the simile, the ship of state runs aground on such a reef, attention is typically directed to the shipwreck itself and not to the vast aggregation of petty acts which made it possible.

If there is anything to terrify Lilla and the New Republic, it is that libertarian age. Technocracy runs aground on the coral reefs of genuine connection and decentralized market participation.

So in order to critique this “new kind of hubris,” Lilla should really tell us more about the hubris of the old master thinkers. I recall the organized-perfection society of Plato, whose order would be planned based on some, well, Platonic ideal about the virtuous person who would rule. Perhaps Lilla is referring to master thinkers like Bentham, who reduced humanity to an aggregate of hedonic calculation machines, which has given rise to an entire field of mathematical macroeconomics that lobotomizes the individual and ignores real people. Then there is of course Karl Marx, whose ideology left scores of millions destitute or dead.

Lilla cautions us not to ignore Marx’s concerns, even though the Marxists themselves left scorched earth. We still need ideology, he thinks:

The end of the cold war destroyed whatever confidence in ideology still remained in the West. But it also seems to have destroyed our will to understand. We have abdicated. The libertarian dogma of our time is turning our polities, economies, and cultures upside down—and blinding us to this by making us even more self-absorbed and incurious than we naturally are. The world we are making with our hands is as remote from our minds as the farthest black hole. Once we had a nostalgia for the future. Today we have an amnesia for the present.

Destroyed our will to understand? Libertarian dogma means “turning our polities upside down”? Making us self-absorbed? What in the world is he talking about?

Is he referring to those self-absorbed and benighted souls who brought down the Berlin Wall? Or is he simply disturbed that all they could find to do after communism’s fall was start shops and buy heavy metal albums? Maybe it’s their children — the millennials with their texting and their selfies.

He doesn’t really say. He only seems to suggest we need more Isaiah Berlins. Fair enough. At least give us something we can sink our teeth into. In conflating democracy with libertarianism, perhaps Lilla thinks voters are in fact too dumb to rule and that a wise, though considerably less hubristic, elite could show us the way if we weren’t so distracted by modern amusements.

But apart from evoking the bugbear of “neoliberalism” and praying for a theocratic modeler for the Middle East he’s scant on details. Instead, all he can offer is that we have “amnesia for the present.”

Sounds deep: chicken soup for the progressive soul. To show that we’re in a vapid libertarian age, Mr. Lilla needs to cite evidence and name names. Otherwise, it’s just the same innuendo and intimation we’ve come to expect from those prepared to spin out caricatures or just-so stories to slap the L-word on them.

In the Mood

So, Dear Reader, take with you your dogmas and your prejudices and make this world freer one act of defiance at a time. Why not? Because it’s fun — just a mood — and we have the excuse of living in that insipid age.

Your dream community, your world-changing innovation, or your preferred causes have no relevance there in the Department of History at Columbia University. Participate then in the creation of your self-absorbed fantasies with a thousand acts of permissionless kindness, a thousand dollars of investment in a small business, or a thousand lines of code.

What will flow from your dogmas and your prejudices is a great coral reef — one that is created by you and others locking arms in solidarity around a thousand different causes. And may the ship of state run aground on it.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Rome: Money, Mischief, and Minted Crises by Marc Hyden & Lawrence W. Reed

Ancient Rome wasn’t built in a day, the old adage goes. It wasn’t torn down in a day either, but a good measure of its long decline to oblivion was the government’s bad habit of chipping away at the value of its own currency.

In this essay we refer to “inflation,” but in its classical sense — an increase in the supply of money in excess of the demand for money. The modern-day subversion of the term to mean rising prices, which are one key effect of inflation but not the inflation itself, only confuses the matter and points away from the real culprit, the powers in charge of the money supply.

In Rome’s day, before the invention of the printing press, money was gold and silver coin. When Roman emperors needed revenue, they did more than just tax a lot; like most governments today, they also debased the money. Think of the major difference between Federal Reserve inflation and ancient Roman inflation this way: We print, they mint(ed). The long-term effects were the same—higher prices, erosion of savings and confidence, booms and busts, and more. Here’s the Roman story.

Augustus (reigned 27 BC – 14 AD), Rome’s first real emperor, worked to establish a standardized system of coinage for the empire, building on the Roman Republic’s policies. The silver denarius became the “link coin” against which the baser and fractional coins could be valued and exchanged. Augustus set the weight of the denarius at 84 coins to the pound and its fineness at around 98 percent silver. Coins, which had only been sporadically used to pay for state expenditures in the earlier Republic, became the currency for everyday citizens and were accepted as payment for commerce and even taxation in the later Republic and into the imperial period.

Historian Max Shapiro, in his 1980 book, The Penniless Billionaires, pieces various sources together to conclude that “the volume of money he (Augustus) issued in the two decades between 27 BC and 6 AD was more than ten times the amount issued by his predecessors in the twenty years before.” The easy money stimulated a temporary boom, leading inevitably to price hikes and eventual retrenchment. Wheat and pork prices doubled, real estate rose at first by more than 150 percent. When money creation was slowed (late in Augustus’s reign and even more for a time under that of his successor, Tiberius), the house of cards came tumbling down. Prices stabilized but at the cost of recession and unemployment.

The integrity of the monetary system would remain intact until the reign of Emperor Nero (54-68 AD). He is better known for murdering his mother, preferring the arts to civic administration, and persecuting the Christians, but he was also the first to debase the standard set by Augustus. By 64 AD, he drained the Roman reserves because of the Great Fire of Rome and his profligate spending (including a gaudy palace). He reduced the weight of the denarius to 96 coins per pound and its silver content to 93 percent, which was the first debasement of this magnitude in over 250 years. This led to inflation and temporarily shook the confidence of the Roman citizenry.
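To make the “we print, they mint(ed)” point concrete, here is a minimal arithmetic sketch of what Nero’s new standard implied. The coins-per-pound and fineness figures come from the paragraphs above; the conversion of a Roman pound to roughly 327 grams is assumed here for illustration and is not a figure from this essay.

```python
# Minimal sketch of how debasement stretches a fixed silver stock into more
# coins. Coins-per-pound and fineness figures are from the essay; the
# ~327-gram Roman pound is an assumed conversion, not the essay's figure.

ROMAN_POUND_G = 327.0  # approximate weight of a Roman pound (libra), assumed


def silver_per_coin(coins_per_pound: float, fineness: float) -> float:
    """Grams of actual silver contained in one coin."""
    return ROMAN_POUND_G / coins_per_pound * fineness


augustan = silver_per_coin(84, 0.98)  # ~3.8 g of silver per denarius
neronian = silver_per_coin(96, 0.93)  # ~3.2 g of silver per denarius

# How many Neronian denarii can be struck from the silver that once made
# 1,000 Augustan denarii?
silver_stock = 1000 * augustan
print(f"Augustan denarius: {augustan:.2f} g silver")
print(f"Neronian denarius: {neronian:.2f} g silver")
print(f"1,000 old coins' worth of silver now mints {silver_stock / neronian:.0f} new ones")
```

On these numbers, the same silver yields roughly a fifth more coins, which is precisely the sense in which minting, like printing, expands the money supply.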

Many successive emperors incrementally lowered the denarius’s silver content until the philosopher-emperor Marcus Aurelius (reigned 161-180 AD) further debased the denarius to 79 percent silver to pay for constant wars and increased expenses. This was the most impure standard set for the denarius up to this point in Roman history, but the trend would continue. Aurelius’ son Commodus (reigned 177-192 AD), a gladiatorial wannabe, was likewise a spendthrift. He followed in the footsteps of his forebears and reduced the denarius to 104 coins to the pound and only 74 percent silver.

Every debasement pushed prices higher and gradually chipped away at the public faith in the Roman monetary system. The degradation of the money and increased minting of coins provided short-term relief for the state until merchants, legionaries, and market forces realized what had happened. Under Emperor Septimius Severus’ administration (reigned 193-211 AD), more soldiers began demanding bonuses to be paid in gold or in commodities to circumvent the increasingly diminished denarius. Severus’ son, Caracalla (reigned 198-217 AD), while remembered for his bloody massacres, killing his brother, and being assassinated while relieving himself, advanced the policy of debasement until he lowered the denarius to nearly 50 percent silver to pay for the Roman war machine and his grand building projects.

Other emperors, including Pertinax and Macrinus, attempted to put Rome back on solid footing by increasing the silver content or by reforming the system, but when one emperor improved the denarius, a competitor would often outbid him for the army’s loyalty, undoing the progress and frequently unseating the emperor himself. Eventually, the sun set on the silver denarius as Rome’s youngest sole emperor, Gordian III (238-244 AD), essentially replaced it with its competitor, the antoninianus.

However, by the reign of the barbarian-born Emperor Claudius II (reigned 268-270 AD), remembered for his military prowess and punching a horse’s teeth out, the antoninianus was reduced to a lighter coin that was less than two percent silver. The aurelianianus eventually replaced the antoninianus, and the nummus replaced the aurelianianus. By 341 AD, Emperor Constans I (reigned 337-350 AD) diminished the nummus to only 0.4 percent silver and 196 coins per pound. The Roman monetary system had long crashed and price inflation had been spiraling out of control for generations.

Attempts were made to create new coins similar to the Neronian standard in smaller quantities and to devise a new monetary system, but the public confidence was shattered. Emperor Diocletian (reigned 284-305 AD) is widely known for conducting the largest Roman persecution of Christians, but he also reformed the military, government, and monetary system. He expanded and standardized a program, the annona militaris, which essentially bypassed the state currency. Many Romans were now taxed and legionaries paid in-kind (with commodities).

Increasingly, Romans bartered in the marketplace instead of exchanging state coins. Some communities even created a “ghost currency,” a nonexistent medium to accurately describe the cost and worth of a product because of runaway inflation and the volatility of worthless money. Diocletian approved a policy which led to the gold standard replacing the silver standard. This process progressed into the reign of Rome’s first Christian emperor, Constantine (reigned 306-337 AD), until Roman currency began to temporarily resemble stability.

But Diocletian did something else, and it yielded widespread ruin from which the Empire never fully recovered. In the year 301 AD, to combat the soaring hyperinflation in prices, he issued his famous “Edict of 301,” which imposed comprehensive wage and price controls under penalty of death. The system of production, already assaulted by confiscatory taxes and harsh regulations as well as the derangement of the currency, collapsed. When a successor abandoned the controls a decade or so later, the Roman economy was in tatters.

The two largest expenditures in the Roman Empire were the army, which peaked at between 300,000 and 600,000 soldiers, and subsidized grain for around a third of the city of Rome. The empire’s costs gradually increased over time, as did the need for bribing political enemies, granting donatives to appease the army, purchasing allies through tributes, and funding the extravagance of Roman emperors. Revenues declined in part because many mines were exhausted, wars brought less booty into the empire, and farming decreased due to barbarian incursions, wars, and increased taxation. To meet these demands, Roman leaders repeatedly debased the silver coins, minted ever more money, and raised taxes at the same time.

In a period of about 370 years, the denarius and its successors were debased incrementally from 98 percent to less than one percent silver. The massive spending of the welfare/warfare state exacted a terrible toll in the name of either “helping” Romans or making war on non-Romans. Financial and military crises mixed with poor leadership, expediency, and a clear misunderstanding of economic principles led to the destruction of Rome’s monetary system.
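A rough back-of-the-envelope calculation, using only the endpoints quoted above (98 percent silver under Augustus, about 0.4 percent under Constans I, roughly 370 years apart), gives a sense of the average pace of debasement. This is purely illustrative; the actual path, as the essay shows, was uneven and punctuated by partial reforms.

```python
# Back-of-the-envelope: the average annual decline in fineness implied by the
# essay's endpoints (98% silver under Augustus, ~0.4% under Constans I,
# roughly 370 years apart). Illustrative only; the real series was uneven.

start_fineness = 0.98   # Augustan denarius, 27 BC
end_fineness = 0.004    # nummus under Constans I, 341 AD
years = 368             # "a period of about 370 years"

annual_factor = (end_fineness / start_fineness) ** (1 / years)
print(f"Average decline in silver content: ~{1 - annual_factor:.1%} per year")
# -> roughly 1.5% per year, compounded over nearly four centuries
```

A decline of only about 1.5 percent a year sounds gentle, but compounded for close to four centuries it takes a nearly pure silver coin down to a token.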

Honest and transparent policies could have saved the Romans from centuries of economic hardships. The question future historians will answer when they look back on our period is, “What did the Americans learn from the Roman experience?”

(For more on lessons from ancient Rome, visit www.fee.org/rome).

Marc Hyden is a political activist and an amateur Roman historian. Lawrence W. Reed is President of the Foundation for Economic Education.

Marc Hyden

Marc Hyden is a conservative political activist and an amateur Roman historian.

Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

Capitalism Defused the Population Bomb by Chelsea German

Journalists know that alarmism attracts readers. An article in the British newspaper the Independent titled, “Have we reached ‘peak food’? Shortages loom as global production rates slow” claimed humanity will soon face mass starvation.

Just as Paul Ehrlich’s 1968 bestseller The Population Bomb  predicted that millions would die due to food shortages in the 1970s and 1980s, the article in 2015 tries to capture readers’ interest through unfounded fear. Let’s take a look at the actual state of global food production.

The alarmists cite statistics showing that while we continue to produce more and more food every year, the rate of growth is slowing down slightly. The article then presumes that if food production growth slows, widespread starvation is inevitable.

This is misleading. Let us take a look at the global trend in net food production per person, measured in 2004-2006 international dollars. Even taking population growth into account, food production per person is actually increasing.
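To see why a slowing growth rate is not the same thing as looming scarcity, here is a toy calculation with invented numbers (they are illustrative only, not figures from the Independent or from the sources cited here): as long as food output grows faster than population, output per person keeps rising even while output growth decelerates.

```python
# Toy numbers only: food output growth slows from 2.5% to 1.5% a year while
# population grows at a steady 1.1% a year. Because output growth never falls
# below population growth, food per person rises the whole time.

food = 100.0          # index of total food production
population = 100.0    # population index
food_growth = 0.025   # initial annual growth of food output
POP_GROWTH = 0.011    # constant annual population growth

for year in range(31):
    if year % 5 == 0:
        print(f"year {year:2d}: output growth {food_growth:.2%}, "
              f"food per person {food / population:.3f}")
    food *= 1 + food_growth
    population *= 1 + POP_GROWTH
    food_growth = max(0.015, food_growth - 0.0005)  # the growth rate decelerates
```

The deceleration the alarmists point to would only matter if output growth eventually fell below population growth, which is a very different claim from the one the headlines make.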

Food is becoming cheaper, too. As K.O. Fuglie and S. L. Wang showed in their 2012 article “New Evidence Points to Robust but Uneven Productivity Growth in Global Agriculture,” food prices have been declining for over a century, in spite of a recent uptick.

In fact, people are better nourished today than they ever have been, even in poor countries. Consider how caloric consumption in India has increased despite population growth.

Given that food is more plentiful than ever, what perpetuates the mistaken idea that mass hunger is looming? The failure to realize that human innovation, through advancing technology and the free market, will continue to rise to meet the challenges of growing food demand.

In the words of HumanProgress.org Advisory Board member Matt Ridley, “If 6.7 billion people continue to keep specializing and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us.”

This piece first appeared at Cato.org.

Is the “Austrian School” a Lie?

Is Austrian economics an American invention? by Steven Horwitz and B.K. Marcus.

Do those of us who use the word Austrian in its modern libertarian context misrepresent an intellectual tradition?

We trace our roots back through the 20th century’s F.A. Hayek and Ludwig von Mises (both served as advisors to FEE) to Carl Menger in late 19th-century Vienna, and even further back to such “proto-Austrians” as Frédéric Bastiat and Jean-Baptiste Say in the earlier 19th century and Richard Cantillon in the 18th. Sometimes we trace our heritage all the way back to the late-Scholastic School of Salamanca.

Nonsense, says Janek Wasserman in his article “Austrian Economics: Made in the USA”:

“Austrian Economics, as it is commonly understood today,” Wasserman claims, “was born seventy years ago this month.”

As his title implies, Wasserman is not talking about the publication of Principles of Economics by Carl Menger, the founder of the Austrian school. That occurred 144 years ago in Vienna. What happened 70 years ago in the United States was the publication of F.A. Hayek’s The Road to Serfdom.

What about everything that took place — most of it in Austria — in the 74 years before Hayek’s most famous book? According to Wasserman, the Austrian period of “Austrian Economics” produced a “robust intellectual heritage,” but the largely American period that followed was merely a “dogmatic political program,” one that “does a disservice to the eclectic intellectual history” of the true Austrian school.

Where modern Austrianism is “associated with laissez-faire economics and libertarianism,” the real representatives of the more politically diverse tradition — economists from the University of Vienna, such as Fritz Machlup, Joseph Schumpeter, and Oskar Morgenstern — were embarrassed by their association with Hayek’s bestseller and its capitalistic supporters.

These “native-born Austrians ceased to be ‘Austrian,'” writes Wasserman, “when Mises and a simplified Hayek captured the imagination of a small group of businessmen and radicals in the US.”

Wasserman describes the popular reception of The Road to Serfdom as “the birth of a movement — and the reduction of a tradition.”

Are we guilty of Wasserman’s charges? Do modern Austrians misunderstand our own tradition, or worse yet, misrepresent our history?

In fact, Wasserman himself is guilty of a profound misunderstanding of the Austrian label, as well as the tradition it refers to.

The “Austrian school” is not a name our school of thought took for itself. Rather it was an insult hurled against Carl Menger and his followers by the adherents of the dominant German Historical School.

The Methodenstreit was a more-than-decade-long debate in the late 19th century among German-speaking social scientists about the status of economic laws. The Germans advocated methodological collectivism, espoused the efficacy of government intervention to improve the economy, and, according to Jörg Guido Hülsmann, “rejected economic ‘theory’ altogether.”

The Mengerians, in contrast, argued for methodological individualism and the scientific validity of economic law. The collectivist Germans labeled their opponents the “Austrian school” as a put-down. It was like calling Menger and company the “backwater school” of economic thought.

“Austrian,” in our context, is a reclaimed word.

But more important, modern Austrian economics is not the dogmatic ideology that Wasserman describes. In his blog post, he provides no actual information about the work being done by the dozens of active Austrian economists in academia, with tenured positions at colleges and universities whose names are recognizable.

He tells his readers nothing about the books they have produced that have been published by top university presses. He does not mention that they have published in top peer-reviewed journals in the economics discipline, as well as in philosophy and political science, or that the Society for the Development of Austrian Economics consistently packs meeting rooms at the Southern Economic Association meetings.

Have all of these university presses, top journals, and long-standing professional societies, not to mention tenure committees at dozens of universities, simply lost their collective minds and allowed themselves to be snookered by an ideological sleeper cell?

Or perhaps in his zeal to score ideological points of his own, Wasserman chose to take his understanding of Austrian economics from those who consume it on the Internet and elsewhere rather than doing the hard work of finding out what professional economists associated with the school are producing. Full of confirmation bias, he found what he “knew” was out there, and he ends up offering a caricature of the robust intellectual movement that is the contemporary version of the school.

The modern Austrian school, which has now returned to the Continent and spread across the globe after decades in America, is not the dogmatic monolith Wasserman contends. The school is alive with both internal debates about its methodology and theoretical propositions and debates about its relationship to the rest of the economics discipline, not to mention the size of the state.

Modern Austrian economists are constantly finding new ideas to mix in with the work of Menger, Böhm-Bawerk, Mises, and Hayek. The most interesting work done by Austrians right now is bringing in insights from Nobelists like James Buchanan, Elinor Ostrom, and Vernon Smith, and letting those marinate with their long-standing intellectual tradition. That is hardly the behavior of a “dogmatic political program,” but is rather a sign of precisely the robust intellectual tradition that has been at the core of Austrian economics from Menger onward.

That said, Wasserman is right to suggest that economic science is not the same thing as political philosophy — and it’s true that many self-described Austrians aren’t always careful to communicate the distinction. Again, Wasserman could have seen this point made by more thoughtful Austrians if he had gone to a basic academic source like the Concise Encyclopedia of Economics and read the entry on the Austrian school of economics.

Even a little bit of actual research motivated by actual curiosity about what contemporary professional economists working in the Austrian tradition are doing would have given Wasserman a very different picture of modern Austrian economics. That more accurate picture is one very much consistent with our Viennese predecessors.

To suggest that we do a disservice to our tradition — or worse, that we have appropriated a history that doesn’t belong to us — is to malign not just modern Austrians but also the Austrian-born antecedents within our tradition.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

B.K. Marcus

B.K. Marcus is managing editor of the Freeman.