The Ghosts of Spying Past by Gary McGath

In the 1990s, the Clinton administration fought furiously against privacy and security in communication, and we’re still hurting from it today. Yet people in powerful positions are trying to repeat the same mistakes all over again.

In the early days, the Internet was thoroughly insecure; its governmental and academic users trusted each other, and the occasional student prank couldn’t cause much damage. As it started becoming available to everyone in the early ‘90s, people saw the huge opportunities it offered for commerce.

But doing business safely requires data security: If unauthorized parties can grab credit card numbers or issue fake orders, nobody is safe. However, the Clinton administration considered communication security a threat to national security.

Attorney General Janet Reno said, “Without encryption safeguards, all Americans will be endangered.” She didn’t mean that we needed the safeguard of encryption, but that we had to be protected from encryption.

In a 1996 executive order, President Clinton stated:

I have determined that the export of encryption products described in this section could harm national security and foreign policy interests even where comparable products are or appear to be available from sources outside the United States, and that facts and questions concerning the foreign availability of such encryption products cannot be made subject to public disclosure or judicial review without revealing or implicating classified information that could harm United States national security and foreign policy interests.

The government prohibited the export of strongly secure encryption technology by calling it a “munition.” Putting code on the Internet makes it available around the world, so the restriction crippled secure communication. The Department of Justice investigated Phil Zimmermann for three years for making a free email encryption program, PGP, available.

The administration also tried to mandate government access to all strong encryption keys. In 1993 it proposed making the Clipper Chip, with a built-in “back door” for government spying, the standard for serious encryption. Any message it sent included a 128-bit field that would let government agencies (and hopefully no one else) decrypt it.

But the algorithm for the Clipper was classified, making independent assessments impossible. However strong it was, it would have offered a single point to attack, with the opportunity to intercept virtually unlimited amounts of data as an incentive to find weaknesses. Security experts pointed out the risks inherent in the key recovery process.

By the end of the ‘90s, the government had apparently yielded to public pressure and common sense and lifted the worst of the restrictions. It didn’t give up, though — it just got sneakier.

Documents revealed by Edward Snowden show that the NSA embarked on a program to install back doors through secret collaboration with businesses. It sought, in its own words, to “insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices” and “shape the worldwide cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS.”

The NSA isn’t just a spy agency; it’s one of the leading centers of expertise in encryption, perhaps the best in the world. Businesses and other organizations trying to maximize their data security trust its technical recommendations — or at least they used to. If it can’t get the willing collaboration of tech companies, it can deceive them with broken standards.

Old software with government-required weaknesses from the nineties is still around, along with newer software that may have NSA-inspired weaknesses. There are still restrictions on the exporting of cryptography in many cases, depending on a complicated set of criteria related to the software’s purpose. Even harmless file identification software, used mostly by librarians, may have to carry a warning that it contains decryption code and might be subject to use restrictions.

With today’s vastly more powerful computers, encryption that was strong two decades ago can be broken easily. Some websites, especially ones outside the United States that were denied access to strong encryption, still use the weak methods they were stuck with then, and so do some old browsers.
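The arithmetic behind that claim is simple: an n-bit key has 2^n possible values, so the 40-bit export-grade keys of the ’90s, once a serious obstacle, are now a trivial search. A toy sketch in Python makes the point; the cipher and key here are illustrative stand-ins, not any real product’s:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: a stand-in for any cipher with a tiny key space."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, crib: bytes):
    """Try all 65,536 two-byte keys; return the one whose plaintext starts with the crib."""
    for pair in itertools.product(range(256), repeat=2):
        key = bytes(pair)
        if xor_cipher(ciphertext, key).startswith(crib):
            return key
    return None

secret_key = b"\x5a\xc3"  # a 16-bit key: exhaustible in a blink
ciphertext = xor_cipher(b"ORDER: card 4111 1111 1111 1111", secret_key)

# Knowing only a predictable prefix ("crib"), an attacker recovers the key.
recovered = brute_force(ciphertext, crib=b"ORDER:")
print(recovered == secret_key)  # prints True
```

A 16-bit search finishes in milliseconds; a 40-bit one is within reach of any modern machine, which is why key lengths mandated in the ’90s offer no protection now.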

To deal with this, many browsers support the old protocols when a site offers nothing stronger, and many sites fall back to the weak protocols if a browser is limited to them. Code breakers have found ways to make browsers think only weak security is available and force even the stronger sites to fall back on it. Some sites have disabled weak encryption, only to be forced to restore it because so many users have old browsers.
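Modern TLS stacks counter these downgrade tricks by letting a client pin a floor under the negotiable protocol version, so a tampered handshake fails outright instead of falling back. A minimal sketch using Python’s standard `ssl` module:

```python
import ssl

# Build a client context that refuses to negotiate anything older than
# TLS 1.2, so an attacker in the middle cannot talk the connection down
# to a weak legacy protocol.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with this context against a server that offers only legacy protocols raises an `SSLError` rather than silently completing a weak handshake.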

You’d think that by now people would understand that secure transactions are essential, but politicians in the US and other countries still want to weaken encryption so they can spy on people’s communications.

The FBI’s assistant director of counter-terrorism claims that strong encryption gives terrorists “a free zone by which to radicalize, plot, and plan.” NSA Director Michael S. Rogers has said, “I don’t want a back door. I want a front door.” UK Prime Minister David Cameron says:

In extremis, it has been possible to read someone’s letter, to listen to someone’s call, to listen in on mobile communications. The question remains: are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not.

In 2015 over eighty civil society organizations, companies, and trade associations, including Apple, Microsoft, Google, and Adobe, sent a public letter to President Obama expressing concern about such actions. The letter states:

Strong encryption is the cornerstone of the modern information economy’s security. Encryption protects billions of people every day against countless threats — be they street criminals trying to steal our phones and laptops, computer criminals trying to defraud us, corporate spies trying to obtain our companies’ most valuable trade secrets, repressive governments trying to stifle dissent, or foreign intelligence agencies trying to compromise our and our allies’ most sensitive national security secrets.

In the United States, we have a tradition of free speech, but in many countries, even mild criticism of the authorities needs to travel in secret.

A country can pass laws to weaken its law-abiding citizens’ access to cryptography, but criminals and terrorists exchanging secret messages would have no reason to pay attention to them. They can keep using the strong encryption methods that are currently available and get new software from countries that don’t have those restrictions.

Governments would gain increased ability to spy on people who follow the law, and so would freelance data thieves, while competent criminals would still be able to communicate in secret. To crib David Cameron, we must not let that happen — again.

Gary McGath

Gary McGath is a freelance software engineer living in Nashua, New Hampshire.



The New Paganism? The Case against Pope Francis’s Green Encyclical by Max Borders

Paganism as a distinct and separate religion may perhaps be said to have died, although, driven out of the cities, it found refuge in the countryside, where it lingered long — and whence, indeed, its very name is derived. In a very real sense, however, it never died at all. It was only transformed and absorbed into Christianity. – James Westfall Thompson, An Introduction to Medieval Europe

In 2003, science-fiction writer Michael Crichton warned a San Francisco audience about the sacralization of the environment. Drawing an analogy between religion and environmentalism, Crichton said:

There’s an initial Eden, a paradise, a state of grace and unity with nature, there’s a fall from grace into a state of pollution as a result of eating from the tree of knowledge, and as a result of our actions there is a judgment day coming for us all.

We are all energy sinners, doomed to die, unless we seek salvation, which is now called sustainability. Sustainability is salvation in the church of the environment. Just as organic food is its communion, that pesticide-free wafer that the right people with the right beliefs, imbibe.

This analogy between religion and environmentalism is no longer a mere analogy.

Pope Francis, the highest authority in the Catholic Church — to whom many faithful look for spiritual guidance — has now fused church doctrine with environmental doctrine.

Let’s consider pieces of his recently released Encyclical Letter. One is reminded of a history in which the ideas of paganism (including the worship of nature) were incorporated into the growing medieval Church.

Excerpts from Pope Francis are shown in italics.


This sister protests the evil that we provoke, because of the irresponsible use and of the abuse of the goods that God has placed in her. We grew up thinking that we were its owners and rulers, allowed to plunder it.

Notice how Pope Francis turns the earth into a person. Sister. Mother. This kind of anthropomorphic trope is designed to make you think that, by virtue of driving your car, you’re also smacking your sibling. We’ve gone from “dominion over the animals and crawling things” to “plundering” our sister.

The violence that exists in the human heart wounded by sin is also manifested in the symptoms of the disease we feel in soil, water, air and in the living things. Therefore, among the most abandoned and ill treated poor we find our oppressed and devastated Earth, which “moans and suffers the pains of childbirth” [Romans 8:22].

First, if the state of the soil, water and air and living things is indeed symptomatic of our violent, sinful hearts, then the good news is that sin is on the decline. On every dimension the Pope names, the symptoms of environmental harm are getting better all the time — at least in our decadent capitalist country.

Do not take it on faith: here are data.

There are forms of pollution which affect people every day. The exposure to air pollutants produces a large spectrum of health effects, in particular on the most poor, and causes millions of premature deaths.

This will always be true to some degree, of course, but it’s less true than any time in human history. Pope Francis fails to acknowledge the tremendous gains humanity has made. For example, human life expectancy in the Paleolithic period (call this “Eden”) was 33 years. Life expectancy in the neolithic period was 20 years. Globally, life expectancy is now more than 68 years, and in the West, it is passing 79 years.

Yes, there is pollution, and, yes, the poor are affected by it. But the reason why the poor are affected most by air pollution is because they’re poor — and because they don’t have access to fossil fuel energy. Pope Francis never bothers to draw the connection between wealth and health because he thinks of both production and consumption as sinful. Brad Plumer writes at Vox,

About 3 billion people around the world — mostly in Africa and Asia, and mostly very poor — still cook and heat their homes by burning coal, charcoal, dung, wood, or plant residue in their homes. These homes often have poor ventilation, and the smoke can cause all sorts of respiratory diseases.

The wealthy people of the West, including Pope Francis, don’t suffer from this problem. That’s because liberal capitalist countries — i.e., those countries who “plunder” their sister earth — do not suffer from energy poverty. They do not suffer from inhaling fumes and particulate matter from burning dung because they are “sinful,” because they are capitalist.

See the problem? The Pope wants to have it both ways. He has confused the disease (unhealthy indoor air pollution) with the cure (cheap, clean, abundant and mass-produced energy from fossil fuels).

Add to that the pollution that affects all, caused by transportation, by industrial fumes, by the discharge of substances which contribute to the acidification of soil and water, by fertilizers, insecticides, fungicides, herbicides and toxic pesticides in general. The technology, which, connected to finance, claims to be the only solution to these problems, in fact is not capable of seeing the mystery of the multiple relationships which exist between things, and because of this, sometimes solves a problem by creating another.

It is strange to read admonitions from someone about the “multiple relationships that exist between things,” only to see him ignore those relationships in the same paragraph. Yes, humans often create problems by solving others, but that doesn’t mean we shouldn’t solve the problems. It just means we should solve the big problems and then work on the smaller ones.

Solving problems even as we discover different problems is an inherent part of the human condition. Our creativity and innovation and struggle to overcome the hand nature has dealt us is what makes us unique as a species.

Perhaps this is, for Pope Francis, some sort of Green Original Sin: “Thou shalt just deal with it.” But to the rest of us, it is the means by which we live happier, more comfortable lives here under the firmament.

The Earth, our home, seems to turn more and more into a huge garbage dump. In many places on the planet, the elderly remember with nostalgia the landscapes of the past, which now appear to be submerged in junk.

If you get your understanding of waste management and the environment from the movie Wall-E, then you might have the impression that we’re burying our sister in garbage. But as the guys over at EconPop have pointed out, land used for waste management is also governed by laws of supply and demand — which means entrepreneurs and innovators are finding better and less expensive ways to reuse, reduce, recycle, and manage our waste.

The industrial waste as well as the chemicals used in cities and fields can produce an effect of bio-accumulation in the bodies of the inhabitants of neighboring areas, which occurs even when the amount of a toxic element in a given place is low. Many times one takes action only when these have produced irreversible effects on people’s health.

People, on net, are living longer and healthier than they ever have in the history of our species. What evidence does the Holy Father have that irreversible effects on people’s health rise to the level of an emergency that demands treatment in a papal encyclical? And why focus on the costs of “chemicals” without a single mention of their overwhelming human benefit? Indeed, which chemicals? This kind of sloppy thinking is rather unbecoming of someone who is (we are constantly reminded) a trained chemist.

Certain substances can have health effects, but so can failing to produce the life-enhancing goods in the first place. The answer is not to beg forgiveness for using soaps and plastics (or whatever), but to develop the institutions that prevent people and companies from imposing harmful costs onto others without taking responsibility for it.

The key is to consider the trade-offs that we will face no matter what, not to condemn and banish “impure” and unnatural substances from our lives.

These issues are intimately linked to the culture of waste, affecting so much the human beings left behind when the things turn quickly into trash.

Now we’re getting somewhere. This is where Pope Francis would like to add consumerism to production on the list of environmentally deadly sins.

Let us realize, for example, that most of the paper that is produced is thrown away and not recycled.

Heaven forfend! So would Pope Francis have us burn fossil fuels to go around and collect processed pulp? Is he unaware that demand for paper is what drives the supply of new trees? We aren’t running out of trees because we throw away paper. The Pope’s plan sounds like it could have been hatched in Berkeley, California, instead of Vatican City. And yet worlds have collided.

Michael Munger puts matters a little differently:

Mandatory recycling, by definition, takes material that would not be recycled voluntarily, diverts it from the waste stream, and handles it several times before using it again in a way that wastes resources.

The only explanation for this behavior that I can think of is a religious ceremony, a sacrifice of resources as a form of worship. I have no problem if people want to do that. As religions go, it is fairly benign. But requiring that religious sacrifice of resources is a violation of the constitutional separation of church and state.

Well, Professor Munger, this is the Pope we’re talking about.

We find it hard to admit that the operation of natural ecosystems is exemplary: plants synthesize nutrients that feed the herbivores; these in turn feed the carnivores, which provide a lot of organic waste, which give rise to a new generation of plants. In contrast, the industrial system, at the end of its cycle of production and consumption, has not developed the ability to absorb and reuse waste and slag.

Where is the evidence for this? These are matters of faith, indeed. All this time I thought the industrial system did have the ability to absorb and reuse waste: It’s called the system of prices, property, and profit/loss. The problem is not that such a “recycling” system doesn’t exist, it’s that corruption and government distort the system of property, prices, and profit/loss so that our economic ecosystem doesn’t operate as it should.

Indeed, when you have the Pope suggesting we burn gas to save glass, you have to wonder why the industrial system is so messed up. A system that “requires us to limit the use of non-renewable resources, to moderate consumption, to maximize the efficiency of the exploitation, to reuse and to recycle,” is called the market. And where it doesn’t exist is where you’ll find the worst instances of corruption and environmental degradation.

Then, of course, there’s climate change. In the interests of brevity I won’t quote the whole thing. But here’s the punchline, which might have been plucked straight from the IPCC Summary for Policymakers:

Climate change is a global problem with serious environmental, social, economic, distribution and policy implications, and makes up one of the main current challenges for humanity. The heaviest impacts will probably fall in the coming decades upon developing countries.

This might be true. What the Holy Father fails to appreciate is that the heaviest impacts of policies designed to mitigate climate change will definitely fall upon developing countries. (That is, if the developing countries swear off cheap energy and embrace any sort of global climate treaty. If history is a guide, they most certainly will not.)

Meanwhile, the biggest benefits of burning more carbon-based fossil fuels will accrue to the poorest billions on earth. The Pope should mention that if he really has their interests at heart or in mind.

But many symptoms indicate that these effects could get worse if we continue the current patterns of production and consumption.

“Patterns of production and consumption”? This is a euphemism for wealth creation. What is wealth except production and consumption of resources to further human need and desire?

His suggested cure for our dangerous patterns of wealth creation, of course, is good ole demand-side management. Wiser, more enlightened minds (like his, he hopes) will let you know which light bulbs to buy, what sort of car to drive, and which insolvent solar company they’ll “invest” your money in. You can even buy papal indulgences in the form of carbon credits. As the late Alexander Cockburn wrote,

The modern trade is as fantastical as the medieval one. … Devoid of any sustaining scientific basis, carbon trafficking is powered by guilt, credulity, cynicism and greed, just like the old indulgences, though at least the latter produced beautiful monuments.

But the most important thing to realize here is that the “current” patterns of production and consumption are never current. The earthquakes of innovation and gales of creative destruction blow through any such observed patterns. The price system, with its lightning-quick information distribution mechanism, is far, far superior to any elites or energy cronies. And technological innovation, though we can’t predict just how, will likely someday take us as far from today’s energy status quo as we have already moved from tallow, whale oil, and horse-drawn carriages.

The Pope disagrees with our rose-tinted techno-optimism, saying “some maintain at all costs the myth of progress and say that the ecological problems will be solved simply by new technical applications.”

The Pope sits on his golden throne and looks over the vast expanse of time and space — from hunter-gatherers running mammoths off cliffs to Americans running Teslas off electric power, from the USA in 1776 and 2015, from England before and after the Industrial Revolution, from Hong Kong and Hiroshima in 1945 to their glorious present — and sneers: progress is a myth, environmental problems can’t be fixed through innovation, production is destroying the earth, consumption is original sin.

Innovation is the wellspring of all progress. Policies to stop or undo innovation in energy, chemistry, industry, farming, and genetics are a way to put humanity in a bell jar, at best. At worst they will put some of us in the dark and others in early graves. They are truly fatal conceits.

And yet, the Pope has faith in policymakers to know just which year we should have gotten off the train of innovation. William F. Buckley famously said conservatives “stand athwart history, yelling ‘Stop!’” Greens are similar, except they’re yelling “Go back!”

Therefore it has become urgent and compelling to develop policies so that in the coming years the emission of carbon dioxide and other highly polluting gases is reduced drastically, for instance by replacing fossil fuels and by developing renewable energy sources.

I reflect again on the notion that this effort might be just another way of the Church embracing and extending a competitor religion. Then again, Pope Francis so often shows that he is a true and faithful green planner. In an unholy alliance with those who see the strategic benefit in absorbing environmentalism, the Holy Father has found the perfect way to restore the power of the Church over politics, economics, culture, and the state to its former glory.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Daniel Bier

Daniel Bier is the editor of Anything Peaceful. He writes on issues relating to science, civil liberties, and economic freedom.

California Government Puts Uber on Blocks by Jeffrey A. Tucker

The California Labor Commission, with its expansive power to categorize and codify what it is that workers do, has dealt a terrible blow to Uber, the disruptive ride-sharing service. In one administrative edict, it has managed to do what hundreds of local governments haven’t.

Every rapacious municipal taxi monopoly in the state has to be celebrating today. It also provides a model for how these companies will be treated at the federal level. This could be a crushing blow. It’s not only the fate of Uber that is at stake. The entire peer-to-peer economy could be damaged by these administrative edicts.

The change in how the income of Uber drivers is treated by the law seems innocuous. Instead of being regarded as “independent contractors,” they are now to be regarded as “employees.”

Why does it matter? You find out only far down in the New York Times story on the issue. This “could change Uber’s cost structure, requiring it to offer health insurance and other benefits, as well as paying salaries.”

That’s just the start of it. Suddenly, Uber drivers will be subject to a huge range of federal tax laws that involve withholding, maximum working hours, and the entire labor code at all levels as it affects the market for employees. Oh, and Obamacare.

This is a devastating turn for the company and those who drive for it.

Just ask the drivers:

Indeed, there seems to be no justification for calling Uber drivers employees. I can recall being picked up at an airport once. Uber was not allowed to serve that airport. I asked the man if he worked for Uber. He said he used to but not anymore.

“When did you quit?”

“Just now,” he said. Wink, wink. He was driving for himself on my trip.

“When do you think you will work for Uber again?”

“After I drop you off.”

That’s exactly the kind of independence that Uber drivers value. They don’t have to answer any particular call that comes in. They set their own hours. They drive their own cars. When an airport bans Uber, they simply redefine themselves.

They can do this because they are their own boss; Uber only cuts them off if they don’t answer a call on their mobile apps for 180 days. But it is precisely that rule that led the commission to call them “employees.”

That’s a pretty thin basis on which to call someone an employee. And it’s also solid proof that the point of this decision is not to clarify some labor designation but rather to shore up the old monopolies that want to continue to rip off consumers with high prices and poor service. No surprise, government here is using its power to serve the ruling class and established interests.

This is exactly the problem with government regulations that purport to define and codify every job. Such regulations tend to restrict the types and speed of innovation that can occur in enterprises.

The app economy and peer-to-peer networks are huge growth areas precisely because they have so far managed to evade being codified, controlled, and shoehorned into the old stultifying rules.

If everyone earning a piecemeal stream of income is called an employee — and regulated by relevant tax, workplace, and labor laws — many of these companies immediately become unviable.

There will be no more on-demand hair stylists, plumbers, tennis coaches, and piano teachers. The fate of a vast number of companies is at stake. The future is at stake.

For now, Uber is saying that this decision pertains to this one employee only. I hope that this claim is sustainable. If it is not, the regulators will use this decision to inflict a terrible blow on the brightest and fastest growing sector of American economic life.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

Against Eco-pessimism: Half a Century of False Bad News by Matt Ridley

Pope Francis’s new encyclical on the environment (Laudato Si’) warns of the coming environmental catastrophe (“unprecedented destruction of ecosystems, with serious consequences for all of us”). It’s the latest entry in a long literary tradition of environmental doomsday warnings.

In contrast, Matt Ridley, bestselling author of Genome, The Agile Gene, and The Rational Optimist, who also received the 2012 Julian Simon Memorial Award from the Competitive Enterprise Institute, says this outlook has proven wrong time and again. This is the full text of his acceptance speech.

It is now 32 years, nearly a third of a century, since Julian Simon nailed his theses to the door of the eco-pessimist church by publishing his famous article in Science magazine: “Resources, Population, Environment: An Oversupply of False Bad News.”

It is also 40 years since The Limits to Growth and 50 years since Silent Spring, plenty long enough to reflect on whether the world has conformed to Malthusian pessimism or Simonian optimism.

Before I go on, I want to remind you just how viciously Simon was attacked for saying that he thought the bad news was being exaggerated and the good news downplayed.

Verbally at least Simon’s treatment was every bit as rough as Martin Luther’s. Simon was called an imbecile, a moron, silly, ignorant, a flat-earther, a member of the far right, a Marxist.

“Could the editors have found someone to review Simon’s manuscript who had to take off his shoes to count to 20?” said Paul Ehrlich.

Ehrlich, together with John Holdren, then launched a blistering critique, accusing Simon of lying about electricity prices having fallen. It turned out they were basing their criticism on a typo in a table, as Simon discovered by calling the table’s author. To which Ehrlich replied: “what scientist would phone the author of a standard source to make sure there were no typos in a series of numbers?”

Answer: one who likes to get his facts right.

Yet for all the invective, his critics have never laid a glove on Julian Simon then or later. I cannot think of a single significant fact, data point or even prediction where he was eventually proved badly wrong. There may be a few trivia that went wrong, but the big things are all right. Read that 1980 article again today and you will see what I mean.

I want to draw a few lessons from Julian Simon’s battle with the Malthusian minotaur, and from my own foolhardy decision to follow in his footsteps – and those of Bjorn Lomborg, Ron Bailey, Indur Goklany, Ian Murray, Myron Ebell and others – into the labyrinth a couple of decades later.

Consider the words of the publisher’s summary of The Limits to Growth: “Will this be the world that your grandchildren will thank you for? A world where industrial production has sunk to zero. Where population has suffered a catastrophic decline. Where the air, sea, and land are polluted beyond redemption. Where civilization is a distant memory. This is the world that the computer forecasts.”

Again and again Simon was right and his critics were wrong.

Would it not be nice if just one of those people who called him names piped up and admitted it? We optimists have won every intellectual argument and yet we have made no difference at all. My daughter’s textbooks trot out the same old Malthusian dirge as mine did.

What makes it so hard to get the message across?

I think it boils down to five adjectives: ahistorical, finite, static, vested and complacent. The eco-pessimist view ignores history, misunderstands finiteness, thinks statically, has a vested interest in doom and is complacent about innovation.

People have very short memories. They are not just ignoring, but unaware of, the poor track record of eco-pessimists. For me, the fact that each of the scares I mentioned above was taken very seriously at the time, attracting the solemn endorsement of the great and the good, should prompt real skepticism about global warming claims today.

That’s what motivated me to start asking to see the actual evidence about climate change. When I did so I could not find one piece of data – as opposed to a model – that shows either unprecedented change or change that is anywhere close to causing real harm.

Yet when I made this point to a climate scientist recently, he promptly and cheerily said that “the fact that people have been wrong before does not make them wrong this time,” as if this somehow settled the matter for good.

Second, it is enormously hard for people to grasp Simon’s argument that “Incredible as it may seem at first, the term ‘finite’ is not only inappropriate but downright misleading in the context of natural resources.”

He went on: “Because we find new lodes, invent better production methods and discover new substitutes, the ultimate constraint upon our capacity to enjoy unlimited raw materials at acceptable prices is knowledge.” This is a profoundly counterintuitive point.

Yet was there ever a better demonstration of this truth than the shale gas revolution? Shale gas was always there; but what made it a resource, as opposed to not a resource, was knowledge – the practical know-how developed by George Mitchell in Texas. This has transformed the energy picture of the world.

Besides, as I have noted elsewhere, it’s the renewable – infinite – resources that have a habit of running out: whales, white pine forests, buffalo. It’s a startling fact, but no non-renewable resource has yet come close to exhaustion, whereas lots of renewable ones have.

And by the way, have you noticed something about fossil fuels? We are the only creatures that use them. What this means is that when you use oil, coal or gas, you are not competing with other species. When you use timber, or crops, or tide, or hydro, or even wind, you are.

There is absolutely no doubt that the world’s policy of encouraging the use of bio-energy, whether in the form of timber or ethanol, is bad for wildlife – it competes with wildlife for land, or wood or food.

Imagine a world in which we relied on crops and wood for all our energy and then along comes somebody and says here’s this stuff underground that we can use instead, so we don’t have to steal the biosphere’s lunch.

Imagine no more. That’s precisely what did happen in the industrial revolution.

Third, the Malthusian view is fundamentally static. Julian Simon’s view is fundamentally dynamic. Again and again when I argue with greens I find that they simply do not grasp the reflexive nature of the world, the way in which prices cause the substitution of resources or the dynamic properties of ecosystems – the word equilibrium has no place in ecology.

Take malaria. The eco-pessimists insisted until recently that malaria must get worse in a warming 21st century world. But, as Paul Reiter kept telling them to no avail, this is nonsense. Malaria disappeared from North America, Russia and Europe and retreated dramatically in South America, Asia and Africa in the twentieth century even as the world warmed.

That’s not because the world got less congenial to mosquitoes. It’s because we moved indoors and drained the swamps and used DDT and malaria medications and so on. Human beings are a moving target. They adapt.

But, my fourth point, another reason Simon’s argument fell on stony ground is that so many people had and have a vested interest in doom. Though they hate to admit it, the environmental movement and the scientific community are vigorous, healthy, competitive, cut-throat, free markets in which corporate leviathans compete for donations, grants, subsidies and publicity. The best way of getting all of them is to sound the alarm. If it bleeds, it leads. Good news is no news.

Imagine how much money you would get if you put out an advert saying: “We now think climate change will be mild and slow; nonetheless, please donate.” The sums concerned are truly staggering. Greenpeace and WWF, the General Motors and Exxon of the green movement, between them raise and spend a billion dollars a year globally. WWF spends $68m alone on educational propaganda. Frankly, Julian, Bjorn, Ron, Indur, Ian, Myron and I are spitting in the wind.

Yet, fifth, ironically, a further problem is complacency. The eco-pessimists are the Panglossians these days, for it is they who think the world will be fine without developing new technologies. Let’s not adopt GM food – let’s stick with pesticides.

Was there ever a more complacent doctrine than the precautionary principle: don’t try anything new until you are sure it is safe? As if the world were perfect. It is we eco-optimists, ironically, who are acutely aware of how miserable this world still is and how much better we could make it – indeed how precariously dependent we are on still inventing ever more new technologies.

I had a good example of this recently while debating a climate alarmist. He insisted that the risk from increasing carbon dioxide was acute and that we therefore needed to cut our emissions drastically, by 90 percent or so. In vain did I try to point out that cutting emissions that drastically might do more harm to the poor and the rain forest than anything the emissions themselves might do. That we are taking chemotherapy for a cold, putting a tourniquet round our neck to stop a nosebleed.

My old employer, the Economist, is fond of a version of Pascal’s wager – namely that however small the risk of catastrophic climate change, the impact could be so huge that almost any cost is worth bearing to avert it. I have been trying to persuade them that the very same logic applies to emissions reduction.

However small the risk that emissions reduction will lead to planetary devastation, almost any price is worth paying to prevent that, including the tiny risk that carbon emissions will destabilize the climate. Just look at Haiti to understand that getting rid of fossil fuels is a huge environmental risk.

That’s what I mean by complacency: complacently assuming that we can decarbonize the economy without severe ecological harm, complacently assuming that we can shut down world trade without starving the poor, that we can grow organic crops for seven billion people without destroying the rain forest.

Having paid homage to Julian Simon’s ideas, let me end by disagreeing with him on one thing. At least I think I am disagreeing with him, but I may be wrong.

He made the argument, which was extraordinary and repulsive to me when I first heard it as a young and orthodox eco-pessimist, that the more people in the world, the more invention. That people were brains as well as mouths, solutions as well as problems. Or as somebody once put it: why is the birth of a baby a cause for concern, while the birth of a calf is a cause for hope?

Now there is a version of this argument that – for some peculiar reason – is very popular among academics, namely that the more people there are, the greater the chance that one of them will be a genius, a scientific or technological Messiah.

Occasionally, Julian Simon sounds like he is in this camp. And if he were here today – and by Zeus, I wish he were – I would try to persuade him that this is not the point, that what counts is not how many people there are but how well they are communicating. I would tell him about the new evidence from Paleolithic Tasmania, from Mesolithic Europe, from the Neolithic Pacific, and from the internet today, that it’s trade and exchange that breed innovation, through the meeting and mating of ideas.

That the lonely inspired genius is a myth, promulgated by Nobel prizes and the patent system. This means that stupid people are just as important as clever ones; that the collective intelligence that gives us incredible improvements in living standards depends on people’s ideas meeting and mating, more than on how many people there are. That’s why a little country like Athens or Genoa or Holland can suddenly lead the world. That’s why mobile telephony and the internet have no inventor, not even Al Gore.

Not surprisingly, academics don’t like this argument. They just can’t get their pointy heads around the idea that ordinary people drive innovation just by exchanging and specializing. I am sure Julian Simon got it, but I feel he was still flirting with the outlier theory instead.

The great human adventure has barely begun. The greenest thing we can do is innovate. The most sustainable thing we can do is change. The only limit is knowledge. Thank you Julian Simon for these insights.

2012 Julian L. Simon Memorial Award Dinner from CEI Video on Vimeo.

Anything Peaceful

Anything Peaceful is FEE’s new online ideas marketplace, hosting original and aggregated content from across the Web.

AMC’s “Halt and Catch Fire” Is Capitalism’s Finest Hour by Keith Farrell

AMC’s Halt and Catch Fire is a brilliant achievement. The show is a vibrant look at the emerging personal computer industry in the early 1980s. But more than that, the show is about capitalism, creative destruction, and innovation.

While we all know the PC industry changed the world, the visionaries and creators who brought us into the information age faced uncertainty over what their efforts would yield. They risked everything to build new machines and to create shaky start-ups. Often they failed and lost all they had.

HCF has four main characters: Joe, a visionary and salesman; Cameron, an eccentric programming geek; Gordon, a misunderstood engineering genius; and Gordon’s wife, Donna, a brilliant but unappreciated housewife and engineer.

The show pits programmers, hardware engineers, investors, big businesses, corporate lawyers, venture capitalists, and competing start-ups against each other and, at times, shows them having to cooperate to overcome mutual problems. The result is innovation.

Lee Pace gives an award-worthy performance as Joe MacMillan. The son of a never-present IBM tycoon and a negligent, drug-addicted mother, Joe struggles with a host of mental and emotional problems. He’s a man with a brilliant mind and an amazing vision — but he has no computer knowledge or capabilities.

The series begins with his leaving a sales job at IBM in the hope of hijacking Cardiff Electric, a small Texas-based computer company, and launching it into the personal computing game.

As part of his scheme, he gets a low-level job at Cardiff where he recruits Gordon Clark, played by the equally talented Scoot McNairy. Enamored with Gordon’s prior writings on the potential for widespread personal computer use, Joe pleads with Gordon to reverse engineer an IBM-PC with him. The plot delves into the ethical ambiguities of intellectual property law as the two spend days reverse engineering the IBM BIOS.

While the show is fiction, it is inspired in part by the real-life events of Rod Canion, co-founder of Compaq. His book, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing, serves as a basis for many of the events in the show’s first season.

In 1981, when Canion and his cohorts set out to make a portable PC, the market was dominated by IBM. Because IBM had rushed their IBM-PC to market, the system was made up entirely of off-the-shelf components and other companies’ software.

As a result, it was possible to buy those same components and software and build what was known as an IBM “clone.” But these clones were only mostly compatible with IBM. While they could run DOS, they would not necessarily run other programs written for IBM-PCs.

Because IBM dominated the market, all the best software was being written for IBMs. Canion wanted to build a computer that was 100 percent IBM compatible but cheaper — and portable enough to move from desk to desk.

Canion said in an interview on the Internet History Podcast, “We didn’t want to copy their computer! We wanted to have access to the software that was written for their computer by other people.”

But in order to do that, he and his team had to reverse-engineer the IBM BIOS. They couldn’t just steal or copy the code because it was proprietary technology, but they could figure out what function the code executed and then write their own code to handle the same task.

Canion explains:

What our lawyers told us was that not only can you not use [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. … We had two software people. One guy read the code and generated the functional specifications.

So it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.

In Halt and Catch Fire, Joe uses this idea to push Cardiff into making its own PC by intentionally leaking to IBM that he and Gordon had indeed reverse engineered the BIOS. They recruit a young punk-rock programmer named Cameron Howe to write their own BIOS.

While Gordon, Cameron, and Joe all believe that they are the central piece of the plan, the truth is that they all need each other. They also need to get the bosses and investors at Cardiff on their side in order to succeed, which is hard to do after infuriating them. The show demonstrates that for an enterprise to succeed you need to have cooperation between people of varying skill sets and knowledge bases — and between capital and labor.

The series is an exploration of the chaos and creative destruction that goes into the process of innovation. The beginning of the first episode explains the show’s title:

HALT AND CATCH FIRE (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained.

The show takes this theme of racing for superiority to several levels: the characters, the industry, and finally the economy and the world as a whole.

As Gordon himself declares of the cut-throat environment in which computer innovation occurs, “It’s capitalism at its finest!” HCF depicts Randian heroes: businessmen, entrepreneurs, and creators fighting against all odds in a race to change the world.

Now into its second season, the show is exploring the beginnings of the internet, and Cameron is running her own start-up company, Mutiny. I could go on about the outstanding production quality, but the real novelty here is a show where capitalists, entrepreneurs, and titans of industry are regarded as heroic.

Halt and Catch Fire is a brilliant show, but it isn’t wildly popular. I fear it may soon be canceled, so be sure to check it out while it’s still around.

Keith Farrell

Keith Farrell is a freelance writer and political commentator.

Nevada Passes Universal School Choice by Max Borders

People are becoming more conscious about animal welfare. The livestock, they say, shouldn’t be confined to factory farms — five by five — in such horrible conditions. These beings should be given more freedom to roam and to develop in a more natural way. They’re treated as mere chattel for the assembly line. It’s inhumane to keep them like this, they say — day after day, often in deplorable conditions.

Unfortunately, only a minority extends this kind of consciousness to human children. But that minority is growing, apparently.

Nevada is changing everything. According to the NRO,

Nevada governor Brian Sandoval [recently] signed into law the nation’s first universal school-choice program. That in and of itself is groundbreaking: The state has created an option open to every single public-school student.

Even better, this option improves upon the traditional voucher model, coming in the form of an education savings account (ESA) that parents control and can use to fully customize their children’s education.

Yes, school choice has often advanced through the introduction of vouchers and charter schools — which remain some of the most important reforms for breaking up the government education monopoly.

But vouchers were, to quote researcher Matthew Ladner, “the rotary telephones of our movement — an awesome technology that did one amazing thing.” States such as Nevada (and Arizona, Florida, Mississippi, and Tennessee) have implemented the iPhone of choice programs. They “still do that one thing well, but they also do a lot of other things,” Ladner notes.

So what’s the deal? What do parents and kids actually get out of this?

As of next year, parents in Nevada can have 90 percent (100 percent for children with special needs and children from low-income families) of the funds that would have been spent on their child in their public school deposited into a restricted-use spending account. That amounts to between $5,100 and $5,700 annually, according to the Friedman Foundation for Educational Choice.

Those funds are deposited quarterly onto a debit card, which parents can use to pay for a variety of education-related services and products — things such as private-school tuition, online learning, special-education services and therapies, books, tutors, and dual-enrollment college courses.

It’s an à la carte education, and the menu of options will be as hearty as the supply-side response — which, as is usual when markets replace monopolies, is likely to be robust.

This is big news. Not merely because it is the most ambitious school choice measure yet passed, but also because it represents a very real opportunity to demonstrate the power of competitive forces to unleash entrepreneurship and innovation in the service of children.

When we compare such a bold measure to the status quo, it’s pretty groundbreaking. So it’s probably not the time to quibble about the ideological purity of such a policy.

But we should seriously consider the concerns of those who advocate full privatization, as opposed to tax and voucher reform.

Here are three things to keep an eye on:

  1. Nevadans have to remain vigilant that this doesn’t become an entree for regulators and incumbent crony schools to jack up the prices and mute the very market forces that will liberate teachers and kids.
    In other words, you don’t want to see what happened to health care (and, to some extent, higher education) happen to private education, just as low-income students finally have a chance to escape government-run schools.
  2. Nevadans have to ensure that cost spirals don’t infect the system due to cross-subsidy. This is what happened to the university system.
  3. Nevadans have to capitalize on the wiggle room quickly, by fundamentally disrupting the education market in such a profound way that it wards off the specter of those who are waiting to seize it back from parents and children.
    This can have spillover effects into other states, too, due to innovation and copycat entrepreneurship. (It might also attract a lot of parents to the state.)

Such alterations to the status quo should be welcome news to those who understand that freedom is not some ideal sitting atop Mt. Utopia.

This is a weak joint and a leverage point to unleash creative, tech-propelled market forces in a space that has been dominated by politics and unions and stifling bureaucracy.

There will be battles ahead on this front. But Nevada’s change is certainly cause for cautious celebration.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debates” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.

B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

Microaggressions and Microwonders: Are mountains out of molehills proof the world’s getting better? by Steven Horwitz

A recurring theme of recent human history is that the less of something bad we see in the world around us, the more outrage we generate about the remaining bits.

For example, in the 19th century, outrage about child labor grew as the frequency of child labor was shrinking. Economic forces, not legislation, had raised adult wages to a level at which more and more families did not need additional income from children to survive, and children gradually withdrew from the labor force. As more families enjoyed having their children at home or in school longer, they became less tolerant of those families whose situations did not allow them that luxury, and the result was the various moral crusades, and then laws, against child labor.

We have seen the same process at work with cigarette smoking in the United States. As smoking has declined over the last generation or two, we have become ever less tolerant of those who continue to smoke. Today, that outrage continues in the form of new laws against vaping and e-cigarettes.

The ongoing debate over “rape culture” is another manifestation of this phenomenon. During the time that reasonably reliable statistics on rape in the United States have been collected, rape has never been less frequent than it is now, and it is certainly not as institutionalized as a practice in the Western world as it was in the past. Yet despite this decline — or in fact because of it — our outrage at the rape that remains has never been higher.

The talk of the problem of “microaggressions” seems to follow this same pattern. The term refers to the variety of verbal and nonverbal forms of communication that are said to constitute disrespect for particular groups, especially those who have been historically marginalized. So, for example, the use of exclusively masculine pronouns might be construed as a “microaggression” against women, or saying “ladies and gentlemen” might be seen as a microaggression against transsexuals. The way men take up more physical space on a train or bus, or the use of the phrase “walk-only zones” (which might offend the wheelchair-bound) to describe pedestrian crossways, are other examples.

Those who see themselves as the targets of microaggressions have often become very effective entrepreneurs of outrage in trying to parlay these perceived slights into indications of much more pervasive problems of sexism or racism and the like. Though each microaggression individually might not seem like much, they add up. So goes the argument.

I don’t want to totally dismiss the underlying point here, as it is certainly true that people say and do things (often unintentionally) that others will find demeaning, but I do want to note how this cultural phenomenon fits the pattern identified above. We live in a society in which the races and genders (and classes!) have never been more equal. Really profound racism and sexism is far less prominent today than it was 50 or 100 years ago. In a country where the president is a man of color and where one of our richest entertainers is a woman of color, it’s hard to argue that there hasn’t been significant progress.

But it is exactly that progress that leads to the outrage over microaggressions. Having steadily pushed back the more overt and damaging forms of inequality, and having stigmatized them as morally offensive, we have less tolerance for the smaller bits that remain. As a result, we take small behaviors that are often completely unintended as offenses and attempt to magnify them into the moral equivalent of past racism or sexism. Even the co-opting of the word “aggression” to describe what is, in almost all cases, behavior that is completely lacking in actual aggression is an attempt to magnify the moral significance of those behaviors.

Even if we admit that some such behaviors may well reflect various forms of animus, there are two problems with the focus on microaggressions.

First, where do we draw the line? Once these sorts of behaviors are seen as slights with the moral weight of racism or sexism, we can expect to see anyone and everyone who feels slighted about anything someone else said or did declare it a “microaggression” and thereby try to capture the same moral high ground.

We are seeing this already, especially on college campuses, where even the mere discussion of controversial ideas that might make some groups uncomfortable is being declared to be a microaggression. In some cases this situation is leading faculty to stop teaching anything beyond the bland.

Second, moral equivalence arguments can easily backfire. For example, when some feminists in the 1980s treated pornography as the moral equivalent of rape, hoping to make porn look worse, they risked the opposite effect: because most people regard porn as largely harmless, the comparison could lead them to take real physical rape less seriously.

So it goes with microaggressions: if we try to elevate men taking up too much room on a bus seat into a serious example of sexism, then we risk people reacting by saying, “Well, if that’s what sexism is, then why should I really worry too much about sexism?” The danger is that when far more troubling examples of sexism or racism appear (for example, the incarceration rates of African-American men), we might be inclined to treat them less seriously.

It is tempting to want to flip the script on the entrepreneurs of microaggression outrages and start to celebrate their outrages as evidence of how far we’ve come. If men who take the middle armrest on airplanes (as obnoxious as that might be) are a major example of gender inequality, we have come far indeed. But as real examples of sexism and racism and the like do still exist, I’d prefer another strategy to respond to the talk of microaggressions.

Let’s spend more time celebrating the “microwonders” of the modern world. Just as microaggression talk magnifies the small pockets of inequality left and seems to forget the larger story of social progress, so does our focus on large social and economic problems in general cause us to forget the larger story of progress that is often manifested in tiny ways.

We live in the future that prior generations only imagined. We have the libraries of the world in our pockets. We have ways of easily connecting with friends and strangers across the world. We can have goods and even services of higher quality and lower cost, often tailored to our particular desires, delivered to our door with a few clicks of a button. We have medical advances that make our lives better in all kinds of small ways. We have access to a variety of food year-round that no king in history had. The Internet brings us happiness every day through the ability to watch numerous moments of humor, human triumph, and joy.

Even as we recognize that the focus on microaggressions means we have not yet eliminated every last trace of inequality, we should also recognize that it means we’ve come very far. And we should not hesitate to celebrate the microwonders of progress that often get overlooked in our laudable desire to continue to repair an imperfect world.

Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

Capitalism Defused the Population Bomb by Chelsea German

Journalists know that alarmism attracts readers. An article in the British newspaper the Independent titled “Have we reached ‘peak food’? Shortages loom as global production rates slow” claimed that humanity would soon face mass starvation.

Just as Paul Ehrlich’s 1968 bestseller The Population Bomb predicted that millions would die due to food shortages in the 1970s and 1980s, the 2015 article tries to capture readers’ interest through unfounded fear. Let’s take a look at the actual state of global food production.

The alarmists cite statistics showing that while we continue to produce more and more food every year, the rate of growth is slowing slightly. The article then presumes that if growth in food production slows, widespread starvation is inevitable.

This is misleading. Let us take a look at the global trend in net food production, per person, measured in 2004-2006 international dollars. Here you can see that even taking population growth into account, food production per person is actually increasing:

Food is becoming cheaper, too. As K.O. Fuglie and S.L. Wang showed in their 2012 article “New Evidence Points to Robust but Uneven Productivity Growth in Global Agriculture,” food prices have been declining for over a century, in spite of a recent uptick:

In fact, people are better nourished today than they ever have been, even in poor countries. Consider how caloric consumption in India increased despite population growth:

Given that food is more plentiful than ever, what perpetuates the mistaken idea that mass hunger is looming? The failure to realize that human innovation, through advancing technology and the free market, will continue to rise to meet the challenges of growing food demand.

In the words of Advisory Board member Matt Ridley, “If 6.7 billion people continue to keep specializing and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us.”

This idea first appeared at

Decentralization: Why Dumb Networks Are Better

The smart choice is innovation at the edge by Andreas Antonopoulos

“Every device employed to bolster individual freedom must have as its chief purpose the impairment of the absoluteness of power.” — Eric Hoffer

In computer and communications networks, decentralization leads to faster innovation, greater openness, and lower cost. Decentralization creates the conditions for competition and diversity in the services the network provides.

But how can you tell if a network is decentralized, and what makes it more likely to be decentralized? Network “intelligence” is the characteristic that differentiates centralized from decentralized networks — but in a way that is surprising and counterintuitive.

Some networks are “smart.” They offer sophisticated services that can be delivered to very simple end-user devices on the “edge” of the network. Other networks are “dumb” — they offer only a very basic service and require that the end-user devices are intelligent. What’s smart about dumb networks is that they push innovation to the edge, giving end-users control over the pace and direction of innovation. Simplicity at the center allows for complexity at the edge, which fosters the vast decentralization of services.

Surprisingly, then, “dumb” networks are the smart choice for innovation and freedom.

The telephone network used to be a smart network supporting dumb devices (telephones). All the intelligence in the telephone network and all the services were contained in the phone company’s switching buildings. The telephone on the consumer’s kitchen table was little more than a speaker and a microphone. Even the most advanced touch-tone telephones were still pretty simple devices, depending entirely on the network services they could “request” by beeping the right tones.

In a smart network like that, there is no room for innovation at the edge. Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers. The services depend entirely on the central switches owned by the phone company. Centralized innovation means slow innovation. It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.

In fact, until 1968, AT&T restricted the devices allowed on the network to a handful of approved devices. In 1968, in a landmark decision, the FCC ruled in favor of the Carterfone, an acoustic coupler device for connecting two-way radios to telephones, opening the door for any consumer device that didn’t “cause harm to the system.”

That ruling paved the way for the answering machine, the fax machine, and the modem. But even with the ability to connect smarter devices to the edge, it wasn’t until the modem that innovation really accelerated. The modem represented a complete inversion of the architecture: all the intelligence was moved to the edge, and the phone network was used only as an underlying “dumb” network to carry the data.

Did the telecommunications companies welcome this development? Of course not! They fought it for nearly a decade, using regulation, lobbying, and legal threats against the new competition. In some countries, modem calls across international lines were automatically disconnected to prevent competition in the lucrative long-distance market. In the end, the Internet won. Now, almost the entire phone network runs as an app on top of the Internet.

The Internet is a dumb network, which is its defining and most valuable feature. The Internet’s protocol (transmission control protocol/Internet protocol, or TCP/IP) doesn’t offer “services.” It doesn’t make decisions about content. It doesn’t distinguish between photos and text, video and audio. It doesn’t have a list of approved applications. It doesn’t even distinguish between client and server, user and host, or individual versus corporation. Every IP address is an equal peer.

TCP/IP acts as an efficient pipeline, moving data from one point to another. Over time, it has had some minor adjustments to offer some differentiated “quality of service” capabilities, but other than that, it remains, for the most part, a dumb data pipeline. Almost all the intelligence is on the edge — all the services, all the applications are created on the edge-devices. Creating a new application does not involve changing the network. The Web, voice, video, and social media were all created as applications on the edge without any need to modify the Internet protocol.
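For readers who want to see this in practice, here is a minimal sketch in Python (a toy illustration with hypothetical names, not any real service) of innovation at the edge: a brand-new “application protocol” invented entirely in end-device software, using TCP only as a dumb pipe. Nothing in the network between the two endpoints needs to change or grant permission.

```python
# Two edge devices invent a new "service" (uppercase echo) over plain TCP.
# The network in between carries bytes; it neither knows nor cares what
# application is running on top of it.
import socket
import threading

def echo_server(sock):
    """A tiny service that exists only in edge software."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # our entire "application logic"

def run_demo():
    # The server binds to any free port; the network requires no registration.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=echo_server, args=(server,))
    t.start()

    # The client is an equal peer: it just connects and speaks the new protocol.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.sendall(b"hello, edge")
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply
```

The demo plays both edge roles on one machine, but the same code works across the real Internet, because the pipe in between is indifferent to what the applications at its ends are doing.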

So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge. An application with only two participants needs just two devices to support it, and it can run on the Internet. Contrast that to the telephone network, where a new “service,” like caller ID, had to be built and deployed on every company switch, incurring maintenance cost for every subscriber. So only the most popular, profitable, and widely used services got deployed.

The financial services industry is built on top of many highly specialized and service-specific networks. Most of these are layered atop the Internet, but they are architected as closed, centralized, and “smart” networks with limited intelligence on the edge.

Take, for example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the international wire transfer network. The consortium behind SWIFT has built a closed network of member banks that offers specific services: secure messages, mostly payment orders. Only banks can be members, and the network services are highly centralized.

The SWIFT network is just one of dozens of single-purpose, tightly controlled, and closed networks offered to financial services companies such as banks, brokerage firms, and exchanges. All these networks mediate the services by interposing the service provider between the “users,” and they allow minimal innovation or differentiation at the edge — that is, they are smart networks serving mostly dumb devices.

Bitcoin is the Internet of money. It offers a basic dumb network that connects peers from anywhere in the world. The bitcoin network itself does not define any financial services or applications. It doesn’t require membership registration or identification. It doesn’t control the types of devices or applications that can live on its edge. Bitcoin offers one service: securely time-stamped scripted transactions. Everything else is built on the edge-devices as an application. Bitcoin allows any application to be developed independently, without permission, on the edge of the network. A developer can create a new application using the transactional service as a platform and deploy it on any device. Even niche applications with few users — applications never envisioned by the bitcoin protocol creator — can be built and deployed.
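The phrase “securely time-stamped scripted transactions” is easier to grasp with a toy model. The sketch below is plain Python and is emphatically not the actual bitcoin protocol (which adds proof-of-work and peer-to-peer consensus, omitted here); it only illustrates the core trick of tamper-evident time-stamping: each entry commits to the hash of the one before it, so rewriting any old entry invalidates everything after it.

```python
# Toy hash-chained ledger (illustrative only, not the bitcoin protocol).
# Each entry records a payload, a timestamp, and the previous entry's hash.
import hashlib
import json
import time

def append_entry(chain, payload):
    """Append a time-stamped entry that commits to the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"payload": payload, "time": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; any edit to an old entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because every application built on the edge can rely on this one guarantee — an agreed, tamper-evident ordering of transactions — developers can layer arbitrary financial services on top without asking the network for anything more.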

Almost any network architecture can be inverted. You can build a closed network on top of an open network or vice versa, although it is easier to centralize than to decentralize. The modem inverted the phone network, giving us the Internet. The banks have built closed network systems on top of the decentralized Internet. Now bitcoin provides an open network platform for financial services on top of the open and decentralized Internet. The financial services built on top of bitcoin are themselves open because they are not “services” delivered by the network; they are “apps” running on top of the network. This arrangement opens a market for applications, putting the end user in a position of power to choose the right application without restrictions.

What happens when an industry transitions from using one or more “smart” and centralized networks to using a common, decentralized, open, and dumb network? A tsunami of innovation that was pent up for decades is suddenly released. All the applications that could never get permission in the closed network can now be developed and deployed without permission. At first, this change involves reinventing the previously centralized services with new and open decentralized alternatives. We saw that with the Internet, as traditional telecommunications services were reinvented with email, instant messaging, and video calls.

This first wave is also characterized by disintermediation — the removal of entire layers of intermediaries who are no longer necessary. With the Internet, this meant replacing brokers, classified ads publishers, real estate agents, car salespeople, and many others with search engines and online direct markets. In the financial industry, bitcoin will create a similar wave of disintermediation by making clearinghouses, exchanges, and wire transfer services obsolete. The big difference is that some of these disintermediated layers are multibillion-dollar industries that are no longer needed.

Beyond the first wave of innovation, which simply replaces existing services, is another wave that begins to build the applications that were impossible with the previous centralized network. The second wave doesn’t just create applications comparable to existing services; it spawns new industries on the basis of applications that were previously too expensive or too difficult to scale. By eliminating friction in payments, bitcoin doesn’t just make better payments; it introduces market mechanisms and price discovery to economic activities that were too small or inefficient under the previous cost structure.

We used to think “smart” networks would deliver the most value, but making the network “dumb” enabled a massive wave of innovation. Intelligence at the edge brings choice, freedom, and experimentation without permission. In networks, “dumb” is better.


Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

Star Trek’s “Infinite Diversity” and the Endless Frontier

Spock understood the importance of innovation for life and prosperity by Richard Lorenc

Last Friday, millions of Star Trek fans were saddened by the news that Leonard Nimoy, the actor who played the iconic character Spock on the series, had died at the age of 83 after a brief hospitalization.

I was among the multitude on social media who paid tribute to Nimoy by posting pictures, sayings, videos, and eulogies in remembrance of the man who brought “Live long and prosper” to the world.

The classic Vulcan farewell is not the only thoughtful gift from Nimoy and Spock. Another idea shared by the quintessential Vulcan was his people’s concept of “Infinite Diversity in Infinite Combinations,” or IDIC.

IDIC was the Vulcans’ subdued, yet profound, appreciation for diversity. They wore pendants representing IDIC and posted it like a religious icon in their homes, temples, and starships. It became the de facto symbol of the Vulcans and their intensely logical ways. It was as if they were saying, “Difference is essential to the universe, and we’ve seen far less than actually exists. We’ll never see the end of it – and that’s a good thing.”

That idea didn’t always sit well with space cowboys Kirk and McCoy, who wanted more concrete answers. But then humans are illogical. What else could Spock expect?

Like Star Trek generally, IDIC had a big impact on me. It’s an idea that still motivates and delights me when I think of the possibilities for humanity today, and particularly the opportunities for difference and diversity offered by markets.

If you view the market process as one of discovery – discovering new ways to combine old ideas, and imagining how to apply those ideas in service to others – you can see how it begins to reveal IDIC. With nothing holding back individuals’ creative energies, there’s no telling what orders and ideas might emerge, and there’s no end in sight to the frontiers of social and economic innovation.

The next time you walk a city street and gawk at the skyscrapers, or wander a supermarket and marvel at fresh strawberries in the winter, or gaze through a glowing box to see friends across the planet, take a moment to remember IDIC. Because of it, for the first time in history, our species truly can “live long and prosper.”

It’s fascinating – but it’s only logical.


Richard N. Lorenc is the Chief Operating Officer of FEE.

The Garage That Couldn’t Be Boarded Up: Uber and the jitney, or everything old is new again by Sarah Skwire

August Wilson. Jitney. 1979.

Last December, I used Uber for the first time. I downloaded the app onto my phone, entered my name, location, and credit card number, and told them where my daughters and I needed to go. The driver picked us up at my home five minutes later. I was able to access reviews that other riders had written for the same driver, to see a photograph of him and of the car that he would be using to pick me up, and to pay and tip him without juggling cash and credit cards and my two kids. Like nearly everyone else I know, I instantly became a fan of this fantastic new invention.

In January, I read Thomas Sowell’s Knowledge and Decisions for the first time. In chapter 8, Sowell discusses the early 20th-century rise of “owner operated bus or taxi services costing five cents and therefore called ‘jitneys,’ the current slang for nickels.” Sowell takes his fuller description of jitneys from transportation economist George W. Hilton’s “American Transportation Planning.”

The jitneys … essentially provided a competitive market in urban transportation with the usual characteristics of rapid entry and exit, quick adaptation to changes in demand, and, in particular, excellent adaptation to peak load demands. Some 60 percent of the jitneymen were part-time operators, many of whom simply carried passengers for a nickel on trips between home and work.

It sounded strangely familiar.

In February, I read August Wilson’s play, Jitney, written in 1979, about a jitney car service operating in Pittsburgh in the 1970s. As we watch the individual drivers deal with their often tumultuous personal relationships, we also hear about their passengers. The jitney drivers take people to work, to the grocery store, to the pawnshop, to the bus station, and on a host of other unspecified errands. They are an integral part of the community. Like the drivers in Sean Malone’s documentary No Van’s Land, they provide targeted transportation services to a neighborhood underserved by public transportation. We see the drivers in Jitney take pride in the way they fit into and take care of their community.

If we gonna be running jitneys out of here we gonna do it right.… I want all the cars inspected. The people got a right if you hauling them around in your car to expect the brakes to work. Clean out your trunk. Clean out the interior of your car. Keep your car clean. The people want to ride in a clean car. We providing a service to the community. We ain’t just giving rides to people. We providing a service.

That service is threatened when the urban planners and improvers at the Pittsburgh Renewal Council decide to board up the garage out of which the jitney service operates and much of the surrounding neighborhood. The drivers are skeptical that the improvements will ever really happen.

Turnbo: They supposed to build a new hospital down there on Logan Street. They been talking about that for the longest while. They supposed to build another part of the Irene Kaufman Settlement House to replace the part they tore down. They supposed to build some houses down on Dinwidee.

Becker: Turnbo’s right. They supposed to build some houses but you ain’t gonna see that. You ain’t gonna see nothing but the tear-down. That’s all I ever seen.

The drivers resolve, in the end, to call a lawyer and refuse to be boarded up. “We gonna run jitneys out of here till the day before the bulldozer come. Ain’t gonna be no boarding up around here! We gonna fight them on that.” They know that continuing to operate will allow other neighborhood businesses to stay open as well. They know that the choice they are offered is not between an improved neighborhood and an unimproved one, but between an unimproved neighborhood and no neighborhood at all. They know that their jitney service keeps their neighborhood running and that it improves the lives of their friends and neighbors in a way that boarded up buildings and perpetually incomplete urban planning projects never will.

Reading Sowell’s book and Wilson’s play in such close proximity got me thinking. Uber isn’t a fantastic new idea. It’s a fantastic old idea that has returned because the omnipresence of smartphones has made running a jitney service easier and more effective. Uber and other ride-sharing services, as we have all read and as No Van’s Land demonstrates so effectively, are subject to protests and interference by competitors, to punitive regulation from local governments, and to a host of other challenges to their enterprise. This pushback is nothing new. Sowell notes, “The jitneys were put down in every American city to protect the street railways and, in particular, to perpetuate the cross-subsidization of the street railways’ city-wide fare structures.”

Despite these common problems, Uber and other 21st-century jitney drivers do not face the major challenge that the drivers in Jitney do. They do not need to operate from a centralized location with a phone. Now that we all have phones in our pockets, the Uber “garage” is everywhere. It can’t be boarded up.


Sarah Skwire is a fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Bitcoin Technology: A Festival of the Commons

Open-source currencies create new property paradigms by Andreas Antonopoulos

Open-source technologies such as bitcoin are a combination of open-source software, common technology standards, and a participatory decentralized network. These layers create a three-tiered commons where innovation contributed by users adds to the common platform, which makes it better for everyone.

But for the last few hundred years, we have generally thought of goods as best belonging to the private domain. Consider that, in economic terms, the “tragedy of the commons” is a market-failure scenario where a shared public good is overexploited. In this scenario, each user has an incentive to maximize his or her own use until the good is depleted.

The example used to illustrate this economic theory is a grassland (a “village commons” in British English) that is unregulated and overgrazed by cattle until it deteriorates to a muddy field. The tragedy of the commons occurs when individual self-interest, combined with a large economic externality (the cost to the commons), creates a market failure for all.

The opposite of the tragedy of the commons is called a “comedy of the commons,” but I prefer to use the term “festival of the commons,” which conjures a better visual example: a grassland used to hold a community festival that benefits everyone. The comedy of the commons was first proposed as an economic theory governing public goods such as knowledge, where individual use of the common good does not deplete the good but instead adds to it.

The sharing economy, which consists of open-source software (for example, Linux), participatory publishing (Wikipedia), and participatory networks (BitTorrent), creates conditions where increased participation adds to the good’s underlying value and benefits all participants. In such cases, the underlying good is knowledge, software, or a network, and its availability is not depleted by individual use.

In bitcoin’s case, each tier contributes: software applications are themselves open source and add to the commons, offering new capabilities for all subsequent innovators. Enhancements to the protocol bring new features across the entire network, allowing the ecosystem to build new services around them. Finally, as more users adopt the technology and add their resources to the P2P network, the scalability and security of the entire network increase.

Open-source currencies have another layer that multiplies these underlying effects: the currency itself. Not only is the investment in infrastructure and innovation shared by all, but the shared benefit may also manifest in increased value for the common currency. Currency is the quintessential shared good, because its value correlates strongly to the economic activity that it enables. In simple terms, a currency is valuable because many people use it, and the more who use it, the more valuable it becomes. Unlike national currencies, which are generally restricted to use within a country’s borders, digital currencies like bitcoin are global and can therefore be readily adopted and used by almost any user who is part of the networked global society.
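One common formalization of “the more who use it, the more valuable it becomes” — a Metcalfe-style assumption layered on top of the essay, not a claim made in it — counts the potential trading pairs a shared currency enables: with n users there are n(n-1)/2 possible pairs, so potential connections grow roughly with the square of adoption.

```python
# Toy network-effect arithmetic (an assumed formalization, not from the
# essay): proxy a shared currency's usefulness by the number of distinct
# user pairs it can connect.
def potential_pairs(n):
    """Number of distinct pairs among n users: n choose 2."""
    return n * (n - 1) // 2
```

Doubling the user base from 10 to 20 grows the potential pairs from 45 to 190 — more than fourfold — which is one intuition behind the virtuous cycle of adoption and value described here.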

The underlying festival-of-the-commons effect created by open-source software, shared protocols, and P2P networks feeds into the value of the overlaid shared currency. While this effect may be obscured in the early stages of adoption by speculation and high volatility, in the long run, it may create a virtuous cycle of adoption and value that becomes a true festival of the commons.

The festival is now open. Who will join it?


Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

The New Frontier: Peer technologies are enabling self-government in the cloud by Max Borders

Today there is no territory left to settle, but human freedom is about to enjoy a renaissance.

Imagine we’re standing on a ridge. We look out on a valley awash in sunlight — surveyors contemplating a new city. We squint and ask: What will it look like? Will it have its own rules, culture, and commercial life? Will it be a bustling metropolis or a constellation of villages?

The Internet has only been with us for about 20 years. If the Northwest Ordinance and the Homestead Acts were legal sanction for expansion across the American continent, networking technologies are invitations for people both to spread out and to connect with others in novel ways. This opportunity has important implications.

For much of history, we have thought of the law and the land as being inseparable, particularly as the conquerors were so often the lawgivers. Not anymore. For the first time, jurisdiction and territory can be separated to a great degree thanks to innovation.

Many of the administrative functions of jurisdiction can increasingly be found in the cloud. It’s early, yes. The network is fragile. But we will soon be able to pass in and out of legal systems, selecting those that benefit us, employing true self-government. It is time to follow Thoreau, who in Civil Disobedience asked, “Is a democracy, such as we know it, the last improvement possible in government?”

Already, we can buy and sell using cryptocurrencies such as bitcoin. We can take Lyft downtown, bypassing obsolete local ordinances on the way. Google and Apple are selling us privacy again. These are just the first brushfires of a new form of social coordination in which technology itself makes it possible to upgrade our social operating systems.

Peer-to-peer interaction means we’re a nation of joiners again — on steroids. It seemed for a while we had lost the republic to special interests. But the hopeless calculus of cronyism — concentrated benefits and dispersed costs — is being flipped on its head. Internetworking means we’re enjoying the fruits of the sharing economy — quite rapidly, in fact. Cronies and officials are finding it hard to play catch-up.

New constituencies are forming around these new benefits. Special interests that once squeaked to get oil are confronted by battalions bearing smart phones. Citizens are voting more with their dollars and their devices, fed up with leaving prayers in the voting booth. Free association is now ensured by design, not by statute.

Technology that changes the incentives can change the institutions. The rules and regulations we currently live under came out of our democratic operating system (DOS). It used to be that these institutions shaped our incentives to a great degree. Now we have ways of coordinating our activities that go right around state intermediaries, corporate parasites, and moribund laws.

The incentives for social change are strong, so strong that the gales of creative destruction can finally blow apart much of the state apparatus, which seemed impervious to reform. And that’s a good thing for a self-governing people.

That celebrated old historian Frederick Jackson Turner summed up his famous treatise on the American West, agreeing — perhaps despite himself — that the people of the frontier had been moving away from hierarchy:

In spite of environment, and in spite of custom, each frontier did indeed furnish a new field of opportunity, a gate of escape from the bondage of the past; and freshness, and confidence, and scorn of older society, impatience of its restraints and its ideas, and indifference to its lessons, have accompanied the frontier.

But in 1893, as Turner wrote that passage, the frontier had already closed.

Today, the seekers and strivers have reopened the frontier, no longer a peculiarly American terrain. It’s a space beyond nation or territory — without end and without the need for Caesar’s imprimatur. As people start to gather there, there will be every form of vice, as in the past. But there will also be rapid advance and innovative wonders. Everything will be subject to continuous trial, error, and revision. And paradoxically, that infinite space in which we can spread out and try new things allows us to be closer than ever before.

We’re becoming cultural cosmopolitans, radical communitarians, and standard bearers for a right of exit. Most importantly, we’re freer than ever before. As my colleague Jeffrey Tucker writes on the workers’ revolution, “This whole approach might be considered a very advanced stage of capitalism in which third parties exercise ever less power over who can and cannot participate.”

In this infinite space, there will be little room for political progressives with big plans. They’ll find it difficult to impose hierarchy on the new frontier folk who will run among network nodes. The progressive program, as such, will dwindle down to what Steven B. Johnson calls “peer progressivism.”

Rejecting the dirigisme of today’s progressives, Johnson writes:

We don’t think that everything in modern life should be re-engineered to follow the “logic of the Internet.” We just think that society has long benefited from non-market forms of open collaboration, and that there aren’t enough voices in the current political conversation reminding us of those benefits.

Tocqueville couldn’t have said it any better. If such becomes the sum of tomorrow’s progressivism, we might all be headed for a great convergence, where once we were as stark and separate as red and blue.


Max Borders is the editor of The Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

How Far Can the P2P Revolution Go? Will the sharing economy replace the State? by Jeffrey A. Tucker

How far can the peer-to-peer revolution be pushed? It’s time we start to speculate, because history is moving fast. We need to dislodge from our minds our embedded sense of what’s possible.

Right now, we can experience a form of commercial relationship that was unknown just a decade ago. If you need a ride in a major city, you can pull up the smartphone app for Uber or Lyft and have a car arrive in minutes. It’s amazing to users because they get their first taste of what consumer service in taxis really feels like. It’s luxury at a reasonable price.

If your sink is leaking, you can click TaskRabbit. If you need a place to stay, you can count on Airbnb. In Manhattan, you can depend on WunWun to deliver just about anything to your door, from toothpaste to a new desktop computer. If you have a skill and need a job, or need to hire someone, you can go to oDesk or eLance and post a job you can do or a job you need done. If you grow food or make great local dishes, you can post at a place like and find a prepaid customer base.

These are the technologies of the peer-to-peer or sharing economy. You can be a producer, a consumer, or both. It’s a different model — one characterized by the word “equipotency,” meaning that the power to buy and sell is widely distributed throughout the population. It’s made possible through technology.

The app economy — an emergent order created by neither government nor legislation — has enabled these developments, and they are changing the world.

These technologies are not temporary. They cannot and will not be uninvented. On the contrary, they will continue to develop and expand in both sophistication and geographic reach. This is what happens when technology is especially useful. Whether it is the horseshoe of the Middle Ages or the distributed networks of our time, when an innovation so dramatically improves our lives, it changes the course of history. This is what is happening in our time.

The applications of these P2P networks are enormously surprising. The biggest surprise in my own lifetime is how they have been employed to make payment systems P2P — no longer based on third-party trust — through what’s called the blockchain. The blockchain can commodify and title any bundle of information and make it transferable, with timestamps, in a way that cannot be forged, all at nearly zero cost.
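The property Tucker describes — timestamped bundles of information linked so that tampering anywhere in the chain is detectable — can be illustrated with a toy hash chain. This is a minimal sketch, not Bitcoin's actual data structures or consensus protocol; the record fields and function names here are invented for illustration.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Bundle arbitrary data with a timestamp and the previous
    block's hash, then fingerprint the whole record with SHA-256."""
    record = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    # Hash is computed over the record body, so any later change
    # to the data, timestamp, or link invalidates it.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def chain_is_valid(blocks):
    """Recompute each block's hash and check the links between blocks."""
    for i, block in enumerate(blocks):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False  # block was altered after it was sealed
        if i > 0 and block["prev_hash"] != blocks[i - 1]["hash"]:
            return False  # the chain of custody is broken
    return True

# A "title" (invented example data) is issued, then transferred:
genesis = make_block({"title": "deed #1", "owner": "alice"}, prev_hash="0" * 64)
transfer = make_block({"title": "deed #1", "owner": "bob"}, genesis["hash"])

print(chain_is_valid([genesis, transfer]))  # True
```

Rewriting any earlier record — say, changing the owner back to "alice" — changes that record's hash and breaks the `prev_hash` link stored in every later block, which is what makes the ledger tamper-evident. A real blockchain adds proof-of-work and thousands of independent verifiers on top of this basic structure.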

An offshoot of blockchain-distributed technology has been the invention of a private currency. For half a century, it has been a dream of theorists who saw that taking money out of government hands would do more for prosperity and peace than any single other step.

The theorists dreamed, but they didn’t have the tools. They hadn’t been invented yet. Now that the tools exist, the result is bitcoin, which gives rise to the hope that we have the makings of a new international currency managed entirely by the private sector and the global market system.

These new P2P systems have connected the world like never before. They hold out the prospect of unleashing unprecedented human energy and the creativity that comes with it. They give billions of people a chance to integrate themselves into the worldwide division of labor from which they have thus far been excluded.

With 3-D printing and computer-aided design files distributed on digital networks, more people can become their own manufacturers. These same people can be designers and distribute the results to the world. Such a system cuts out every barrier that stands between people and their material aspirations — barriers such as product regulation, patents, and excise taxes.

It’s time that we begin to expect the unexpected. What else is possible?

Entrepreneurs are already experimenting with an Uber model of delivering some form of health care online. In some areas, they will bring a nurse to you to give you a flu shot. Other health services are on the way, causing some to speculate on the return of at-home medical visits paid out of pocket (rather than via insurance).

What does this innovation do for centralist solutions like Obamacare? It changes the entire dynamic of service provision. The medical establishment is already protesting that this consumer-based, one-off service approach runs contrary to primary and preventive care — a critique that fails to consider that there is no reason why P2P technology can’t provide such care.

How much can things change? To what extent will they affect the structure of our political lives? This is where matters get really interesting. A feature of P2P is the gradual elimination of third parties as agents who stand between individuals and their desire to cooperate one to one. We use such third parties because we believe we need them. Credit card companies serve a need. Banks serve a need. Large-scale corporations serve a need.

One theory holds that the State exists to do for us what we can’t do for ourselves. It’s the ultimate third-party provider. We elect people to serve as our representatives, and they bring our voices to the business of government so that we can get the services we want. That’s the idea, anyway.

But once government gets the power to do things, it expands its power in the interest of the ruling elite. The taxicab monopoly was no more necessary than the government postal service, but the growth of P2P technology has increasingly exposed the reality of how unnecessary the State as a third-party mediator really is. The post office is being pushed into obsolescence. It’s hard to see how the municipal taxi monopoly can survive a competitive contest with P2P technology systems.

Policing is an example of a service that people think is absolutely necessary. The old perception is that government needs to provide this service because most people cannot do it for themselves. But what if policing, too, could employ P2P technology?

What if, when there is a threat, whether to you or to others, you could open an app on your phone and call the private police immediately? You can imagine how such a technology could learn to filter out static and discern threat level based on algorithms and immediately supplied video evidence. We already see the first attempts in this direction with the Peacekeeper app.

Rather than a tax-funded system that has become a threat to the innocent as much as the guilty, we would have a system rooted in consumer service. It might be similar to the private security systems used by all businesses today, except it would apply to individuals. It would survive not through taxation but subscription — voluntary and noncoercive.

How much further can we take this? Can courts and laws themselves be ported to the online world, using the blockchain for verifying contracts, managing conflicts, and even issuing securities? The large is experimenting with this idea — not for ideological reasons but simply because such systems work better.

And here we find the most compelling case for optimism for the cause of human liberty. These technologies are emerging from within the private sector, not from government. They work better to serve human needs than the public-sector alternative. Their use and their growth depend not on ideological conversion but on their capacity to serve universal human needs.

The ground really is shifting beneath our feet, despite all odds. It is still an age of leviathan. But based on technology and the incredible creativity of entrepreneurs, that leviathan no longer seems like a permanent feature of the world.

ABOUT JEFFREY A. TUCKER

Jeffrey Tucker is a distinguished fellow at FEE, CLO of the startup, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events.