Tag Archive for: Technology

New York’s Taxi Cartel Is Collapsing — Now They Want a Bailout! by Jeffrey A. Tucker

An age-old rap against free markets is that they give rise to monopolies that use their power to exploit consumers, crush upstarts, and stifle innovation. It was this perception that led to “trust busting” a century ago, and continues to drive the monopoly-hunting policy at the Federal Trade Commission and the Justice Department.

But if you look around at the real world, you find something different. The actually existing monopolies that do these bad things are created not by markets but by government policy. Think of sectors like education, mail, courts, money, or municipal taxis, and you find a reality that is the opposite of the caricature: public policy creates monopolies while markets bust them.

For generations, economists and some political figures have been trying to bring competition to these sectors, but with limited success. The case of taxis makes the point. There is no way to justify the policies that keep these cartels protected. And yet they persist — or, at least, they have persisted until very recently.

In New York, we are seeing a collapse as inexorable as the fall of the Soviet Union itself. The app economy introduced competition in a surreptitious way. It invited people to sign up to drive people here and there and get paid for it. No more standing in lines on corners or being forced to split fares. You can stay in the coffee shop until you are notified that your car is there.

In less than one year, we’ve seen the astonishing effects. Not only has the price of taxi medallions fallen dramatically from a peak of $1 million, it’s not even clear that there is a market remaining at all for these permits. There hasn’t been a single medallion sale in four months. They are on the verge of becoming scrap metal or collector’s items destined for eBay.

What economists, politicians, lobbyists, writers, and agitators failed to accomplish over many decades, a clever innovation has achieved in just a few years. No one on the planet could have predicted this collapse just five years ago. Now it is a living fact.

Reason TV does a fantastic job of covering what’s going on with taxis in New York. Now if this model can be applied to all other government-created monopolies, we might see genuine progress toward a truly competitive economy. After all, it turns out that the free market is the best anti-monopoly weapon ever developed.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.  Follow on Twitter and Like on Facebook.

Video Game Developers Face the Final Boss: The FDA by Aaron Tao

As I drove to work the other day, I heard a very interesting segment on NPR that featured a startup designing video games to improve cognitive skills and relieve symptoms associated with a myriad of mental health conditions.

One game, Project Evo, has shown good preliminary results in training players to ignore distractions and stay focused on the task at hand:

“We’ve been through eight or nine completed clinical trials, in all cognitive disorders: ADHD, autism, depression,” says Matt Omernick, executive creative director at Akili, the Northern California startup that’s developing the game.

Omernick worked at LucasArts for years, making Star Wars games, where players attack their enemies with lightsabers. Now, he’s working on Project Evo. It’s a total switch in mission, from dreaming up best-sellers for the commercial market to designing games to treat mental health conditions.

“The qualities of a good video game, things that hook you, what makes the brain — snap — engage and go, could be a perfect vessel for actually delivering medicine,” he says.

In fact, the creators believe their game will be so effective it might one day reduce or replace the drugs kids take for ADHD.

This all sounds very promising.

In recent years, many observers (myself included) have expressed deep concerns that we are living in the “medication generation,” defined by the rapidly increasing numbers of young people (a trend that seems to have extended even to toddlers and infants!) taking psychotropic drugs.

As experts and laypersons continue to debate the long-term effects of these substances, the news of intrepid entrepreneurs creating non-pharmaceutical alternatives to treat mental health problems is definitely a welcome development.

But a formidable final boss stands in the way:

[B]efore they can deliver their game to players, they first have to go through the Food and Drug Administration — the FDA.

The NPR story goes on to detail how navigating the FDA’s bureaucratic labyrinth is akin to the long, grinding campaign required to clear the final dungeon in any Legend of Zelda game. Pharmaceutical companies are intimately familiar with the FDA’s slow and expensive approval process for new drugs, and for this reason, it should come as no surprise that Silicon Valley companies do their best to avoid government regulation. One venture capitalist goes so far as to say, “If it says ‘FDA approval needed’ in the business plan, I myself scream in fear and run away.”

Dynamic, nimble startups are much more in tune with market conditions than the ever-growing regulatory behemoth that is defined by procedure, conformity, and irresponsibility. As a result, conflict between these two worlds is inevitable:

Most startups can bring a new video game to market in six months. Going through the FDA approval process for medical devices could take three or four years — and cost millions of dollars.

In the tech world, where app updates and software patches are part of every company’s daily routine just to keep up with consumer habits, technology can become outdated in the blink of an eye. A regulatory hold on a product can spell a death sentence for any startup trying to stay ahead of fierce market competition.

Akili is the latest victim to get caught in the tendrils of the administrative state, and worst of all, those of the FDA, which distinguished political economist Robert Higgs has described as “one of the most powerful of federal regulatory agencies, if not the most powerful.” The agency’s awesome authority extends to over twenty-five percent of all consumer goods in the United States, and the FDA thus “routinely makes decisions that seal the fates of millions.”

Despite its perceived image as the nation’s benevolent guardian of health and well-being, the FDA’s actual track record is anything but, and its failures have been extensively documented in a vast economic literature.

The “knowledge problem” has foiled the whims of central planners and social engineers in every setting, and the FDA is not immune. By taking a one-size-fits-all approach to regulatory policy, it fails to take into account the individual preferences, social circumstances, and physiological attributes of the people who compose a diverse society.

For example, people vary widely in their responses to drugs, depending on variables that range from dosage to genetic makeup. In a field as complex as human health, an institution forcing its way on a population is bound to cause problems (for a particularly egregious example, see what happened with the field of nutrition).

The thalidomide tragedy of the 1960s is usually cited as the reason we need a centralized regulatory agency staffed by altruistic public servants to keep the market from being flooded with toxins, snake oils, and other harmful substances. However, this benefit needs to be weighed against the costs of withholding beneficial products.

For example, the FDA’s delay of beta blockers, which were widely available in Europe to reduce heart attacks, was estimated to have cost tens of thousands of lives. Despite this infamous episode and other repeated failures, the agency cannot overcome the institutional incentives it faces as a government bureaucracy. These incentives strongly skew its officials toward avoiding the risk of being blamed for visible harm. Here’s how the late Milton Friedman summarized the dilemma with his usual wit and eloquence:

Put yourself in the position of an FDA bureaucrat considering whether to approve a new, proposed drug. There are two kinds of mistakes you can make from the point of view of the public interest. You can make the mistake of approving a drug that turns out to have very harmful side effects. That’s one mistake. That will harm the public. Or you can make the mistake of not approving a drug that would have very beneficial effects. That’s also harmful to the public.

If you’re such a bureaucrat, what’s going to be the effect on you of those two mistakes? If you make a mistake and approve a product that has harmful side effects, you are a devil incarnate. Your misdeed will be spread on the front page of every newspaper. Your name will be mud. You will get the blame. If you fail to approve a drug that might save lives, the people who would object to that are mostly going to be dead. You’re not going to hear from them.

Critics of America’s dysfunctional healthcare system have pointed out the significant role of third-party spending in driving up prices, and how federal and state regulations have created perverse incentives and suppressed the functioning of normal market forces.

In regard to government restrictions on the supply of medical goods, the FDA deserves special blame for driving up the costs of drugs, slowing innovation, and denying treatment to the terminally ill, all while demonstrating no competency in product safety.

Going back to the NPR story, a Pfizer representative was quoted as saying that “game designers should go through the same FDA tests and trials as drug manufacturers.”

Those familiar with the well-known phenomenon of regulatory capture and the basics of public choice theory should not be surprised by this attitude. Existing industries, with their legions of lobbyists, come to dominate the regulatory apparatus and learn to manipulate the system to their advantage, at the expense of new entrants.

Akili and other startups hoping to challenge the status quo would have to run the gauntlet set up by the “complex leviathan of interdependent cartels” that makes up the American healthcare system. I can only wish them the best and hope that Schumpeterian creative destruction eventually sweeps through the whole field of medicine.

Abolishing the FDA and eliminating its too-often abused power to withhold innovative medical treatments from patients and providers would be one step toward genuine healthcare reform.

A version of this post first appeared at The Beacon.

Aaron Tao

Aaron Tao is the Marketing Coordinator and Assistant Editor of The Beacon at the Independent Institute. Follow him on Twitter here.

Will Robots Put Everyone Out of Work? by Sandy Ikeda

Will workplace automation make the rich richer and doom the poor?

That could happen soon, warns Paul Solman, economics correspondent for PBS NewsHour. He’s talking to Jerry Kaplan, author of a new book that seems to combine Luddism with fears about inequality.

PAUL SOLMAN: And the age-old fear of displaced workers, says Kaplan, is finally, irrevocably upon us.

JERRY KAPLAN: What happens to people who simply can’t acquire or don’t have the skills that are going to be needed in the new economy?

PAUL SOLMAN: Well, what is going to happen to them?

JERRY KAPLAN: We’re going to see much worse income inequality. And unless we take some humanitarian actions, the truth is, they’re going to starve and live in poverty and then die.

PAUL SOLMAN: Kaplan offers that grim prognosis in a new book, Humans Need Not Apply. He knows, of course, that automation has been replacing labor for 200 years or more, for decades, eliminating relatively high-paying factory jobs in America, and that new jobs have more than kept pace, but not anymore, he says.

I haven’t read Kaplan’s book, but you can get a sense of the issue from this video.

The fear is that, unlike in the past, when displaced workers could learn new skills for a different industry, advanced “thinking machines” will soon fill even highly skilled positions, making it that much harder to find a job that pays a decent wage. And while the Luddite argument assumes that the number of jobs in an economy is fixed, the fear now is that whatever jobs may be created will simply be filled by even smarter machines.

This new spin sounds different, but it’s essentially the same old Luddite fallacy on two levels. First, while it’s true that machinery frequently substitutes for labor in the short term, automation tends to complement labor in the long term; and, second, the primary purpose of markets is not to create jobs per se but to create successful ventures by satisfying human wants and needs.

While I understand that Kaplan offers some market-oriented solutions, the mainstream media has emphasized the more alarmist aspects of his thesis. The Solmans of the world would like the government to respond with regulations to slow or prevent the introduction of artificial intelligence — or to at least subsidize the kind of major labor-force adjustments that such changes appear to demand.

Short-Term Substitutes, Long-Term Complements

Fortunately, Henry Hazlitt long ago worked out in a clear, careful, and sympathetic way the consequences of innovations on employment in his classic book, Economics in One Lesson. Here’s a brief outline of the chapter relevant to our discussion, “The Curse of Machinery”:

(As Hazlitt notes, not all innovations are “labor-saving.” Many simply improve the quality of output, but let’s put that to one side. Let’s also put aside the very real problem that raising the minimum wage will artificially accelerate the trend toward automation.)

Suppose a person who owns a coat-making business invests in a new machine that makes the same number of coats with half the workers. (Assume for now that all employees work eight-hour days and earn the going wage.) What’s easy to see is that, say, 50 people are laid off; what’s harder to see is that other people will be hired to build that new machine. If the new machine does reduce the business’s cost, however, then presumably it takes fewer than 50 people to build it. If it takes, say, 30 people, there still appears to be a net loss of 20 jobs overall.

But the story doesn’t end there. Assuming the owner doesn’t lower her price for the coats she sells, Hazlitt notes that there are three things she can do with the resulting profit. She can use it to invest in her own business, to invest in some other business, or to spend on consumption goods for herself and others. Whichever she does means more production and thus more employment elsewhere.

Moreover, competition in the coat industry will likely lead her rivals to adopt the labor-saving machinery and to produce more coats. Buying more machines means more employment in the machine-making industry, and producing more coats will, other things equal, lower the price of coats.

Now, buying more machines will probably mean she has to hire more workers to operate or maintain them, and lower coat prices mean that consumers will have more disposable income to spend on goods in general, including coats.

The overall effect is to increase the demand for labor and the number of jobs, which conforms to our historical experience in many industries. So, if all you see are the 50 people initially laid off, well, you’ve missed most of the story.
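To make Hazlitt’s bookkeeping concrete, here is a minimal sketch in Python. The 50 and 30 come from the example above; the other figures are purely hypothetical placeholders for the harder-to-see employment effects:

    # The seen: layoffs at the coat factory (from Hazlitt's example)
    jobs_lost_at_factory = 50
    # The less seen: workers hired to build the new machine (from the example)
    jobs_building_machines = 30
    # The unseen: hypothetical hires funded by the owner's new profit
    # (reinvestment, other investments, or consumption spending)
    jobs_from_profit_spending = 15
    # The unseen: hypothetical hires as rivals buy machines and expand output
    jobs_from_industry_expansion = 10

    net_change = (jobs_building_machines + jobs_from_profit_spending
                  + jobs_from_industry_expansion - jobs_lost_at_factory)
    print(net_change)  # positive overall; counting only the layoffs gives -50

Tally only the first line and you get the “seen”; the rest is what the one-lesson economist is trained to look for.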

Despite claims to the contrary, it’s really no different in the case of artificial intelligence.

Machines might substitute for labor in the short term, but in the long term they complement labor and increase its productivity. Yes, new machines used in production will be more sophisticated and do more things than the old ones, but that shouldn’t be surprising; that’s what new machines have done throughout history.

And as I’ve written before in “The Breezes of Creative Destruction,” it usually takes several years for an innovation — even something as currently ubiquitous as smartphones — to permeate an economy. (I would guess that we each could name several people who don’t own one.) This gives people time to adjust by moving, learning new skills, and making new connections. Hazlitt recognizes that not everyone will adjust fully to the new situation, perhaps because of age or disability. He responds,

It is altogether proper — it is, in fact, essential to a full understanding of the problem — that the plight of these groups be recognized, that they be dealt with sympathetically, and that we try to see whether some of the gains from this specialized progress cannot be used to help the victims find a productive role elsewhere.

I’m pretty sure Hazlitt means that voluntary, noncoercive actions and organizations should take the lead in filling this compassionate role.

In any case, what works at the level of a single industry also works across all industries. The same processes that Hazlitt describes will operate as long as markets are left free to adjust. Using government intervention to deliberately stifle change may save the jobs we see, but it will destroy the many more jobs that we don’t see — and worse.

More Jobs, Less Work, Greater Well-Being

Being able to contribute to making one’s own living is probably essential to human happiness. And economic development has indeed meant that we’ve been spending less time working.

Although it’s hard to calculate accurately how many hours per week our ancestors worked — and some claim that people in preindustrial society had more leisure time than industrial workers — the best estimate is that the work week in the United States fell from about 70 hours in 1850 to about 40 hours today. Has this been a bad thing? Has working less led to human misery? Given the track record of relatively free markets, that’s a strange question to ask.

Take, for example, this video by Swedish doctor Hans Rosling about his mother’s washing machine. It’s a wonderful explanation of how this particular machine, sophisticated for its day, enabled his mother to read to him, which helped him to then become a successful scientist.

I had lunch with someone who was recently laid off and whose husband has a fulfilling but low-paying job. Despite this relatively low family income, she was able to fly to New York for a weekend to attend a U2 concert, take a class at an upscale yoga studio in Manhattan, and share a vegan lunch with an old friend. Our grandparents would have been dumbfounded!

As British journalist Matt Ridley puts it in his book The Rational Optimist,

Innovation changes the world but only because it aids the elaboration of the division of labor and encourages the division of time. Forget wars, religions, famines and poems for the moment. This is history’s greatest theme: the metastasis of exchange, specialization and the invention it has called forth, the “creation” of time.

The great accomplishment of the free market is not that it creates jobs (which it does) but that it gives us the time to promote our well-being and to accomplish things no one thought possible.

If using robots raises the productivity of labor, increases output, and expands the amount, quality, and variety of goods each of us can consume — and also lowers the hours we have to work — what’s wrong with that? What’s wrong with working less and having the time to promote the well-being of ourselves and of others?

In a system where people are free to innovate and to adjust to innovation, there will always be enough jobs for whoever wants one; we just won’t need to work as hard in them.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.

Environmental Doom-mongering and the Myth of Vanishing Resources by Chelsea German

Media outlets ranging from Newsweek and Time to National Geographic and even the Weather Channel all recently ran articles on the so-called “Overshoot Day,” which is defined by its official website as the day of the year

When humanity’s annual demand for the goods and services that our land and seas can provide — fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption — exceeds what Earth’s ecosystems can renew in a year.

This year, the world allegedly reached the Overshoot Day on August 13th. Overshoot Day’s proponents claim that, having used up our ecological “budget” for the year and entered into “deficit spending,” all consumption after August 13th is unsustainable.

Let’s look at the data concerning resources that, according to Overshoot Day’s definition, we are consuming unsustainably. (We’ll leave aside carbon dioxide absorption — as that issue is more complex — and focus on all the other resources).

Fruits and vegetables

As millions of people have risen from extreme poverty and starvation over the past few decades, the world has come to consume more fruits and vegetables than before. We are also producing more fruits and vegetables per person than before. That is partly because of increasing yields, which allow us to extract more food from less land. Consider, for example, the long-run rise in vegetable yields.

Meat and fish

As people in developing countries grow richer, they consume more protein (i.e., meat). The supply of meat and fish per person is rising to meet the increased demand, just as with fruits and vegetables. Overall dietary supply adequacy is, therefore, increasing.

Wood

It is true that the world is losing forest area, but there is cause for optimism. The United States has more forest area today than it did in 1990.

As Ronald Bailey says in his new book The End of Doom, “In fact, except in the cases of India and Brazil, globally the forests of the world have increased by about 2 percent since 1990.”

As the people of India and Brazil grow wealthier and as new forest-sparing technologies spread, those countries will likely follow suit. To quote Jesse H. Ausubel:

Fortunately, the twentieth century witnessed the start of a “Great Restoration” of the world’s forests. Efficient farmers and foresters are learning to spare forestland by growing more food and fiber in ever-smaller areas. Meanwhile, increased use of metals, plastics, and electricity has eased the need for timber. And recycling has cut the amount of virgin wood pulped into paper.

Although the size and wealth of the human population has shot up, the area of farm and forestland that must be dedicated to feed, heat, and house this population is shrinking. Slowly, trees can return to the liberated land.

Cotton

Cotton yields are also increasing — as is the case with so many other crops. Not only does this mean that we will not “run out” of cotton (as the Overshoot Day proponents might have you believe), but it also means consumers can buy cheaper clothing.

The data show U.S. cotton yields rising even as cotton prices fall.

While it is true that humankind is consuming more, innovations such as GMOs and synthetic fertilizers are also allowing us to produce more. Predictions of natural resource depletion are not new.

Consider the famous bet between the environmentalist Paul Ehrlich and economist Julian Simon: Ehrlich bet that the prices of five essential metals would rise as the metals became scarcer, exhausted by the needs of a growing population. Simon bet that human ingenuity would rise to the challenge of growing demand, and that the metals would decrease in price over time. Simon and human ingenuity won in the end. (Later, the prices of many metals and minerals did increase, as rapidly developing countries drove up demand, but those prices are starting to come back down again).

To date, humankind has never exhausted a single natural resource. To learn more about why predictions of doom are often exaggerated, consider watching Cato’s recent book forum, The End of Doom.

A version of this post first appeared at Cato.org.

Chelsea German

Chelsea German works at the Cato Institute as a Researcher and Managing Editor of HumanProgress.org.

RELATED ARTICLE: EPA’s Highest Paid Employee, “Climate Change Expert,” Sentenced to 32 Months for Fraud, Says Lying Was a ‘Rush’

How Minimum Wages Discourage Entrepreneurship by Donald J. Boudreaux

In a letter to the Wall Street Journal, Brian Collins asks, “Do you truly believe that absent any increase in the minimum wage that Wendy’s or any other business will suspend efforts to develop and implement new forms of automation that promise to reduce staff levels?”

The answer is “no.” Contrary to Mr. Collins’s implication, however, this fact does nothing to excuse raising the minimum wage.

Even in a world in which market forces naturally promote automation, raising the minimum wage has two pernicious effects.

First, it causes the rate of automation to be faster than it would be if the minimum wage were not raised. That is, raising the minimum wage results in automation being introduced at a rate that is too fast given the size of the low-skilled labor force.

Second, raising the minimum wage destroys incentives for entrepreneurs and businesses to find ways to profitably employ workers whose limited skills prevent them from producing hourly outputs valued at least as high as the minimum wage.

The first effect throws some low-skilled workers out of jobs that they would otherwise retain, while the second effect ensures that no one has incentives to find ways to profitably employ these and other low-skilled workers.

If it is inhumane to outlaw the profitable employment of those workers whose skills are the least valuable, then the minimum wage is deeply inhumane.

If the government instituted a minimum wage of $100 per hour and, therefore, made unlawful the profitable employment of all those people whose skills are too meager to enable them to produce at least $100 worth of output per hour, there would be a national uproar — and rightly so.
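The underlying logic reduces to a one-line profitability test. In this sketch, the $100 comes from the thought experiment above and the $7.25 is today’s federal minimum wage; the $9.00 productivity figure and the $15.00 wage are illustrative placeholders:

    def legally_employable(hourly_output_value, minimum_wage):
        # An employer can profitably hire only if the worker's hourly
        # output is worth at least the legally mandated wage
        return hourly_output_value >= minimum_wage

    # A worker whose skills produce $9.00 of value per hour:
    print(legally_employable(9.00, 7.25))    # True
    print(legally_employable(9.00, 15.00))   # False: priced out of work
    print(legally_employable(9.00, 100.00))  # False: the $100 thought experiment

The $100 case and the $15 case differ only in degree: both outlaw the employment of anyone whose output falls below the line.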

Yet when the government implements such a policy in a way that outlaws the profitable employment only of people whose skill sets are among the lowest, relatively few people object, and many people — especially “Progressives” — applaud the policy as humane.

How sad. And how especially sad that many economists today, who above all should know better, lend their authority to such an inhumane policy.

A version of this letter first appeared at Café Hayek.

Donald J. Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

Capitalists from Outer Space by B.K. Marcus

When the aliens stop trifling with crop circles, bumpkin abduction, and indelicate probes and finally introduce themselves to the rest of humanity, will they turn out to be partisans of central planning, interventionism, or unhampered markets?

This is not the question asked by the Search for Extraterrestrial Intelligence (SETI) Institute, but whether or not the institute’s scientists realize it, the answer is crucial to their search.

Signs of Intelligent Life

The SETI Institute was founded by Frank Drake and the late Carl Sagan. Its scientists do not believe we have been visited yet. UFO sightings and abduction stories don’t stand up under scientific scrutiny, they say. Nor are they waiting for flying saucers. Because the aliens’ signals will likely reach Earth before their spaceships do, SETI monitors the skies for transmissions from advanced civilizations orbiting distant stars.

The scientific search for evidence of advanced alien societies began in 1960, when Drake aimed a 25-meter dish at two nearby stars. The previous year, the journal Nature had published an article called “Searching for Interstellar Communications,” which suggested that distant civilizations might transmit greetings at the same wavelength as the radio emission of hydrogen (the universe’s most common element). Drake found no such signals, nor has SETI found any evidence of interstellar salutations since. But it’s not giving up.

The Truth Is Out There

Before we can ask after advanced alien political economy, we must confront the more basic question: Is there anybody out there? SETI has been searching for over half a century. That may seem like a long time, but there are, as Sagan underscored, “billions and billions of stars.” How many of them should we expect to monitor before finding one that’s transmitting?

In an attempt to address, if not answer, the question, Drake proposed an equation in 1961 to summarize the concepts scientists think are relevant to any educated guess.

Here is how Sagan explains the Drake equation in the book Cosmos:

N*, the number of stars in the Milky Way Galaxy;
fp, the fraction of stars that have planetary systems;
ne, the number of planets in a given system that are ecologically suitable for life;
fl, the fraction of otherwise suitable planets on which life actually arises;
fi, the fraction of inhabited planets on which an intelligent form of life evolves;
fc, the fraction of planets inhabited by intelligent beings on which a communicative technical civilization develops;
and fL, the fraction of a planetary lifetime graced by a technical civilization.
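Multiplying the terms together yields N, the expected number of civilizations in the galaxy capable of communicating with us. Here is a minimal sketch in Python; the function mirrors the equation, but the sample inputs are illustrative placeholders, not Sagan’s own estimates:

    def drake(n_stars, f_p, n_e, f_l, f_i, f_c, f_L):
        # N = N* x fp x ne x fl x fi x fc x fL
        return n_stars * f_p * n_e * f_l * f_i * f_c * f_L

    # Hypothetical inputs: 400 billion stars and deliberately rough fractions
    print(drake(4e11, 0.5, 2, 0.3, 0.1, 0.1, 1e-7))  # ~120 civilizations

Tweak any single factor, especially the last, and N swings by orders of magnitude, which is why the equation organizes the debate rather than settles it.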

The End of the World as We Know It

Sagan expounds on all the terms in the equation, but it’s that last one that absorbs him: How long can an advanced civilization last before it destroys itself?

Perhaps civilizations arise repeatedly, inexorably, on innumerable planets in the Milky Way, but are generally unstable; so all but a tiny fraction are unable to survive their technology and succumb to greed and ignorance, pollution and nuclear war.

Sagan wrote Cosmos toward the end of the Cold War. He mentioned other threats — greed, ignorance, pollution — but the specter of mutual annihilation haunted him. When he imagined the end of an advanced society, he pictured something permanent.

“It is hardly out of the question,” he wrote, “that we might destroy ourselves tomorrow.” Perhaps, Sagan feared, the general pattern is for civilizations to “take billions of years of tortuous evolution to arise, and then snuff themselves out in an instant of unforgivable neglect.”

The Rise and Fall of Civilization

We cannot know if the civilizational survival rate on other planets is high or low, and so the final term in the Drake equation is guesswork, but some guesses are better than others.

“One of the great virtues of [Drake’s] equation,” Sagan wrote, “is that it involves subjects ranging from stellar and planetary astronomy to organic chemistry, evolutionary biology, history, politics and abnormal psychology.”

That’s quite an array of topics to inform an educated guess, but notice that he doesn’t mention economics.

Perhaps he thought politics covered it, but Sagan’s political focus was more on questions of war and peace than poverty and wealth. In particular, he considered the end of civilization to be an event from which it would take a planet billions of years to recover.

The history of our own species suggests that this view is too narrow. Yes, a nuclear war could wipe out humanity, but civilizations do destroy themselves in less permanent ways.

There have been two dark ages in Western history: the Mycenaean-Greek and the post-Roman. Both were marked by retrogression in technology, art, and literacy. Both saw a drop in overall population and in population density, as survivors left towns and cities for a more autarkic existence in the countryside. And both underwent a radical decline in foreign trade and the division of labor. Market societies deteriorated into disparate cultures of subsistence farming.

The ultimate causes of the Greek Dark Age are a mystery. As with the later fall of the Roman Empire, the Mycenaean demise was marked by “barbarian” invasions, but the hungry hordes weren’t new: successful invasions depend on weakened defenses and deteriorating infrastructure. What we know is that worsening poverty marked the fall, whether as cause, effect, or both.

The reasons for the fall of the Roman West are more evident, if still debated. Despite claims of lead poisoning, poor sanitation, too much religion, too little religion, and even, believe it or not, inadequate central planning, the empire’s decline resulted from bad economic policy.

To help us see this more clearly, Freeman writer Nicholas Davidson suggests in his magnificent 1987 article “The Ancient Suicide of the West” that we look to the signs of cultural and economic decline rather than to the changes, however drastic, in political leadership. While the Western empire did not fall to the barbarians until the fifth century AD, “The Roman economy [had] reached its peak toward the middle of the first century AD and thereafter began to decline.” As with the Mycenaean Greeks, the decay was evident in art and literature, science and technology. Civilization cannot advance in poverty. Wealth and civilization progress together.

How to Kill Progress

“The stagnation in all aspects of society,” Davidson writes, “was associated with a continuous extension of governmental functions. Social engineering was tried on the grand scale. The state relentlessly expanded into commerce, industry, and private life.”

As we look to our own future — or anticipate the politics of our alien brethren — we can draw on the experience of humanity’s past to help us appreciate the economics of progress and decline. Over and over, we see the same pattern: some group gains a temporary benefit from a world in flux. When further social and economic changes check those advantages, the old guard turn to the state to protect them from the dynamism of a healthy society. Adaptation is stymied. Nothing is allowed to evolve. The politically privileged — military and civilian, rich and poor — sacrifice their civilization in a doomed attempt to ward off change.

The Sustainable Society

Evolutionary science, economic theory, and cybernetics yield the same lessons: stability requires flexibility; complexity flourishes under spontaneous order; centralization leads to stagnation.

To those general lessons, economics adds insights specific to the context of scarcity: private property and voluntary exchange produce greater general wealth, longer time horizons, and ever more investment in the “luxuries” of scientific investigation, technological innovation, and a more active stewardship of the environment. Trade promotes peace, and a global division of labor unites the world’s cultures in mutual self-interest.

If, as Sagan contends, an advanced civilization would require political stability and sizable long-term investment in science and technology to survive an interstellar spacefaring phase, then we should expect any such civilization to embrace a planetwide system of free trade and free markets grounded in private property. For the civilization to last the centuries and millennia necessary to explore and colonize the stars, its governing institutions will have to be minimal and decentralized.

The aliens will, in short, embrace what Adam Smith called “the system of natural liberty.” Behind their transmissions, SETI should expect to find the invisible hand.

Scientists versus Freedom

When we do make contact, “the consequences for our own civilization will be stunning,” Sagan wrote. Humanity will gain “insights on alien science and technology, art, music, politics, ethics, philosophy and religion…. We will know what else is possible.”

What did Sagan himself believe possible? Had he survived to witness first contact, would he be surprised to learn of the capitalist political economy at the foundation of an advanced extraterrestrial civilization?

Neil deGrasse Tyson, who remade the Cosmos television series for the 21st century, recommends reading Adam Smith’s Wealth of Nations but only “to learn that capitalism is an economy of greed, a force of nature unto itself.”

We shouldn’t assume that Tyson represents Sagan’s economic views, but when Sagan did address questions of policy, he advocated a larger welfare state and greater government spending. When he talked about “us” and “our” responsibilities, he invariably meant governments, not private individuals.

Sagan wrote, “It may be that civilizations can be divided into two great categories: one in which the scientists are unable to convince nonscientists to authorize a search for extraplanetary intelligence … and another category in which the grand vision of contact with other civilizations is shared widely.”

Why would scientists have to persuade anyone else to authorize anything? Sagan could only imagine science funded by government. It was apparently beyond credibility that less widely shared visions could secure sufficient funding.

It’s a safe guess, then, that when he talks of civilizations that are “unable to survive their technology and succumb to greed,” Sagan is talking about the profit motive.

And yet, it is the profit motive that drives innovation, and it is the great wealth generated by profit seekers that allows later generations of innovators to pursue their visions with fewer financial inducements. Whether directly or indirectly, profits pay for progress.

Self-Interested Enlightenment

Why does it matter if astronomers misunderstand the market? Does SETI really need to appreciate the virtues of individual liberty to monitor the heavens for signs of intelligent life?

Scientists can and do excel in their fields without understanding how society works. But that doesn’t mean their ignorance of economics is harmless. The more admired they are as scientists — especially as popularizers of science — the more damage they can do when they speak authoritatively outside their fields. Their brilliance in one discipline can make them overconfident about their grasp of others. And increasingly, the questions facing the scientific community cross multiple specialties. It was the cross-disciplinary nature of Drake’s equation that Sagan saw as its great virtue.

The predictions of the astronomer looking for extraterrestrial socialists will be different from those of someone who expects the first signals of alien origin to come from a radically decentralized civilization — a society of private individuals who have discovered the sustainable harmony of self-interest and the general welfare.

After that first contact, after we’ve gained “insights on alien science and technology” and we get around to learning alien history, will we discover that their species has witnessed civilizations rise and fall? What was it that finally allowed them to break the cycle? How did they avoid stagnation, decline, and self-destruction?

How did they, as a culture, come to accept the economic way of thinking, embrace the philosophy of freedom, and develop a sustainable civilization capable of reaching out to us, the denizens of a less developed world?


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

The Gig Economy Makes Karl Marx’s Dreams Come True (and It’s All Capitalism’s Doing) by Max Borders

When Joe Average steps out of his car after completing his shift for Lyft, he does so on his own terms. Nobody tells him when to start. Nobody tells him when to stop. The siren song that is prime time pricing might have coaxed him off the couch, but ultimately it was his call. And with the rest of his day, he’s going to go fishing. You see, Joe loves to fish — even more than he loves making money. After dinner, he might take some time to criticize the second season of True Detective.

Would ole Karl Marx have been happy with this result?

In The German Ideology, Marx wrote,

For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.

Marx should be delighted — oh, except that it’s capitalism, not communism, that’s allowing Joe to be a fisherman and a critic on his own terms.

The sharing or “gig” economy is not only disrupting the way people live and work; it’s dividing the left considerably.

On the one hand, you have the nostalgic leftists who want Joe to work a nine-to-five job and skip the fishing. You know, like people did in the 1950s. As Freeman columnist Steve Horwitz writes, presidential candidate Hillary Clinton

longs for a time like the 1950s when workers had the structure of the corporate world and unions through which to lobby and negotiate for pay and benefits, rather than the so-called “gig” economy of so many modern freelance employees, such as Uber drivers. “This on-demand or so-called gig economy is creating exciting opportunities and unleashing innovation,” Clinton said, “but it’s also raising hard questions about workplace protection and what a good job will look like in the future.”

Joe already told us what a good job looks like. It’s one that lets him spend time fishing and criticizing.

More confusing (or confused, perhaps) is Paul Mason’s writing in the Guardian. He lauds “postcapitalism,” which has all the hallmarks of a society Clinton is worried about:

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages.

Bingo. The gig economy. But does it make sense to give capitalism a different name? I suppose one could. After all, Marx coined the term. But Marx’s definition of capitalism is a system based on private ownership of the means of production. Has that dynamic fundamentally changed?

Far from it. The sharing economy is simply decentralizing power by allowing ordinary people to use their own small-scale means of production. By solving coordination problems and lowering transaction costs, technology is augmenting capitalism.

When Joe drives for Lyft, for example, his car is still his car. And now more of his time is his, too. Capitalism, even as Marx defined it, hasn’t fundamentally changed. But the use of technology to awaken sleeping private capital is allowing the system to evolve — and rather nicely if you’re Joe Average, or one of thousands of other workers like him.

Now, I’m not saying that there is nothing interesting going on in the electronic commons. Ideas are being configured and reconfigured in the networked economy. Many of those ideas are being taken out of the intellectual-property regime, thanks to open sourcing, and this can be a good thing. There are fierce debates about whether intellectual property (claims to property in ideas and in nonscarce goods) is justifiable. But passing over those debates, more and more open-source technologies are coming online for exploitation by everyone.

Do open sourcing and the creative commons take us to postcapitalism?

I don’t know. But fundamentally, as long as the process is voluntary and carried out peacefully by a community of cooperators, who cares what you call it? Should we be upset that the guy who founded Lyft is getting rich from the tech? Some people are, because they see the accumulation of wealth as taboo. But Joe’s life is better than it would have been in the absence of Lyft. The company allows him to live more of the life he wants to live.

As long as Joe Average is happier, who cares what Hillary Clinton thinks?


Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Clinton’s Startup Tax Will Crush New Businesses by Dan Gelernter

Hillary Clinton has announced that she will, if elected, raise the capital-gains tax to a maximum that equals the highest income tax bracket. She hopes to promote long-term investments by penalizing short-term ones with a tax rate that gets lower the longer an investment is held, reaching the current 20% rate only after six years.

This, Ms. Clinton says, would allow a CEO to focus on the company’s true interests rather than just making the next quarter. It is, unfortunately, exactly the sort of plan you would expect from someone who has never started a company — and who doesn’t seem to know anyone who has.

The CEO of a startup is unlike the CEO of an established business. He is not the head of a chain of command: he is the spokesman or agent of a few colleagues, entrusted for the moment to represent them. The startup CEO has one primary job, which is raising money. It is the hardest thing a young company has to do — and it is an unending process.

Most germinal startups never raise any money at all. The ones that get seed funding are already breathing rarified air, and can afford perhaps a day of celebration before they start pursuing the next round.

The picture is especially tough for tech startups. A startup that builds software doesn’t have any machinery or physical supplies to auction off if the company fails. This means that banks won’t make the kind of secured business loans that small companies traditionally get.

As a result, tech startups are wholly reliant on a relatively small number of investors who are looking for something more exciting than the establishment choices and are willing to take a big gamble in the hope of a big, short-term payoff. Though Ms. Clinton’s proposal would only affect those in the top income bracket, she may be surprised to learn that those are the only people who can afford to make such investments.

Professional investors think in terms of risk: they balance the likelihood of a startup’s failure against the potential payoff of its success. Increasing the tax rate reduces the effective payoff, which increases risk. Investors can lower that risk by reducing the valuation at which they are willing to invest, which means they take a larger share of the company — a straightforward transfer of risk from investors to entrepreneurs.
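Here is a back-of-the-envelope sketch of that risk transfer in Python. Every figure is a hypothetical placeholder (the 39.6% stands in for the top income-tax bracket Ms. Clinton’s plan references):

    # An investor weighing a risky startup bet
    p_success = 0.2         # assumed chance the startup exits successfully
    exit_value = 100e6      # assumed sale price if it succeeds
    investment = 1e6        # assumed size of the check
    hurdle_multiple = 3.0   # assumed expected return the investor demands

    def share_needed(cap_gains_rate):
        # Expected after-tax value of owning the whole company at exit
        ev_whole = p_success * exit_value * (1 - cap_gains_rate)
        # Ownership share that makes the bet clear the investor's hurdle
        return hurdle_multiple * investment / ev_whole

    print(share_needed(0.200))  # ~0.19 at the current 20% long-term rate
    print(share_needed(0.396))  # ~0.25 at the top-bracket rate

Same company, same investor appetite; the extra six points of ownership come out of the founders’ stake.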

Ms. Clinton’s tax therefore will not be borne by wealthy investors: it comes out of the entrepreneur’s payday. The increased tax rate means a risk-equivalent decrease in the percentage of the company the entrepreneur gets to keep. And that’s just the best-case scenario.

The other option is that the tax doesn’t get paid at all, because the investor decides the increased risk isn’t worth it — the startup can’t attract funding and dies.

That sounds melodramatic, but it is no exaggeration. A startup company never has more offers than it needs; it never raises money with time to spare. Even a slight change in the risk-return balance — say, the 3.8% that Obamacare quietly laid on top of the current capital-gains rate — kills companies, as investors and entrepreneurs see the potential upside finally shaved past the tipping point.

A tech startup has short-term potential. That is a major part of the attraction to investors, and that makes Ms. Clinton’s proposal especially damaging. In the tech world, we all hope we’ll be the next Facebook or Twitter, but you can’t pitch that to an investor. A good tech startup takes a small, simple idea and implements it beautifully.

The most direct success scenario is an acquisition by a larger company. In the app world — and this is the upside to not having physical limitations on distribution — the timescale is remarkably accelerated. A recent benchmark example was Mailbox, purchased by Dropbox just two months after it launched.

Giving investors an incentive not to sell will hurt entrepreneurs yet again, postponing the day their sweat equity finally has tangible value, and encouraging decisions that make tax sense rather than business sense.

If Hillary Clinton really wants to help entrepreneurs, she should talk to some and find out what they actually want. A lower capital-gains tax — or no capital-gains tax — would be an excellent start.

Dan Gelernter

Dan Gelernter is CEO of the technology startup Dittach.

Who Is Doing More for Affordable Education: Politicians or Innovators? by Bryan Jinks

With a current outstanding student loan debt of $1.3 trillion, debt-free education is poised to be a major issue leading up to the 2016 presidential election.

Presidential candidate Bernie Sanders has come forth with his plan for tuition-free higher education.

Senator Elizabeth Warren supports debt-free education, which goes even further by guaranteeing that students don’t take on debt to pay other expenses incurred while receiving an education.

Democratic Party front-runner Hillary Clinton is expected to propose a plan to reduce student loan debt at some point. And don’t forget President Obama’s proposal to provide two years of community college to all students tuition-free.

While all of these plans would certainly increase access to higher education, they would also be expensive. President Obama’s relatively modest community college plan would cost $60 billion over the next decade. What makes this an even worse idea is that all of that taxpayer money wouldn’t solve the most important problems currently facing higher education.

Shifting the costs completely to taxpayers doesn’t actually reduce the costs. It also doesn’t increase the quality of education in a system that has high drop-out rates and where a lot of graduates end up in low-paying jobs that don’t use their degree. Among first-time college students who enrolled in a community college in the fall of 2008, fewer than 40% earned a credential from either a two-year or four-year institution within six years.

Whatever the other social or spiritual benefits of attending college are, they don’t justify wasting so much time and money without seeing much improvement in wages or job prospects.

Proponents of debt-free college argue that these programs are worth the cost because a more educated workforce will boost the economy. But these programs would push more marginal students into college without any regard for how prepared they are, how likely they are to graduate, or how interested they are in getting a degree. If even more of these students enter college, keeping the low completion rates from falling even further would be a challenge.

All of these plans would just make sure that everyone would have access to the mediocre product that higher education currently is. Just as the purpose of Obamacare was to make sure that every American had a health insurance card in their wallet, the purpose of debt-free education is to make sure that every American has a student ID card too — whether it means anything or not.

But there are changes coming in higher education that can actually solve some of these problems.

The Internet is making education much cheaper. While open online courses have existed for more than a decade, there are a growing number of places to find educational materials online. Udemy is an online marketplace that allows anyone to create their own course and sell it or give it away. Saylor Academy and University of the People both have online models that offer college credit with free tuition and relatively low examination fees.

Udacity offers nanodegrees that can be completed in 6-12 months. The online curriculum is made in partnership with technology companies to give students exactly the skills that hiring managers are looking for. And there are many more businesses and non-profits offering new ways to learn that are cheaper, faster, and more able to keep up with the ever-changing economy than traditional universities.

All of these innovations are happening in response to the rising costs and poor outcomes that have become typical of formal education. New educational models will keep developing that offer solutions policymakers can’t provide.

Some of these options are free; some aren’t. Each has its own curriculum, and some provide more tangible credentials than others. There isn’t one definitive answer as to how someone should go about receiving an education. But each of these innovations provides a small part of the answer to the current problems with higher education.

Change for the better is coming to higher education. Just don’t expect it to come from Washington.

Bryan Jinks

Bryan Jinks is a freelance writer based out of Cleveland, Ohio.

Government Ruins the Dishwasher (Again) by Jeffrey A. Tucker

The regulatory assault on the dishwasher dates back at least a decade. For the most part, industry has gone along, perhaps grudgingly but also with a confidence that dishwashers would survive. Surely government rules wouldn’t finally make them useless.

But the latest regulatory push by the Department of Energy might have finally gone too far. The DoE says that a load of dishes can’t use more than 3.1 gallons of water. This amounts to a further intensification of “green” policies that are really just strategies to wreck the consumer experience.

The agency estimated that this would “save” 240 billion gallons of water over three decades. It would reduce energy consumption by 12 percent. It would save consumers $2 billion in utility bills.

But as with all such estimates, these projections have three critical problems.

First, saving money and resources is not always an absolute blessing if you have to give up the service for which the resources are used. Giving up indoor plumbing would certainly save water, just as banning the light bulb would save electricity. The purpose of resources is to use them to make our lives better.

Second, the price system is a far better guide to rational resource use than bureaucratic diktat. If the supply of water or electricity contracts, prices go up and consumers can make their own choices about how to respond. This is true with one proviso: There has to be a functioning market. This is not always true with public utilities.

Third, the bureaucrats rarely consider the possibility that people will respond to rationing by using resources in a different way. A low-flow toilet causes people to flush two and three times, a low-flow showerhead prompts people to take longer showers, and so on, with the end result of even more resource use.

What does breaking the dishwasher accomplish? It drives us back to filling sinks or just running water over dishes for 10 minutes until they are all clean, resulting in vastly more water use.
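The arithmetic is stark even with conservative assumptions; the faucet flow rate below is a typical figure I am assuming, not one from the article:

    dishwasher_cap = 3.1    # gallons per load under the new DoE rule
    faucet_gpm = 2.0        # assumed flow of a typical kitchen faucet
    minutes_rinsing = 10    # from the scenario above

    hand_wash_gallons = faucet_gpm * minutes_rinsing   # 20 gallons
    print(hand_wash_gallons / dishwasher_cap)          # ~6.5x the machine

Drive people back to the sink, in other words, and the mandated “savings” can be undone several times over.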

The Association of Home Appliance Manufacturers, which has quietly gone along with this nonsense all these years, has finally said no.

“At some point, they’re trying to squeeze blood from a stone that just doesn’t have any blood left in it,” said Rob McAver, the lead lobbyist.

The Association demonstrated to the regulators that the new standards do not clean the dishes. They further pointed out that this can only lead to more hand washing. The DoE now says it is revisiting the new standards to find a better solution.

All of this is rather preposterous, since dishwashers are already performing at a far lower level than they did decades ago. Even when I was growing up, they were getting better, not worse. You could put dirty dishes in, even with stuck-on egg and noodles, and they would come out perfectly clean.

I started noticing the change about five years ago. It seemed to happen from one day to the next: the dishes started coming out with a gross-me-out film on the glasses. I thought it was my machine. So I bought a new one. The new one was even worse, and it broke within a year. Little by little, I started hand washing dishes first, just to make sure they are clean.

It turns out that this was happening all over the country. NPR actually discerned this trend and did a story about it. The actual source of the problem was not the machine or the user, but something that everyone had taken for granted for generations: the soap itself.

The issue here is phosphorus. The role of phosphorus in soap is critically important. It is not a cleaning agent itself but a natural chemical that unsticks the soap from fabrics and surfaces generally. You can easily see how this works by adding phosphorus to a sink full of suds. It attacks the soap and causes it to bundle up in tighter and heavier units, taking oil and dirt with it and pulling it down the drain. It is the thing that extracts the soap, making sure that it leaves surfaces.

Painters know that they absolutely must use phosphorus to prepare surfaces for painting. If they do not, they will be painting on a dirty, oily surface. This is why the only phosphorus you can now find at the hardware store is in the paint department (sold as trisodium phosphate). Otherwise, it is gone from all the detergents you use on clothes and dishes, which is a major reason why both fabrics and dishes are no longer as clean as they once were.

Why the war on phosphorus? It is also a fertilizer. When too much of it is dumped into rivers and lakes, algae growth takes over and kills off fish. The bulk of this runoff comes from large-scale industrial farms in specific locations around the country. Regulators, however, took on the easy target of domestic soaps, and manufacturers faced pressure to remove it from their products.

Now it is impossible to get laundry or dish soap with phosphorus as part of the mix. If you want clean, you have to buy trisodium phosphate in the paint department and add it to the detergent by hand.

Welcome to regulated America, where once fabulous consumer inventions like refrigerators, freezers, washing machines, and dishwashers have been reduced to a barely functioning state. The reasons are always the same: 1) phosphorus-free detergent, 2) a fetish with saving water, 3) weaker motors that use less electricity, 4) more tepid water due to low default settings on water heaters, and 5) reduced water pressure in general.

Put it all together and you have an array of products that no longer function in ways that make our lives better. There is an element of dystopia about this, especially given that these household appliances were first invented and widely deployed in postwar America. This was the country where women, in particular, first started to enjoy the “freedom from drudgery.” It was machines as much as ideology that began to enable women to cultivate professional lives outside the home.

No, we are not going to be forced back to washboards by the river anytime soon. But suddenly, the prospect of having to hand wash our dishes does indeed seem real. If the regulators really do get their way, functioning dishwashers could become like high-flow toilets: contraband to be snuck across borders and sold at high black-market prices.

It seems that the regulators can’t think of much to do these days besides ruining things we love.


Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World. Follow on Twitter and Like on Facebook.

Is Politics Obsolete? How People Outpace Politicians by Max Borders and Jeffrey A. Tucker

Hillary Clinton talks of cracking down on the gig economy. Donald Trump speaks of telling American corporations where they can and can’t do business abroad. Bernie Sanders says we have too many deodorant choices. They all speak about immigrants as if it were 1863.

What the heck are these people talking about?

More and more, that’s the response many people have to the current-day political speeches and rhetoric. It’s a hotly contested election, somewhat like 2008, but this time around, public engagement is low, reports Pew.

That’s no surprise, really. Whether it’s the leftists, the rightists, or everyone in between, all of these politicians seem to be blathering about a world gone by — one that has little to do with the 21st century. If they’re not tapping into people’s baser instincts of fear and nativism, they’re dusting off 20th-century talking points about creating “good jobs.”

Maybe there was a time when the political culture seemed to keep up with the pace of innovation. If so, those times are long gone. The rhetoric of electoral politics is exposing the great rift in civic life.

The tools we use every day, the technologies we love, the way we engage each other, the means by which our lives are improving are a consequence of innovation, markets, community, and globalization — that is, of the interactions of free people. Not of politics. And not of the systems politics creates.

The political election is a tired old ritual in which we send our hopes and dreams away to distant capitals. Why do we outsource them to politicians, lobbyists, and bureaucrats: people who are trapped in a system that rewards the worst in people? What’s left of governance is logrolling, spectacle, and unwanted interference in the lives of everyone else.

Politicians seem more concerned with putting the genie of innovation and entrepreneurship back in the bottle than doing anything meaningful. After the election, we try our best to ignore them and get on with life.

In 2012, US voters reelected Barack Obama, and now we’re gearing up to elect someone else. Candidates will talk about their visions and their wonderful plans for the country. But in the last three years, virtually none of the incredible, beautiful upheaval we’ve seen has had anything to do with the presidency or with any politician’s plans.

In fact, when you think about what government has done for us in recent years, only one new program comes to mind: Obamacare. Opinions vary on whether that program has been deeply disappointing or an unmitigated disaster.

Now, take a step back and observe the evolution of commercial society and how it is bringing us unprecedented bounty. The digital sector of emergent, market-generated, people-driven, technology-fueled innovation is fulfilling human aspirations and spreading useful services to people in all walks of life. National borders seem ever more arbitrary. Surprises await us around every corner. Our political systems can claim credit for none of it.

And yet, we are once again being asked to turn to politicians to drive progress.

Consider how much our lives and technologies have changed since the last presidential election. Smartphone ownership has gone from 300 million to 2 billion, meaning that most of the population of the developed world — and large parts of the rest — now have access to a wireless supercomputer in their pockets. As a result, we are more in touch than ever.

There are now dozens of ways for anyone to keep in contact with anyone else through text messaging and video, and most of the services are free. Transportation in cities has fundamentally changed due to ridesharing and app-based systems that are outcompeting municipal taxis. Traditional travel lodging has been disrupted through mobile applications that turn every empty room into a hotel, and finding permanent lodging is easier than ever. You can find the ratings for any service or establishment instantly with a click or a tap, long before you purchase. You can feasibly shop for and buy a house without ever having stepped inside of it.

Cryptocurrency is becoming a viable alternative to national monies, and payment systems on distributed networks are being customized for peer-to-peer exchanges of property titles.

The mass distribution and availability of mobile applications with maps means that you are never lost, and, moreover, that you can be intensely aware of everything around you, wherever you are or wherever you are planning to be. Extended families that are spread out over large geographic regions can stay constantly in touch, chatting and playing games.

The way we help our neighbors and communities is improving. We can contribute to charitable causes with just a click. We are closer to our neighbors and their needs — whether it’s a missing cat, a call for a handyman, or childcare for Saturday night. We can be on the lookout after a break-in and share video of the perpetrators instantly.

The way we consume music has fundamentally changed. We once bought CDs. Then we downloaded particular tracks and albums. With Internet everywhere, we now stream a seemingly endless variety of genres. The switch between classical and indie rock requires only a touch. And it’s not just new music we can access, but vast archives and recreations of music dating to antiquity. Instantly.

Software packages that once cost thousands are now low-cost downloadable apps. Many of us live in the cloud now, so that no one’s life is ruined by a computer crash. Lost hardware can be found with built-in tracers — even stealing computers is harder than ever.

Where we work no longer matters as much. 4G LTE means a powerful Internet connection wherever you are, and WiFi on airlines means staying in touch even while above the clouds. Online document signing means total portability and the end of the physical world for most business transactions. You can share almost anything — whether grocery lists or whole writing projects — with anyone and work in real time. More people than ever work from home because they can.

News is now crowdsourced through Twitter and Facebook — or through mostly silly sites like BuzzFeed. There are thousands of competitors, so that we can know what we want to know wherever we are. Once there was only “national news”; now a news event has to be pretty epic to qualify, and much of the news that we are interested in never even makes old-line newspapers.

Edward Snowden revealed ubiquitous surveillance, escaped prosecution, and now, thanks to technology, has been on a worldwide speaking tour, becoming the globe’s most famous public intellectual. This is despite his having been censored and effectively exiled by the world’s biggest and most powerful state. He has a great story to tell, and that story is more powerful than any of the big shots who want him to shut up.

Pot has been effectively legalized in many American cities, and the temperature on the war against it has dropped dramatically. When dispensaries are raided, the news flies all over the Internet within minutes, creating outrage and bringing the heat down on the one-time masters of the universe. There is now a political risk to participating in the war on pot — something unthinkable even 10 years ago. And as police continue to abuse their power, citizens are waiting with cameras.

Oil prices have collapsed, revealing the fallacy of peak oil. This happened despite pressure in the opposite direction from every special interest, from environmentalists to the oil industry itself. The reason was again technological. We discovered better and cheaper ways of drilling, and, in so doing, exposed vastly more resources than anyone thought accessible.

At the very time when oil and gas seemed untouchable, we suddenly saw electric cars becoming viable options. This was not due to government mandates — regulators tried those for years — but due to some serious innovation on the part of one remarkable company. It’s not even the subsidies, such as they are, that are making the difference; it’s the fine-tuning of the machine itself. Tesla even took it a step further and released its patents into the commons, allowing innovation to spread at a market-based pace.

We are now printing houses in one day, vaping instead of smoking, legally purchasing pharmaceuticals abroad, using drones to deliver consumer products, and enjoying one-day delivery of just about everything.

In the last four years, the ebook became a mass consumer item, outselling the physical book and readable on devices within the budget of just about everyone. And despite attempts to keep books offline, just about anything is now available for download, putting all the world’s great literature, in all major languages, at our fingertips.

And speaking of languages, we now have instant access to translation programs that allow us to email and even text with anyone in a way he or she can understand regardless of language. It’s an awesome thing to consider that this final barrier to universal harmony, once seen as insuperable, is in the process of melting away.

These are all ways in which the world has been improved through markets, creativity, and free association. And yet, here we go again, playing “let’s pretend” and electing leaders under the old-fashioned presumption that it is politics that improves the world and drives history forward.

Look around: progress is everywhere. And it is not because we are electing the “right people.” Progress occurs despite politics and politicians, not because of them.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World. Follow on Twitter and Like on Facebook.

Paul Krugman Is Clueless about Bitcoin by Max Borders

In this video clip, Paul Krugman demonstrates once again that prizes don’t make you an expert on everything. Indeed, his poor prognostications happen so frequently that one wonders if Krugman is an expert on anything. I don’t say that to be unpleasant. If you’re going on TV and enjoying a lavish lifestyle by pretending to know what you’re talking about, shouldn’t you be held to a higher standard?

Let’s pass over for a moment how woefully wrong Krugman was about the Internet. What about the internet of money?

Krugman first says: “At this point bitcoin is not looking too good.”

It is true that investment often follows the Gartner hype cycle. So bitcoin has indeed fallen from great heights and is probably just now making its ascent out of the “trough of disillusionment.”

But so what? There is nothing inherently wrong with bitcoin. In fact, some very savvy, patient people are building an unbelievable set of technologies within and around the blockchain. And if you believe Gartner, most really interesting tech goes through this cycle.

Let’s look back at the Internet. Recall how grim the charts of the dotcom bubble and its subsequent bust looked circa 2002.

Do we conclude that, because in 2002 the Internet wasn’t “looking so good,” TCP/IP was not viable? That would have been a very short-sighted thing to say, particularly about a system that is a robust “dumb network” like the Internet.

Bitcoin is also a dumb network. But don’t let the “dumb” part fool you, says bitcoin expert Andreas Antonopoulos. “So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge.”

Then Krugman goes on to ask, “Why does a piece of paper with a dead president on it have value?” Answering his own question, he says, “Because other people think it has value.”

And this is not untrue. But the problem with this line of thinking is — subjective value notwithstanding — the value of money is also contingent. You might say the value of fiat money is too contingent — especially upon political whims, upon the limited knowledge of the folks at the Federal Reserve, and upon the fact that its unit of account is no longer anything scarce, such as gold.

By contrast, bitcoin has a standard of scarcity programmed into it. So bitcoin is in limited supply, thanks to a sophisticated algorithm.

In a fully decentralized monetary system, there is no central authority that regulates the monetary base. Instead, currency is created by the nodes of a peer-to-peer network. The bitcoin generation algorithm defines, in advance, how currency can be created and at what rate. Any currency that is generated by a malicious user that does not follow the rules will be rejected by the network and thus is worthless. (To learn more about this algorithm, visit “Currency with a Finite Supply.”)
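To make that concrete, here is a minimal sketch in Python of the published issuance schedule: a 50-bitcoin block subsidy that halves every 210,000 blocks. The famous cap of roughly 21 million coins falls straight out of the arithmetic. This is an illustration of the schedule, not the consensus software itself.

    # A sketch of Bitcoin's issuance schedule using its published parameters:
    # the block subsidy starts at 50 BTC and halves every 210,000 blocks,
    # which bounds the total supply just below 21 million coins.

    COIN = 100_000_000          # satoshis per bitcoin
    HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

    def block_subsidy(height: int) -> int:
        """Subsidy in satoshis for a block at the given height."""
        halvings = height // HALVING_INTERVAL
        if halvings >= 64:      # the subsidy has long since shifted to zero
            return 0
        return (50 * COIN) >> halvings

    # Sum one block's subsidy per halving era, times the era's length in blocks.
    total = sum(block_subsidy(era * HALVING_INTERVAL) * HALVING_INTERVAL
                for era in range(64))
    print(total / COIN)  # about 20,999,999.98 coins: the hard cap

Any block that claims more than this schedule allows is simply rejected by the other nodes, which is exactly the point of the passage quoted above.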

Perhaps you don’t trust this algorithm. Certainly Paul Krugman does not. That’s okay, because digital currencies compete, so you can find one you do trust. One cryptocurrency is backed by gold and, funnily enough, is called “the Hayek,” after the Nobel laureate who wrote about competing private currencies.

Now, what shall we make of the magic of the dollar? Krugman says it is “the fact that you can use it to pay taxes.” That’s sort of like saying that the Internet works because of eFile. Let’s just assume Krugman was kidding.

But Krugman thinks, without irony, that bitcoin “levitates.” That is to say, he’s okay with the idea that the dollar has value because other people value it, but he’s not okay with the idea that bitcoin has value because other people value it, which is a rather curious thing to say in the same two-minute stretch. He goes on to argue that bitcoin is built on libertarian ideology, and that it doesn’t do anything that digitizing the dollar hasn’t done.

And that’s when we realize that Krugman doesn’t have any earthly clue about bitcoin.

But Freeman columnist Andreas Antonopoulos does:

Open-source currencies have another layer that multiplies these underlying effects: the currency itself. Not only is the investment in infrastructure and innovation shared by all, but the shared benefit may also manifest in increased value for the common currency.

Currency is the quintessential shared good, because its value correlates strongly to the economic activity that it enables. In simple terms, a currency is valuable because many people use it, and the more who use it, the more valuable it becomes.

Unlike national currencies, which are generally restricted to use within a country’s borders, digital currencies like bitcoin are global and can therefore be readily adopted and used by almost any user who is part of the networked global society.

What Krugman also fails to appreciate is that bitcoin and the bitcoin network are disintermediated. That’s a fancy way of saying it’s direct and peer-to-peer. This elimination of the mediating institutions — banks, governments, and credit card companies — means bitcoin transactions are far, far cheaper. But it also means these institutions could be far less powerful over time. And that’s precisely why it’s being adopted most quickly by the world’s poorest people and in countries with hyperinflation.

Hey, look, I understand. In many ways, Krugman is a twentieth-century mind. Keynesian. Unhealthy obsession with aggregates and dirigisme. He believes in big central solutions to problems that robust, decentralized systems are far better equipped to tackle. And he’s not terribly plugged into tech innovation. In fact, here’s that well-played Internet quote in case you forgot:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law” — which states that the number of potential connections in a network is proportional to the square of the number of participants — becomes apparent: most people have nothing to say to each other!

By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.
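As an aside, the formula Krugman cites is simple arithmetic, and a quick sketch in Python shows what he was waving away: potential connections grow quadratically, far faster than headcount.

    # Metcalfe's law as quoted: potential pairwise connections among
    # n participants number n*(n-1)/2, which grows on the order of n squared.

    def potential_connections(n: int) -> int:
        return n * (n - 1) // 2

    for n in (10, 1_000, 1_000_000):
        print(f"{n:>9,} participants: {potential_connections(n):>15,} possible links")

    # 10 people yield 45 links; a million people yield about 500 billion.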

To grok the power of decentralization, you have to have a twenty-first-century mind.

Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Should We Fear the Era of Driverless Cars or Embrace the Coming Age of Autopilot? by Will Tippens

Driving kills more than 30,000 Americans every year. Wrecks cause billions of dollars in damages. The average commuter spends nearly 40 hours a year stuck in traffic and, over a lifetime, almost five years just driving.

But there is light at the end of the traffic-jammed tunnel: the driverless car. Thanks to millions of dollars in driverless technology investment by tech giants like Google and Tesla, the era of road rage, drunk driving, and wasted hours behind the wheel could be left in a cloud of dust within the next two decades.

Despite the immense potential of self-driving vehicles, commentators are already dourly warning that such automation will produce undesirable effects. As political blogger Scott Santens warns,

Driverless vehicles are coming, and they are coming fast…. As close as 2025 — that is in a mere 10 years — our advancing state of technology will begin disrupting our economy in ways we can’t even yet imagine. Human labor is increasingly unnecessary and even economically unviable compared to machine labor.

The problem, Santens says, is that there are “over 10 million American workers and their families whose incomes depend entirely or at least partially on the incomes of truck drivers.” These professional drivers will face unemployment within the next two decades due to self-driving vehicles.

Does this argument sound familiar?

These same objections have sprung up at every major stage of technological innovation since the Industrial Revolution, from the textile-working Luddites smashing mechanized looms in the 1810s to taxi drivers in 2015 smashing Uber cars.

Many assume that any initial job loss accompanying new technology harms the economy and further impoverishes the most vulnerable, whether fast food workers or truck drivers. It’s true that losing a job can be an individual hardship, but are these same pundits ready to denounce the creation of the light bulb as an economic scourge because it put the candle makers out of business?

Just as blacksmithing dwindled with the decline of the horse-drawn buggy, economic demand for certain jobs waxes and wanes. Jobs arise and continue to exist for the sole reason of satisfying consumer demands, and the consumer’s demands are continuously evolving. Once gas heating devices became available, most people decided that indoor fires were dirtier, costlier, and less effective at heating and cooking, so they switched. While the change temporarily disadvantaged those in the chimney-sweeping business, the added value of the gas stove vastly improved the quality of life for everyone, chimney sweeps included.

There were no auto mechanics before the automobile and no web designers before the Internet. It is impossible to predict all the new employment opportunities a technology will create beforehand. Countless jobs exist today that were unthinkable in 1995 — and 20 years from now, people will be employed in ways we cannot yet begin to imagine, with the driverless car as a key catalyst.

The historical perspective doesn’t assuage the naysayers. If some jobs can go extinct, couldn’t all jobs go extinct?

Yes, every job we now know could someday disappear — but so what? Specific jobs may come and go, but that doesn’t mean we will ever see a day when labor is no longer demanded.

Economist David Ricardo demonstrated in 1817 that each person has a comparative advantage due to different opportunity costs. Each person is useful, and no matter how unskilled he or she may be, there will always be something that each person has a special advantage in producing. When this diversity of ability and interest is coupled with the infinite creativity of freely acting individuals, new opportunities will always arise, no matter how far technology advances.
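A toy example makes the arithmetic plain. The numbers below are invented purely for illustration: even though one producer is faster at everything, each still has the lower opportunity cost in something.

    # Hypothetical hours needed to produce one unit of each good.
    # Ann holds an absolute advantage in both goods, yet Bob still has
    # the lower opportunity cost in cloth: Ricardo's comparative advantage.
    hours = {
        "Ann": {"bread": 1, "cloth": 2},
        "Bob": {"bread": 6, "cloth": 3},
    }

    for person, h in hours.items():
        oc_bread = h["bread"] / h["cloth"]  # cloth forgone per unit of bread
        oc_cloth = h["cloth"] / h["bread"]  # bread forgone per unit of cloth
        print(f"{person}: 1 bread costs {oc_bread:.2f} cloth; "
              f"1 cloth costs {oc_cloth:.2f} bread")

    # Ann: bread costs 0.50 cloth. Bob: bread costs 2.00 cloth.
    # So Ann specializes in bread, Bob in cloth, and both gain from trade.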

Neither jobs nor labor are ends in themselves — they are mere means to the goal of wealth production. This does not mean that every person is concerned only with getting rich, but as Henry Hazlitt wrote in Economics in One Lesson, real wealth consists in what is produced and consumed: the food we eat, the clothes we wear, the houses we live in. It is railways and roads and motor cars; ships and planes and factories; schools and churches and theaters; pianos, paintings and books.

In other words, wealth is the ability to fulfill subjective human desires, whether that means having fresh fruit at your local grocery or being able to easily get from point A to point B. Labor is simply a means to these ends. Technology, in turn, allows labor to become far more efficient, resulting in more wealth diffused throughout society.

Everyone knows that using a bulldozer to dig a ditch in an hour is preferable to having a whole team of workers spend all day digging it by hand. The “surplus” workers are now available to do something else in which they can produce more highly valued goods and services.  Over time, in an increasingly specialized economy, productivity rises and individuals are able to better serve one another through mutually beneficial exchanges in the market. This ongoing process of capital accumulation is the key to all meaningful prosperity and the reason all of humanity has seen an unprecedented rise in wealth, living standards, leisure, and health in the past two centuries.

Predictions about technology are always uncertain. Aldous Huxley warned in 1927 that jukeboxes would put live artists out of business. In the 1960s, Time magazine predicted that the computer would wreak economic chaos.

Today, on the cusp of one of the biggest innovations since the Internet, there is, predictably, similar opposition. But those who wring their hands at the prospect of the driverless car fail to see that its greatest potential lies not in reducing pollution and road deaths, nor in lowering fuel costs and insurance rates, but rather in its ability to liberate billions of hours of human potential that truckers, taxi drivers, and commuters now devote to focusing on the road.

No one can know exactly what the future will look like, but we know where we have been, and we know the principles of human flourishing that have guided us here.

If society is a car, trade is the engine — and technology is the gas. It drives itself. Enjoy the ride.

Will Tippens

Will Tippens is a recent law school graduate living in Memphis.

RELATED ARTICLES:

The Roads of the Future Are Made of Plastic

Apple co-founder: Robots to own people as their pets – English Pravda.RU

The Ghosts of Spying Past by Gary McGath

In the 1990s, the Clinton administration fought furiously against privacy and security in communication, and we’re still hurting from it today. Yet people in powerful positions are trying to commit the same mistakes all over again.

In the early days, the Internet was thoroughly insecure; its governmental and academic users trusted each other, and the occasional student prank couldn’t cause much damage. As it started becoming available to everyone in the early ‘90s, people saw the huge opportunities it offered for commerce.

But doing business safely requires data security: If unauthorized parties can grab credit card numbers or issue fake orders, nobody is safe. However, the Clinton administration considered communication security a threat to national security.

Attorney General Janet Reno said, “Without encryption safeguards, all Americans will be endangered.” She didn’t mean that we needed the safeguard of encryption, but that we had to be protected from encryption.

In a 1996 executive order, President Clinton stated:

I have determined that the export of encryption products described in this section could harm national security and foreign policy interests even where comparable products are or appear to be available from sources outside the United States, and that facts and questions concerning the foreign availability of such encryption products cannot be made subject to public disclosure or judicial review without revealing or implicating classified information that could harm United States national security and foreign policy interests.

The government prohibited the export of strongly secure encryption technology by calling it a “munition.” Putting code on the Internet makes it available around the world, so the restriction crippled secure communication. The Department of Justice investigated Phil Zimmermann for three years for making PGP, a free email encryption program, available.

The administration also tried to mandate government access to all strong encryption keys. In 1993 it proposed making the Clipper Chip, with a built-in “back door” for government spying, the standard for serious encryption. Any message it sent included a 128-bit field that would let government agencies (and hopefully no one else) decrypt it.
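Based on the public descriptions of the scheme, that 128-bit field, the Law Enforcement Access Field (LEAF), broke down as sketched below. The layout is the publicly reported one, not the classified specification, and the sketch is purely illustrative.

    # A sketch of the Clipper Chip's 128-bit LEAF as publicly described:
    # an 80-bit session key encrypted under the chip's escrowed unit key,
    # a 32-bit unit identifier, and a 16-bit checksum. Matt Blaze showed
    # in 1994 that the 16-bit checksum could be forged by brute force,
    # producing a valid-looking LEAF that escrow agents could not use.
    from dataclasses import dataclass

    @dataclass
    class LEAF:
        encrypted_session_key: bytes  # 80 bits (10 bytes)
        unit_id: bytes                # 32 bits (4 bytes)
        checksum: bytes               # 16 bits (2 bytes)

        def bit_length(self) -> int:
            return 8 * (len(self.encrypted_session_key)
                        + len(self.unit_id) + len(self.checksum))

    leaf = LEAF(bytes(10), bytes(4), bytes(2))
    assert leaf.bit_length() == 128  # the "128-bit field" described above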

But the algorithm for the Clipper was classified, making independent assessments impossible. However strong it was, it would have offered a single point to attack, with the opportunity to intercept virtually unlimited amounts of data as an incentive to find weaknesses. Security experts pointed out the risks inherent in the key recovery process.

By the end of the ‘90s, the government had apparently yielded to public pressure and common sense and lifted the worst of the restrictions. It didn’t give up, though — it just got sneakier.

Documents revealed by Edward Snowden show that the NSA embarked on a program to install back doors through secret collaboration with businesses. It sought, in its own words, to “insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices” and “shape the worldwide cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS.”

The NSA isn’t just a spy agency; it’s one of the leading centers of expertise in encryption, perhaps the best in the world. Businesses and other organizations trying to maximize their data security trust its technical recommendations — or at least they used to. If it can’t get the willing collaboration of tech companies, it can deceive them with broken standards.

Old software with government-required weaknesses from the nineties is still around, along with newer software that may have NSA-inspired weaknesses. There are still restrictions on the exporting of cryptography in many cases, depending on a complicated set of criteria related to the software’s purpose. Even harmless file identification software, used mostly by librarians, may have to carry a warning that it contains decryption code and might be subject to use restrictions.

With today’s vastly more powerful computers, encryption that was strong two decades ago can be easily broken. Some websites, especially ones outside the United States that were denied access to strong encryption, still use the methods they were stuck with then, and so do some old browsers.

To deal with this, many browsers support the old protocols when a site offers nothing stronger, and many sites fall back to the weak protocols if a browser is limited to them. Code breakers have found ways to make browsers think only weak security is available and force even the stronger sites to fall back on it. Some sites have disabled weak encryption, only to be forced to restore it because so many users have old browsers.
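The standard client-side defense is to refuse the weak options outright. Here is a minimal sketch using Python’s standard ssl module (Python 3.7 or later), which pins a floor under the protocol version so a downgrade has nothing to fall back to; the host name is just a placeholder.

    # A minimal sketch: pin a minimum TLS version so that an attacker in
    # the middle cannot talk the connection down to 1990s-era protocols
    # and export-grade (40- or 56-bit) ciphers.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0, TLS 1.1

    # "example.com" is a placeholder host for illustration.
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # If only weak protocols are on offer, the handshake fails
            # instead of silently falling back.
            print(tls.version(), tls.cipher())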

You’d think that by now people would understand that secure transactions are essential, but politicians in the US and other countries still want to weaken encryption so they can spy on people’s communications.

The FBI’s assistant director of counter-terrorism claims that strong encryption gives terrorists “a free zone by which to radicalize, plot, and plan.” NSA Director Michael S. Rogers has said, “I don’t want a back door. I want a front door.” UK Prime Minister Cameron says,

In extremis, it has been possible to read someone’s letter, to listen to someone’s call, to mobile communications. The question remains: are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not.

In 2015 over eighty civil society organizations, companies, and trade associations, including Apple, Microsoft, Google, and Adobe, sent a public letter to President Obama expressing concern about such actions. The letter states:

Strong encryption is the cornerstone of the modern information economy’s security. Encryption protects billions of people every day against countless threats — be they street criminals trying to steal our phones and laptops, computer criminals trying to defraud us, corporate spies trying to obtain our companies’ most valuable trade secrets, repressive governments trying to stifle dissent, or foreign intelligence agencies trying to compromise our and our allies’ most sensitive national security secrets.

In the United States, we have a tradition of free speech, but in many countries, even mild criticism of the authorities needs to travel in secret.

A country can pass laws to weaken its law-abiding citizens’ access to cryptography, but criminals and terrorists exchanging secret messages would have no reason to pay attention to them. They can keep using the strong encryption methods that are currently available and get new software from countries that don’t have those restrictions.

Governments would gain increased ability to spy on people who follow the law, and so would freelance data thieves, while competent criminals would still be able to communicate in secret. To crib David Cameron, we must not let that happen — again.

Gary McGath

Gary McGath is a freelance software engineer living in Nashua, New Hampshire.

RELATED ARTICLES:

Encryption stalemate: A never-ending saga?

Why Cameron’s encryption limitations will go nowhere

The dynamic Internet marketplace at work: Consumer demand is driving Google and Yahoo encryption efforts

Celebrate Independence With a Revolution Against the Surveillance State by Ryan Hagemann

In the decade before 1776, British courts began issuing “writs of assistance” for the general search and seizure of colonists’ documents. The intention was to permit British troops to inspect properties for smuggled goods, but these writs gave officials broad power to enter private homes to search for, and seize, anything and everything that might be considered contraband by the British Empire.

Such general warrants were among the many complaints the colonists levied against the crown and played no small part in the American Revolution.

This Independence Day, it would behoove us all, as Americans, to reflect on the motivations for the colonists’ revolt against Britain. In a 2013 piece at the Huffington Post, Radley Balko spoke on the core meaning of the Fourth of July:

Independence Day isn’t for celebrating the American government and whoever happens to be currently running it, but for celebrating the principles that make America unique.

And in fact, celebrating the principles that [animated] the American founding often means celebrating the figures who have defended those principles in spite of the government.

The list of modern Americans who have stood as stalwart guardians of the principles of liberty is regrettably short. More concerning, however, is what has happened in the years since 9/11, as fear and paranoia over terrorism gripped the American electorate and absconded with many of the basic liberties that the founding generation fought and died to uphold. America just isn’t what it used to be.

But the tides of unrestrained surveillance seem to be receding.

A few weeks ago, thanks to a vibrant and broad coalition of civil libertarians, grassroots organizations, and cross-aisle partners, America finally took the first step in reining in the secret surveillance state that Edward Snowden revealed to us almost two years ago to the day. The USA FREEDOM Act, for all its flaws, stands as the most significant piece of surveillance reform legislation since 1978 and signals Congress’s willingness to work on surveillance reform.

While there is much to do in preparing for upcoming battles over government surveillance, a look back at recent events can help shed light on how we as libertarians can best move forward.

Not surprisingly, the debate left some dissatisfied that the reforms did not go far enough, while others considered anything short of a full USA PATRIOT Act reauthorization to be an unacceptable compromise.

Filled with riotous rhetorical broadsides, the debate featured civil libertarians supporting reform against civil libertarians backing a complete, uncompromising end to the surveillance state, pitting Republican hawks against centrists and Democrats, and Sen. Rand Paul against pretty much everyone.

In a story of strange political bedfellows, Sen. Paul joined hawks such as Sen. John McCain and Sen. Richard Burr in voting against the USA FREEDOM Act. While Paul criticized components of the bill for not going far enough (all criticisms being perfectly fair and true), the political reality was such that this bill, however imperfect, was by far the best chance for reform in the near term.

As Cato’s Julian Sanchez noted prior to its passage: “While ‘Sunset the Patriot Act’ makes for an appealing slogan, the fact remains that the vast majority of the Patriot Act is permanent — and includes an array of overlapping authorities that will limit the effect of an expiration.”

In other words, the limitations of USA FREEDOM would actually be more effective than simply letting two or three provisions of the USA PATRIOT Act (temporarily) expire.

The heroes of this debate were a broad coalition of civil-society groups, technology firms, and nonprofits dedicated to moving the ball forward on reform, no matter how small the gain.

However, even as some are celebrating this small but important victory, there are troubled waters ahead for privacy advocates and civil libertarians. The upcoming Senate vote on the Cybersecurity and Information Sharing Act (CISA) is the next battle in the ongoing war against the surveillance apparatus. If passed, it would be one step forward, two steps back for the small victories privacy advocates have won over the past month.

I’ve written quite a bit on the issues that many civil libertarian organizations have with CISA, which is little more than a surveillance Trojan Horse containing a host of “information-sharing” provisions that would allow intelligence agencies to acquire information from private firms and use it to prosecute Americans for garden-variety crimes unrelated to cybersecurity, due process be damned.

A broad coalition of organizations has once more come together, this time to oppose CISA, to continue the battle against expanding the surveillance state.

In public policy, the Overton window refers to the spectrum of policy prescriptions and ideas that the public views as tolerable: the political viability of any idea depends not on the personal preferences of politicians, but on whether it falls within the range of publicly acceptable options.

That is why a willingness to compromise is so vital in public-policy discussions. Marginal reforms should be seen as victories in the slow but consistent effort to rein in the excesses of our Orwellian security order.

USA FREEDOM is far from ideal, and the expiration of provisions of the PATRIOT Act, such as Section 215, will not stop government surveillance in its tracks. The government can still use National Security Letters (NSL), and Section 702 of the FISA Amendments Act can still be creatively interpreted by the intelligence community to justify continued mass surveillance, to say nothing of Executive Order 12333, which covers surveillance conducted outside of the United States.

Nonetheless, the new law is an important first step towards tearing down the most onerous provisions of the PATRIOT Act in a piecemeal fashion. This may seem a daunting and less-than-ideal approach for many libertarians, but the alternative is merely symbolic gesticulation.

So where do we go from here?

Libertarians need to start working with nontraditional allies to support, on an issue-by-issue basis, real, practical reforms to the surveillance state. If we do not, we cannot hope to be effective and valuable partners to those individuals and organizations working tirelessly in support of the same values and freedoms that we all hold dear.

We must also recognize that there are limitations to compromise, and we should never forsake our core principles in favor of political expediency. But, on the margins, we can make significant contributions to civil liberties, especially in the ongoing surveillance reform debate. Recognizing the reality of what is achievable in the current political landscape is necessary for identifying and taking advantage of the available opportunities for restoring liberty.

We have a choice in the upcoming surveillance-reform fights: We can be positive contributors to a legacy of liberty for future generations, or we can continue to fancy ourselves armchair philosophers, ignoring public-policy realities and taking comfort in the echo chamber that never challenges our worldview.

Given political realities, marginal reforms constitute the fastest path forward. The American people are owed their civil liberties; hence, we must fight to move, however incrementally, towards a freer, more civil society.


Ryan Hagemann

Ryan Hagemann is a civil liberties policy analyst at the Niskanen Center.

RELATED ARTICLE: Cyber Security: Where are we now and where are we headed?