Tag Archive for: Competition

California Government Puts Uber on Blocks by Jeffrey A. Tucker

The California Labor Commission, with its expansive power to categorize and codify what it is that workers do, has dealt a terrible blow to Uber, the disruptive ride-sharing service. In one administrative edict, it has managed to do what hundreds of local governments haven’t.

Every rapacious municipal taxi monopoly in the state has to be celebrating today. The ruling also provides a model for how these companies will be treated at the federal level. This could be a crushing blow. It’s not only the fate of Uber that is at stake: the entire peer-to-peer economy could be damaged by these administrative edicts.

The change in how the income of Uber drivers is treated by the law seems innocuous. Instead of being regarded as “independent contractors,” they are now to be regarded as “employees.”

Why does it matter? You find out only deep into the New York Times story on the issue. This “could change Uber’s cost structure, requiring it to offer health insurance and other benefits, as well as paying salaries.”

That’s just the start of it. Suddenly, Uber drivers will be subject to a huge range of federal tax laws that involve withholding, maximum working hours, and the entire labor code at all levels as it affects the market for employees. Oh, and Obamacare.

This is a devastating turn for the company and those who drive for it.

Just ask the drivers:

Indeed, there seems to be no justification for calling Uber drivers employees. I can recall being picked up at an airport once. Uber was not allowed to serve that airport. I asked the man if he worked for Uber. He said he used to but not anymore.

“When did you quit?”

“Just now,” he said. Wink, wink. He was driving for himself on my trip.

“When do you think you will work for Uber again?”

“After I drop you off.”

That’s exactly the kind of independence that Uber drivers value. They don’t have to answer any particular call that comes in. They set their own hours. They drive their own cars. When an airport bans Uber, they simply redefine themselves.

They can do this because they are their own boss; Uber only cuts them off if they don’t answer a call on their mobile apps for 180 days. But it is precisely that rule that led the commission to call them “employees.”

That’s a pretty thin basis on which to call someone an employee. And it’s also solid proof that the point of this decision is not to clarify some labor designation but rather to shore up the old monopolies that want to continue to rip off consumers with high prices and poor service. No surprise, government here is using its power to serve the ruling class and established interests.

This is exactly the problem with government regulations that purport to define and codify every job. Such regulations tend to restrict the types and speed of innovation that can occur in enterprises.

The app economy and peer-to-peer networks are huge growth areas precisely because they have so far managed to evade being codified, controlled, and shoehorned into the old stultifying rules.

If everyone earning a piecemeal stream of income is called an employee — and regulated by relevant tax, workplace, and labor laws — many of these companies immediately become unviable.

There will be no more on-demand hair stylists, plumbers, tennis coaches, and piano teachers. The fate of a vast number of companies is at stake. The future is at stake.

For now, Uber is saying that this decision pertains to this one employee only. I hope that this claim is sustainable. If it is not, the regulators will use this decision to inflict a terrible blow on the brightest and fastest growing sector of American economic life.


Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

AMC’s “Halt and Catch Fire” Is Capitalism’s Finest Hour by Keith Farrell

AMC’s Halt and Catch Fire is a brilliant achievement. The show is a vibrant look at the emerging personal computer industry in the early 1980s. But more than that, the show is about capitalism, creative destruction, and innovation.

While we all know the PC industry changed the world, the visionaries and creators who brought us into the information age faced uncertainty over what their efforts would yield. They risked everything to build new machines and to create shaky start-ups. Often they failed and lost all they had.

HCF has four main characters: Joe, a visionary and salesman; Cameron, an eccentric programming geek; Gordon, a misunderstood engineering genius; and Gordon’s wife, Donna, a brilliant but unappreciated housewife and engineer.

The show pits programmers, hardware engineers, investors, big businesses, corporate lawyers, venture capitalists, and competing start-ups against each other and, at times, shows them having to cooperate to overcome mutual problems. The result is innovation.

Lee Pace gives an award-worthy performance as Joe MacMillan. The son of a never-present IBM tycoon and a negligent, drug addicted mother, Joe struggles with a host of mental and emotional problems. He’s a man with a brilliant mind and an amazing vision — but he has no computer knowledge or capabilities.

The series begins with his leaving a sales job at IBM in the hope of hijacking Cardiff Electric, a small Texas-based computer company, and launching it into the personal computing game.

As part of his scheme, he gets a low-level job at Cardiff where he recruits Gordon Clark, played by the equally talented Scoot McNairy. Enamored with Gordon’s prior writings on the potential for widespread personal computer use, Joe pleads with Gordon to reverse engineer an IBM-PC with him. The plot delves into the ethical ambiguities of intellectual property law as the two spend days reverse engineering the IBM BIOS.

While the show is fiction, it is inspired in part by the real-life events of Rod Canion, co-founder of Compaq. His book, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing serves as a basis for many of the events in the show’s first season.

In 1981, when Canion and his cohorts set out to make a portable PC, the market was dominated by IBM. Because IBM had rushed their IBM-PC to market, the system was made up entirely of off-the-shelf components and other companies’ software.

As a result, it was possible to buy those same components and software and build what was known as an IBM “clone.” But these clones were only mostly compatible with IBM. While they could run DOS, they might or might not run other programs written for IBM-PCs.

Because IBM dominated the market, all the best software was being written for IBMs. Canion wanted to build a computer that was 100 percent IBM compatible but cheaper — and portable enough to move from desk to desk.

Canion said in an interview on the Internet History Podcast, “We didn’t want to copy their computer! We wanted to have access to the software that was written for their computer by other people.”

But in order to do that, he and his team had to reverse-engineer the IBM BIOS. They couldn’t just steal or copy the code because it was proprietary technology, but they could figure out what function the code executed and then write their own code to handle the same task.

Canion explains:

What our lawyers told us was that not only can you not use [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. … We had two software people. One guy read the code and generated the functional specifications.

So it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.

In Halt and Catch Fire, Joe uses this idea to push Cardiff into making their own PC by intentionally leaking to IBM that he and Gordon had indeed reverse engineered the BIOS. They recruit a young punk-rock programmer named Cameron Howe to write their own BIOS.

While Gordon, Cameron, and Joe all believe that they are the central piece of the plan, the truth is that they all need each other. They also need to get the bosses and investors at Cardiff on their side in order to succeed, which is hard to do after infuriating them. The show demonstrates that for an enterprise to succeed you need to have cooperation between people of varying skill sets and knowledge bases — and between capital and labor.

The series is an exploration of the chaos and creative destruction that goes into the process of innovation. The beginning of the first episode explains the show’s title:

HALT AND CATCH FIRE (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained.

The show takes this theme of racing for superiority to several levels: the characters, the industry, and finally the economy and the world as a whole.

As Gordon himself declares of the cut-throat environment in which computer innovation occurs, “It’s capitalism at its finest!” HCF depicts Randian heroes: businessmen, entrepreneurs, and creators fighting against all odds in a race to change the world.

Now into its second season, the show is exploring the beginnings of the internet, and Cameron is running her own start-up company, Mutiny. I could go on about the outstanding production quality, but the real novelty here is a show where capitalists, entrepreneurs, and titans of industry are regarded as heroic.

Halt and Catch Fire is a brilliant show, but it isn’t wildly popular. I fear it may soon be canceled, so be sure to check it out while it’s still around.


Keith Farrell

Keith Farrell is a freelance writer and political commentator.

Nevada Passes Universal School Choice by Max Borders

People are becoming more conscious about animal welfare. The livestock, they say, shouldn’t be confined to factory farms — five by five — in such horrible conditions. These beings should be given more freedom to roam and to develop in a more natural way. They’re treated as mere chattel for the assembly line. It’s inhumane to keep them like this, they say — day after day, often in deplorable conditions.

Unfortunately, only a minority extends this kind of consciousness to human children. But that minority is growing, apparently.

Nevada is changing everything. According to the NRO,

Nevada governor Brian Sandoval [recently] signed into law the nation’s first universal school-choice program. That in and of itself is groundbreaking: The state has created an option open to every single public-school student.

Even better, this option improves upon the traditional voucher model, coming in the form of an education savings account (ESA) that parents control and can use to fully customize their children’s education.

Yes, school choice has often advanced through the introduction of vouchers and charter schools — which remain some of the most important reforms for breaking up the government education monopoly.

But vouchers were, to quote researcher Matthew Ladner, “the rotary telephones of our movement — an awesome technology that did one amazing thing.” States such as Nevada (and Arizona, Florida, Mississippi, and Tennessee) have implemented the iPhone of choice programs. They “still do that one thing well, but they also do a lot of other things,” Ladner notes.

So what’s the deal? What do parents and kids actually get out of this?

As of next year, parents in Nevada can have 90 percent (100 percent for children with special needs and children from low-income families) of the funds that would have been spent on their child in their public school deposited into a restricted-use spending account. That amounts to between $5,100 and $5,700 annually, according to the Friedman Foundation for Educational Choice.
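A quick back-of-the-envelope check of these figures (a sketch only: the deposit amounts and the 90 percent share are from the article; the implied per-pupil spending is back-calculated, not stated in the source):

```python
# Implied per-pupil public-school spending behind Nevada's ESA deposits.
# Deposit figures ($5,100-$5,700) and the 90% share come from the article;
# dividing the deposit by the share recovers the underlying per-pupil amount.
deposit_low, deposit_high = 5100, 5700
share = 0.90  # standard ESA share of per-pupil funding

implied_low = deposit_low / share
implied_high = deposit_high / share
print(f"Implied per-pupil spending: ${implied_low:,.0f} to ${implied_high:,.0f}")
# Roughly $5,667 to $6,333 per student per year
```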

Those funds are deposited quarterly onto a debit card, which parents can use to pay for a variety of education-related services and products — things such as private-school tuition, online learning, special-education services and therapies, books, tutors, and dual-enrollment college courses.

It’s an à la carte education, and the menu of options will be as hearty as the supply-side response — which, as it is whenever markets replace monopolies, is likely to be robust.

This is big news. Not merely because it is the most ambitious school choice measure yet passed, but also because it represents a very real opportunity to demonstrate the power of competitive forces to unleash entrepreneurship and innovation in the service of children.

When we compare such a bold measure to the status quo, it’s pretty groundbreaking. So it’s probably not the time to quibble about the ideological purity of such a policy.

But we should seriously consider the concerns of those who advocate full privatization, as opposed to tax and voucher reform.

Here are three things to keep an eye on:

  1. Nevadans have to remain vigilant that this doesn’t become an entree for regulators and incumbent crony schools to jack up the prices and mute the very market forces that will liberate teachers and kids.
    In other words, you don’t want to see what happened to health care (and, to some extent, higher education) happen to private education, just as low-income students finally have a chance to escape government-run schools.
  2. Nevadans have to ensure that cost spirals don’t infect the system due to cross-subsidy. This is what happened to the university system.
  3. Nevadans have to capitalize on the wiggle room quickly, by fundamentally disrupting the education market in such a profound way that it wards off the specter of those who are waiting to seize it back from parents and children.
    This can have spillover effects into other states, too, due to innovation and copycat entrepreneurship. (It might also attract a lot of parents to the state.)

Such alterations to the status quo should be welcome news to those who understand that freedom is not some ideal sitting atop Mt. Utopia.

This is a weak joint and a leverage point to unleash creative, tech-propelled market forces in a space that has been dominated by politics and unions and stifling bureaucracy.

There will be battles ahead on this front. But Nevada’s change is certainly cause for cautious celebration.


Max Borders

Max Borders is the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings and genuine benefits to policyholders. Such plans would encourage thrift and consumer wisdom in health care planning, while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade associated health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Los Angeles Pummels the Poor: A $15 an hour wage floor is a cruel and stupid policy by JEFFREY A. TUCKER

Does anyone on the Los Angeles City Council have a clue about what they have just done? It really is unclear whether reality matters in this legislative body. Rarely have we seen such a jaw-dropping display of economic fallacy enacted into law.

The law under consideration here is a new wage floor of $15, phased in over five years. Why phased in? Why not do it now? Why not $30 or $150? Perhaps the implied reticence here illustrates just a bit of caution. Somewhere in the recesses of the councilors’ minds, they might have a lurking sense that there will be a price to pay for this.

Such doubt is wholly justified. Recall that the minimum wage was initially conceived as a method to exclude undesirables from the workforce. The hope, back in the time when eugenics was the rage, was that a wage floor would cause the “unemployable” to stop reproducing and die out in one generation.

Racism drove the policy, but it was hardly limited to that. The exterminationist ambition applied to anyone deemed unworthy of remunerative work.

“We have not reached the stage where we can proceed to chloroform them once and for all,” lamented the progressive economist Frank Taussig in his 1911 book Principles of Economics. “What are the possibilities of employing at the prescribed wages all the healthy able-bodied who apply? The persons affected by such legislation would be those in the lowest economic and social group.”

Professor Taussig spoke for a generation of ruling-class intellectuals that had egregiously immoral visions of how to use government policy. But for all their evil intentions, at least they understood the basic economics of what they were doing. They knew that a wage floor excludes marginal workers, effectively dooming them to poverty — that’s precisely why they favored them.

Today, our situation seems reversed: an abundance of good intentions and a dearth of basic economic literacy. The mayor of LA, Eric Garcetti, was elated at the decision: “We’re leading the country; we’re not going to wait for Washington to lift Americans out of poverty.”

Leading the country, maybe, but where is another question. This is a policy that will, over time, lock millions out of the workforce and force many businesses to cut their payrolls. Machines to replace workers will come at a premium. The remaining workers will be expected to become much more productive. Potential new businesses will face a higher bar than ever. Many enterprises will close or move.

As for the existing unemployed, they can forget it. Seriously. In fact, it is rather interesting that in all the hoopla about this change, there has not been one word about the existing unemployed (officially, 7.5% of the city’s workforce). It’s as if everyone intuitively knows the truth here: this law will not help them at all, at least not if they want to work in the legal economy.

The underground economy, which is already massive in Los Angeles, will grow larger. New informal enterprises will pop up everywhere, doing a cash-only business. The long, brawny arm of the state will not be powerful enough to stop it. Sneaking around and hiding from the law is already a way of life for millions. Look for this tendency to become the dominant way of work for millions more.

All of this will happen, and yet the proponents of the minimum wage will still be in denial, for their commitment to the belief that laws can make wealth is doctrinal and essentially unfalsifiable.

As for those who know better, business owners all over the city pleaded with the Council not to do this. But their pleas fell on deaf ears. The Council had already been bought and paid for by the labor unions and interests that represent the already employed in Los Angeles. Such union rolls do not include the poor, the unemployed, or even many of the 50% of workers in the city who earn less than $15 an hour. They represent the working-class bourgeoisie: people rich enough to devote themselves to politics but who do not actually own or run businesses.

Will such unions be helped by this law? Perhaps, a bit — but at whose expense? Those who work outside union protection.

This is a revealing insight into why unions have been so passionate about pushing for the minimum wage at all levels. Here is the truth you won’t read in the papers: a higher wage floor helps cartelize the labor market in their favor.

You can understand this by reflecting on your own employment. Let’s say that you earn $50,000 for a task that could possibly be done by others for $25,000, and those people are submitting resumes. This is your situation, and it potentially applies to a dozen people in your workplace.

Let’s say you have the opportunity to enact a new policy for the firm: no one can be hired for less than $50,000 a year. Would this policy be good for you? In a perverse way, it would. Suddenly, nobody else, no matter how deserving, could underbid you or threaten your job. It’s a cruel way to go about padding your wallet, but it might work for a time.

Now imagine pushing this policy out to an entire city or an entire country. This would create an economic structure that (however temporarily) serves the interests of the politically connected at the expense of everyone else.

It certainly would not create wealth. It would not help the poor as a whole. And it would do nothing to create a dynamic and competitive marketplace. It would institutionalize stasis and cause innovation to stall and die.

The terrible effects are many and cascading, and much of the damage will be unseen in the form of business not formed, laborers not hired, efficiencies not realized. This is what the government of Los Angeles has done. It is a self-inflicted wound, performed in the name of health and well-being.

The City Council is cheering. So are the unions. So are the ghosts of the eugenicists of the past who first fantasized about a labor force populated only by the kinds of people they approved.

As for everyone else, they will face a tougher road than ever.


Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

Paul Krugman: Three Wrongs Don’t Make a Right by ROBERT P. MURPHY

One of the running themes throughout Paul Krugman’s public commentary since 2009 is that his Keynesian model — specifically, the old IS-LM framework — has done “spectacularly well” in predicting the major trends in the economy. Krugman actually claimed at one point that he and his allies had been “right about everything.” In contrast, Krugman claims, his opponents have been “wrong about everything.”

As I’ll show, Krugman’s macro predictions have been wrong in three key areas. So, by his own criterion of academic truth, Krugman’s framework has been a failure, and he should consider it a shame that people still seek out his opinion.

Modeling interest rates: the zero lower bound

Krugman’s entire case for fiscal stimulus rests on the premise that central banks can get stuck in a “liquidity trap” when interest rates hit the “zero lower bound” (ZLB). As long as nominal interest rates are positive, Krugman argued, the central bank could always stimulate more spending by loosening monetary policy and cutting rates further. These actions would boost aggregate demand and help restore full employment. In such a situation, there was no case for Keynesian deficit spending as a means to create jobs.

However, Krugman said that this conventional monetary policy lost traction early in the Great Recession once nominal short-term rates hit (basically) 0 percent. At that point, central banks couldn’t stimulate demand through open-market operations, and thus the government had to step in with a large fiscal stimulus in the form of huge budget deficits.

As is par for the course, Krugman didn’t express his views in a tone of civility or with humility. No, Krugman wrote things like this in response to Gary Becker:

Urp. Gack. Glug. If even Nobel laureates misunderstand the issue this badly, what hope is there for the general public? It’s not about the size of the multiplier; it’s about the zero lower bound….

And the reason we’re all turning to fiscal policy is that the standard rule, which is that monetary policy plus automatic stabilizers should do the work of smoothing the business cycle, can’t be applied when we’re hard up against the zero lower bound.

I really don’t know why this is so hard to understand. (emphasis added)

But then, in 2015, things changed: various bonds in Europe began exhibiting negative nominal yields. Here’s how liberal writer Matt Yglesias — no right-wing ideologue — described this development in late February:

Indeed, the interest rate situation in Europe is so strange that until quite recently, it was thought to be entirely impossible. There was a lot of economic theory built around the problem of the Zero Lower Bound — the impossibility of sustained negative interest rates…. Paul Krugman wrote a lot of columns about it. One of them said “the zero lower bound isn’t a theory, it’s a fact, and it’s a fact that we’ve been facing for five years now.”

And yet it seems the impossible has happened. (emphasis added)

Now this is quite astonishing, the macroeconomic analog of physicists accelerating particles beyond the speed of light. If it turns out that the central banks of the world had more “ammunition” in terms of conventional monetary policy, then even on its own terms, the case for Keynesian fiscal stimulus becomes weaker.

So what happened with this revelation? Once he realized he had been wrong to declare so confidently that 0 percent was a lower bound on rates, did Krugman come out and profusely apologize for putting so much of his efforts into pushing fiscal stimulus rather than further rate cuts, since the former were a harder sell politically?

Of course not. This is how Krugman first dealt with the subject in early March when it became apparent that the “ZLB” was a misnomer:

We now know that interest rates can, in fact, go negative; those of us who dismissed the possibility by saying that people could simply hold currency were clearly too casual about it. But how low?

Then, after running through other people’s estimates, Krugman wrapped up his post by saying, “And I am pinching myself at the realization that this seemingly whimsical and arcane discussion is turning out to have real policy significance.”

Isn’t that cute? The foundation for the Keynesian case for fiscal stimulus rests on an assumption that interest rates can’t go negative. Then they do go negative, and Krugman is pinching himself that he gets to live in such exciting times. I wonder, is that the reaction Krugman wanted from conservative economists when interest rates failed to spike despite massive deficits — namely, that they would just pinch themselves to see that their wrong statements about interest rates were actually relevant to policy?

I realize some readers may think I’m nitpicking here, because (thus far) it seems that maybe central banks can push interest rates only 50 basis points or so beneath the zero bound. Yet, in practice, that result would still be quite significant, if we are operating in the Keynesian framework. It’s hard to come up with a precise estimate, but using the Taylor Principle in reverse, and then invoking Okun’s Law, a typical Keynesian might agree that the Fed pushing rates down to –0.5 percent, rather than stopping at 0 percent, would have reduced unemployment during the height of the recession by 0.5 percentage points.

That might not sound like a lot, but it corresponds to about 780,000 workers. For some perspective, in February 2013, Krugman estimated that the budget sequester would cost about 700,000 jobs, and classified it as a “fiscal doomsday machine” and “one of the worst policy ideas in our nation’s history.” So if my estimate is in the right ballpark, then on his own terms, Krugman should admit that his blunder — in thinking the Fed couldn’t push nominal interest rates below 0 percent — is one of the worst mistakes by an economist in US history. If he believes his own model and rhetoric, Krugman should be doing a lot more than pinching himself.
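The arithmetic above can be sketched as a quick back-of-the-envelope check. The labor-force figure below is my own assumption (roughly the US level during the height of the recession), not a number taken from the text:

```python
# Back-of-the-envelope check: a 0.5-percentage-point drop in the
# unemployment rate, expressed as a number of workers.
labor_force = 156_000_000        # assumed approx. US civilian labor force, ~2009-2010
drop_in_unemployment_pp = 0.5    # percentage points

workers = labor_force * drop_in_unemployment_pp / 100
print(f"{workers:,.0f} workers")  # 780,000 workers
```

With a labor force in that neighborhood, 0.5 percentage points does indeed come out to roughly 780,000 jobs, which is how the comparison with Krugman's 700,000-job sequester estimate is made.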

Modeling growth: fiscal stimulus and budget austerity

Talk of the so-called “sequester” leads into the next sorry episode in Krugman’s track record: he totally botched his forecasts of US economic growth (and employment) after the turn to (relative) US fiscal restraint. Specifically, in April 2013, Krugman threw down the gauntlet, arguing that we were being treated to a test between the Keynesian emphasis on fiscal policy and the market monetarist emphasis on monetary policy. Guys like Mercatus Center monetary economist Scott Sumner had been arguing that the Fed could offset Congress’s spending cuts, while Krugman — since he was still locked into the “zero lower bound” and “liquidity trap” mentality — said that this was wishful thinking. That’s why Krugman had labeled the sequester a “fiscal doomsday machine,” after all.

As it turned out, the rest of 2013 delivered much better economic news than Krugman had been expecting. Naturally, the market monetarists were running victory laps by the end of the year. Then, in a move that would embarrass anybody else, in January 2014 Krugman had the audacity to wag his finger at Sumner for thinking that the previous year’s economy was somehow a test of Keynesian fiscal stimulus versus market monetarist monetary stimulus. Yes, you read that right: back in April 2013 when the economy was doing poorly, Krugman said 2013 would be a good test of the two viewpoints. Then, when he failed the test he himself had set up, Krugman complained that it obviously wasn’t a fair test, because all sorts of other things can occur to offset the theoretical impacts. (I found the episode so inspiring that I wrote a play about it.)

Things became even more comical by the end of 2014, when it was clear that the US economy — at least according to conventional metrics like the official unemployment rate and GDP growth — was doing much better than Krugman’s doomsday rhetoric would have anticipated. At this point, rather than acknowledging how wrong his warnings about US “austerity” had been, Krugman inconceivably tried to claim victory — by arguing that all of the conservative Republican warnings about Obamacare had been wrong.

This rhetorical move was so shameless that not just anti-Keynesians like Sumner but even progressives had to cry foul. Specifically, Jeffrey Sachs wrote a scathing article showcasing Krugman’s revisionism:

For several years…Paul Krugman has delivered one main message to his loyal readers: deficit-cutting “austerians” (as he calls advocates of fiscal austerity) are deluded. Fiscal retrenchment amid weak private demand would lead to chronically high unemployment. Indeed, deficit cuts would court a reprise of 1937, when Franklin D. Roosevelt prematurely reduced the New Deal stimulus and thereby threw the United States back into recession.

Well, Congress and the White House did indeed play the austerian card from mid-2011 onward. The federal budget deficit has declined from 8.4% of GDP in 2011 to a predicted 2.9% of GDP for all of 2014.…

Krugman has vigorously protested that deficit reduction has prolonged and even intensified what he repeatedly calls a “depression” (or sometimes a “low-grade depression”). Only fools like the United Kingdom’s leaders (who reminded him of the Three Stooges) could believe otherwise.

Yet, rather than a new recession, or an ongoing depression, the US unemployment rate has fallen from 8.6% in November 2011 to 5.8% in November 2014. Real economic growth in 2011 stood at 1.6%, and the IMF expects it to be 2.2% for 2014 as a whole. GDP in the third quarter of 2014 grew at a vigorous 5% annual rate, suggesting that aggregate growth for all of 2015 will be above 3%.

So much for Krugman’s predictions. Not one of his New York Times commentaries in the first half of 2013, when “austerian” deficit cutting was taking effect, forecast a major reduction in unemployment or that economic growth would recover to brisk rates. On the contrary, “the disastrous turn toward austerity has destroyed millions of jobs and ruined many lives,” he argued, with the US Congress exposing Americans to “the imminent threat of severe economic damage from short-term spending cuts.” As a result, “Full recovery still looks a very long way off,” he warned. “And I’m beginning to worry that it may never happen.”

I raise all of this because Krugman took a victory lap in his end-of-2014 column on “The Obama Recovery.” The recovery, according to Krugman, has come not despite the austerity he railed against for years, but because we “seem to have stopped tightening the screws….”

That is an incredible claim. The budget deficit has been brought down sharply, and unemployment has declined. Yet Krugman now says that everything has turned out just as he predicted. (emphasis added)

In the face of such withering and irrefutable criticism, Krugman retreated to the position that his wonderful model had been vindicated by the bulk of the sample, with scatterplots of European countries and their respective fiscal stance and growth rates. He went so far as to say that Sachs “really should know better” than to have expected Krugman’s predictions about austerity to actually hold for any given country (such as the United States).

Besides the audacity of downplaying the confidence with which he had warned of the “fiscal doomsday machine” that would strike the United States, Krugman’s response to Sachs also drips with hypocrisy. Krugman has been merciless in pointing to specific economists (including yours truly) who were wrong in their predictions about consumer price inflation in the United States. When we botched a specific call about the US economy for a specific time period, that was enough in Krugman’s book for us to quit our day jobs and start delivering pizza. There was no question that getting things wrong about one specific country was enough to discredit our model of the economy. The fact that guys like me clung to our policy views after being wrong about our predictions on the United States showed that not only were we bad economists, but we were evil (and possibly racist), too.

Modeling consumer price inflation

I’ve saved the best for last. The casual reader of Krugman’s columns would think that the one area where he surely wiped the deck with his foes was on predictions of consumer price inflation. After all, plenty of anti-Keynesians like me predicted that the consumer price index (among other prices) would rise rapidly, and we were wrong. So Krugman’s model did great on this criterion, right?

Actually, no, it didn’t; his model was totally wrong as well. You see, coming into the Great Recession, Krugman’s framework of “the inflation-adjusted Phillips curve predict[ed] not just deflation, but accelerating deflation in the face of a really prolonged economic slump” (emphasis Krugman’s). And it wasn’t merely the academic model predicting (price) deflation; Krugman himself warned in February 2010 that the United States could experience price deflation in the near future. He ended with, “Japan, here we come” — a reference to that country’s long bout with actual consumer price deflation.

Well, that’s not what happened. About seven months after he warned of continuing price disinflation and the possibility of outright deflation, Krugman’s preferred measures of CPI turned around sharply, more than doubling in a short period, returning almost to pre-recession levels.

Conclusion

Krugman, armed with his Keynesian model, came into the Great Recession thinking that (a) nominal interest rates can’t go below 0 percent, (b) total government spending reductions in the United States amid a weak recovery would lead to a double dip, and (c) persistently high unemployment would go hand in hand with accelerating price deflation. Because of these macroeconomic views, Krugman recommended aggressive federal deficit spending.

As things turned out, Krugman was wrong on each of the above points: we learned (and this surprised me, too) that nominal rates could go persistently negative, that the US budget “austerity” from 2011 onward coincided with a strengthening recovery, and that consumer prices rose modestly even as unemployment remained high. Krugman was wrong on all of these points, and yet his policy recommendations didn’t budge an iota over the years.

Far from changing his policy conclusions in light of his model’s botched predictions, Krugman kept running victory laps, claiming his model had been “right about everything.” He further speculated that the only explanation for his opponents’ unwillingness to concede defeat was that they were evil or stupid.

What a guy. What a scientist.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

What Do the Tesla and the Model-T Have in Common? by George C. Leef

Henry Ford did a lot for the automobile in America. What everyone knows is that he figured out how to improve manufacturing efficiency so much that the auto was transformed from a toy for the rich into an item that ordinary people could afford.

(Nothing really extraordinary in that, by the way. As Ludwig von Mises wrote in The Anti-Capitalist Mentality, “Under capitalism the common man enjoys amenities which in ages gone by were unknown and therefore inaccessible even to the richest people.”)

But very few people know that Ford had to fight against a cartel to be allowed to sell his vehicles. In his 2001 article published in The Freeman, “How Henry Ford Zapped a Licensing Monopoly,” Melvin Barger goes into the fascinating history of Ford’s legal battle against the Association of Licensed Automobile Manufacturers (ALAM).

In 1895, an inventor named George Selden had received a patent for a gasoline-powered automobile. That patent was later acquired by ALAM, which then said to everyone who wanted to sell a gasoline-powered car, “You must pay us royalties for the privilege of selling such vehicles, and if you sell without our license, we’ll take you to court for patent infringement.”

Ford had developed his auto without any knowledge of Selden’s patent and saw no reason why he shouldn’t be free to make and sell cars without paying ALAM for the right to do so.

So Ford thumbed his nose at ALAM and sold his cars without paying royalties. ALAM naturally sued him in an effort to keep its cartel going. The legal battles lasted from 1903 to 1911, when a federal appeals court ruled that the Selden patent only applied to vehicles made to its exact specifications. (That had actually been tried, with dismal results.) Ford therefore did not owe ALAM anything. He was free to continue putting his capital into making cars the public wanted without diverting even a dollar to appeasing a group of rent-seekers.

Turn the clock ahead a century, and we find that an innovative car company faces similar obstacles.

Substitute Elon Musk for Henry Ford, Tesla for the Model-T, and state dealer regulation for an extortionate patent scheme, and the stories are largely the same. ALAM didn’t want competition that might break up its cartel, and the established auto dealer system likewise doesn’t want innovative marketing upsetting its business.

In their January 2015 Mercatus Center paper “State Franchise Law Carjacks Auto Buyers,” Jerry Ellig and Jesse Martinez discuss the way established dealers have used their lobbying clout to stifle competition.

This post first appeared on Forbes.com.

George C. Leef

George Leef is the former book review editor of The Freeman. He is director of research at the John W. Pope Center for Higher Education Policy.

Real Heroes: Ludwig Erhard — The Man Who Made Lemonade from Lemons by LAWRENCE W. REED

How rare and refreshing it is for the powerful to understand the limitations of power, to actually repudiate its use and, in effect, give it back to the myriad individuals who make up society. George Washington was such a person. Cicero was another. So was Ludwig Erhard, who did more than any other man or woman to denazify the German economy after World War II. By doing so, he gave birth to a miraculous economic recovery.

“In my eyes,” Erhard confided in January 1962, “power is always dull, it is dangerous, it is brutal and ultimately even dumb.”

By every measure, Germany was a disaster in 1945 — defeated, devastated, divided, and demoralized — and not only because of the war. The Nazis, of course, were socialist (the name derives from National Socialist German Workers Party), so for more than a decade, the economy had been “planned” from the top. It was tormented with price controls, rationing, bureaucracy, inflation, cronyism, cartels, misdirection of resources, and government command of important industries. Producers made what the planners ordered them to. Service to the state was the highest value.

Thirty years earlier, a teenage Ludwig Erhard heard his father argue for classical-liberal values in discussions with fellow businessmen. A Bavarian clothing and dry goods entrepreneur, the elder Wilhelm actively opposed the kaiser’s increasing cartelization of the German economy. Erhard biographer Alfred C. Mierzejewski writes of Ludwig’s father,

While by no means wealthy, he became a member of the solid middle class that made its living through hard work and satisfying the burgeoning consumer demand of the period, rather than by lobbying for government subsidies or protection as many Junkers did to preserve their farms and many industrialists did to fend off foreign competition.

Young Ludwig resented the burdens that government imposed on honest and independent businessmen like his father. He developed a lifelong passion for free market competition because he understood what F.A. Hayek would express so well in the 1940s: “The more the state plans, the more difficult planning becomes for the individual.”

Severely wounded by an Allied artillery shell in Belgium in 1918, Ludwig came away from the bloody and futile First World War with his liberal values strengthened. After the tumultuous hyperinflation that gripped Germany in the years after the war, he earned a PhD in economics, took charge of the family business, and eventually headed a marketing research institute, which gave him opportunities to write and speak about economic issues.

Hitler’s rise to power in the 1930s deeply disturbed Erhard. He refused to have anything to do with Nazism or the Nazi Party, even quietly supporting resistance to the regime as the years wore on. The Nazis saw to it that he lost his job in 1942, when he wrote a paper outlining his ideas for a free, postwar economy. He spent the next few years as a business consultant.

In 1947, Erhard achieved the chairmanship of an important monetary commission. It proved to be a vital stepping stone to the position of director of economics for the Bizonal Economic Council, a creation of the American and British occupying authorities. It was there that he could finally put his views into policy and transform his country in the process.

Erhard’s beliefs had by this time solidified into unalterable convictions. Currency must be sound and stable. Collectivism was deadly nonsense that choked the creative individual. Central planning was a ruse and a delusion. State enterprises could never be an acceptable substitute for the dynamism of competitive, entrepreneurial markets. Envy and wealth redistribution were evils.

“It is much easier to give everyone a bigger piece from an ever growing cake,” he said, “than to gain more from a struggle over the division of a small cake, because in such a process every advantage for one is a disadvantage for another.”

Erhard advocated a fair field and no favors. His prescription for recovery? The state would set the rules of the game and otherwise leave people alone to wrench the German economy out of its doldrums. The late economist William H. Peterson reveals what happened next:

In 1948, on a June Sunday, without the knowledge or approval of the Allied military occupation authorities (who were of course away from their offices), West German Economics Minister Ludwig Erhard unilaterally and bravely issued a decree wiping out rationing and wage-price controls and introducing a new hard currency, the Deutsche-mark. The decree was effective immediately. Said Erhard to the stunned German people: “Now your only ration coupon is the mark.”

The American, British, and French authorities, who had appointed Erhard to his post, were aghast. Some charged that he had exceeded his defined powers, that he should be removed. But the deed was done. Said U.S. Commanding General Lucius Clay: “Herr Erhard, my advisers tell me you’re making a terrible mistake.” “Don’t listen to them, General,” Erhard replied, “my advisers tell me the same thing.”

General Clay protested that Erhard had “altered” the Allied price-control program, but Erhard insisted he hadn’t altered price controls at all. He had simply “abolished” them. In the weeks and months to follow, he issued a blizzard of deregulatory orders. He slashed tariffs. He raised consumption taxes, but more than offset them with a 15 percent cut in income taxes. By removing disincentives to save, he prompted one of the highest saving rates of any Western industrialized country. West Germany was awash in capital and growth, while communist East Germany languished. Economist David Henderson writes that Erhard’s motto could have been: “Don’t just sit there; undo something.”

The results were stunning. As Robert A. Peterson writes,

Almost immediately, the German economy sprang to life. The unemployed went back to work, food reappeared on store shelves, and the legendary productivity of the German people was unleashed. Within two years, industrial output tripled. By the early 1960s, Germany was the third greatest economic power in the world. And all of this occurred while West Germany was assimilating hundreds of thousands of East German refugees.

It was a pace of growth that dwarfed that of European countries that received far more Marshall Plan aid than Germany ever did.

The term “German economic miracle” was widely used and understood as it happened in the 1950s before the eyes of the world, but Erhard himself never thought of it as such. In his 1958 book, Prosperity through Competition, he opined, “What has taken place in Germany … is anything but a miracle. It is the result of the honest efforts of a whole people who, in keeping with the principles of liberty, were given the opportunity of using personal initiative and human energy.”

The temptations of the welfare state in the 1960s derailed some of Erhard’s reforms. His three years as chancellor (1963–66) were less successful than his tenure as an economics minister. But his legacy was forged in that decade and a half after the war’s end. He forever answered the question, “What do you do with an economy in ruins?” with the simple, proven and definitive recipe: “Free it.”

For additional information, see:

David R. Henderson on the “German Economic Miracle”
Alfred C. Mierzejewski’s Ludwig Erhard: A Biography
Robert A. Peterson on “Origins of the German Economic Miracle”
Richard Ebeling on “The German Economic Miracle and the Social Market Economy”
William H. Peterson on “Will More Dollars Save the World?”

Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

EDITOR’S NOTE: Each week, Mr. Reed will relate the stories of people whose choices and actions make them heroes. See the table of contents for previous installments.

Razing the Bar: The bar exam protects a cartel of lawyers, not their clients by Allen Mendenhall

The bar exam was designed and continues to operate as a mechanism for excluding the lower classes from participation in the legal services market. Elizabeth Olson of the New York Times reports that the bar exam as a professional standard “is facing a new round of scrutiny — not just from the test takers but from law school deans and some state legal establishments.”

This is a welcome development.

Testing what, exactly?

The dean of the University of San Diego School of Law, Stephen C. Ferrulo, complains to the Times that the bar exam “is an unpredictable and unacceptable impediment for accessibility to the legal profession.” Ferrulo is right: the bar exam is a barrier to entry, a form of occupational licensure that restricts access to a particular vocation and reduces market competition.

The bar exam tests the ability to take tests, not the ability to practice law. The best way to learn the legal profession is through tried experience and practical training, which, under our current system, are delayed for years, first by the requirement that would-be lawyers graduate from accredited law schools and second by the bar exam and its accompanying exam for professional fitness.

Freedom of contract

The 19th-century libertarian writer Lysander Spooner, himself a lawyer, opposed occupational licensure as a violation of the freedom of contract, arguing that, once memorialized, all agreements between mutually consenting parties “should not be subjects of legislative caprice or discretion.”

“Men may exercise at discretion their natural rights to enter into all contracts whatsoever that are in their nature obligatory,” he wrote, adding that this principle would prohibit all laws “forbidding men to make contracts by auction without license.”

In more recent decades, Milton Friedman disparaged occupational licensure as “another example of governmentally created and supported monopoly on the state level.” For Friedman, occupational licensure was no small matter. “The overthrow of the medieval guild system,” he said, “was an indispensable early step in the rise of freedom in the Western world. It was a sign of the triumph of liberal ideas.… In more recent decades, there has been a retrogression, an increasing tendency for particular occupations to be restricted to individuals licensed to practice them by the state.”

The bar exam is one of the most notorious examples of this “increasing tendency.”

Protecting lawyers from the poor

The burden of the bar exam falls disproportionately on low-income earners and ethnic minorities who lack the ability to pay for law school or to assume heavy debts to earn a law degree. Passing a bar exam requires expensive bar-exam study courses and exam fees, to say nothing of the costly applications and paperwork that must be completed in order to be eligible to sit for the exam. The average student-loan debt for graduates of many American law schools now exceeds $150,000, while half of all lawyers make less than $62,000 per year, a significant drop since a decade ago.

Recent law-school graduates do not have the privilege of reducing this debt after they receive their diploma; they must first spend three to four months studying for a bar exam and then, having taken the exam, must wait another three to four months for their exam results. More than half a year is lost on spending and waiting rather than earning, or at least earning the salary of a licensed attorney (some graduates work under the direction of lawyers pending the results of their bar exam).

When an individual learns that he or she has passed the bar exam, the congratulations begin with an invitation to pay a licensing fee and, in some states, a fee for a mandatory legal-education course for newly admitted attorneys. These fees must be paid before the individual can begin practicing law.

The exam is working — but for whom?

What’s most disturbing about this system is that it works precisely as it was designed to operate. State bar associations and bar exams are products of big-city politics during the Progressive Era. Such exams existed long before the Progressive Era — Delaware’s bar exam dates back to 1763 — but not until the Progressive Era were they increasingly formalized and institutionalized and backed by the enforcement power of various states.

Threatened by immigrant workers and entrepreneurs who were determined to earn their way out of poverty and obscurity, lawyers with connections to high-level government officials in their states sought to form guilds to prohibit advertising and contingency fees and other creative methods for gaining clients and driving down the costs of legal services. Establishment lawyers felt the entrepreneurial up-and-comers were demeaning the profession and degrading the reputation of lawyers by transforming the practice of law into a business industry that admitted ethnic minorities and others who lacked rank and class. Implementing the bar exam allowed these lawyers to keep allegedly unsavory people and practices out of the legal community and to maintain the high costs of fees and services.

Protecting the consumer

In light of this ugly history, the paternalistic response of Erica Moeser to the New York Times is particularly disheartening. Moeser is the president of the National Conference of Bar Examiners. She says that the bar exam is “a basic test of fundamentals” that is justified by “protecting the consumer.” But isn’t it the consumer above all who is harmed by the high costs of legal services that are a net result of the bar exam and other anticompetitive practices among lawyers? To ask the question is to answer it. It’s also unclear how memorizing often-archaic rules to prepare for standardized, high-stakes multiple-choice tests that are administered under stressful conditions will in any way improve one’s ability to competently practice law.

The legal community and consumers of legal services would be better served by the apprenticeship model that prevailed long before the rise of the bar exam. Under this model, an aspiring attorney was tutored by experienced lawyers until he or she mastered the basics and demonstrated his or her readiness to represent clients. The high cost of law school was not a precondition; young people spent their most energetic years doing real work and gaining practical knowledge. Developing attorneys had to establish a good reputation and keep their costs and fees to a minimum to attract clients, gain trust, and maintain a living.

The rise in technology and social connectivity in our present era also means that reputation markets have improved since the early 20th century, when consumers would have had a more difficult time learning by word-of-mouth and secondhand report that one lawyer or group of lawyers consistently failed their clients — or ripped them off. Today, with services like Amazon, eBay, Uber, and Airbnb, consumers are accustomed to evaluating products and service providers online and for wide audiences. Learning about lawyers’ professional reputations should be quick and easy, a matter of a simple Internet search. With no bar exam, the sheer ubiquity and immediacy of reputation markets could weed the bad lawyers out from the good, thereby transferring the mode of social control from the legal cartel to the consumers themselves.

Criticism of the high costs of legal bills has not gone away in recent years, despite the drop in lawyers’ salaries and the saturation of the legal market with too many attorneys. The quickest and easiest step toward reducing legal costs is to eliminate bar exams. The public would see no marked difference in the quality of legal services if the bar exam were eliminated, because, among other things, the bar exam doesn’t teach or test how to deliver those legal services effectively.

It will take more than just the grumbling of anxious, aspiring attorneys to end bar-exam hazing rituals. That law school deans are realizing the drawbacks of the bar exam is a step in the right direction. But it will require protests from outside the legal community — from the consumers of legal services — to effect any meaningful change.

Allen Mendenhall

Allen Mendenhall is the author of Literature and Liberty: Essays in Libertarian Literary Criticism (Rowman & Littlefield / Lexington Books, 2014). Visit his website at AllenMendenhall.com.

Decentralization: Why Dumb Networks Are Better

The smart choice is innovation at the edge by ANDREAS ANTONOPOULOS…

“Every device employed to bolster individual freedom must have as its chief purpose the impairment of the absoluteness of power.” — Eric Hoffer

In computer and communications networks, decentralization leads to faster innovation, greater openness, and lower cost. Decentralization creates the conditions for competition and diversity in the services the network provides.

But how can you tell if a network is decentralized, and what makes it more likely to be decentralized? Network “intelligence” is the characteristic that differentiates centralized from decentralized networks — but in a way that is surprising and counterintuitive.

Some networks are “smart.” They offer sophisticated services that can be delivered to very simple end-user devices on the “edge” of the network. Other networks are “dumb” — they offer only a very basic service and require that the end-user devices are intelligent. What’s smart about dumb networks is that they push innovation to the edge, giving end-users control over the pace and direction of innovation. Simplicity at the center allows for complexity at the edge, which fosters the vast decentralization of services.

Surprisingly, then, “dumb” networks are the smart choice for innovation and freedom.

The telephone network used to be a smart network supporting dumb devices (telephones). All the intelligence in the telephone network and all the services were contained in the phone company’s switching buildings. The telephone on the consumer’s kitchen table was little more than a speaker and a microphone. Even the most advanced touch-tone telephones were still pretty simple devices, depending entirely on the network services they could “request” through beeping the right tones.

In a smart network like that, there is no room for innovation at the edge. Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers. The services depend entirely on the central switches owned by the phone company. Centralized innovation means slow innovation. It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.

In fact, until 1968, AT&T restricted the devices allowed on the network to a handful of approved devices. In 1968, in a landmark decision, the FCC ruled in favor of the Carterfone, an acoustic coupler device for connecting two-way radios to telephones, opening the door for any consumer device that didn’t “cause harm to the system.”

That ruling paved the way for the answering machine, the fax machine, and the modem. But even with the ability to connect smarter devices to the edge, it wasn’t until the modem that innovation really accelerated. The modem represented a complete inversion of the architecture: all the intelligence was moved to the edge, and the phone network was used only as an underlying “dumb” network to carry the data.

Did the telecommunications companies welcome this development? Of course not! They fought it for nearly a decade, using regulation, lobbying, and legal threats against the new competition. In some countries, modem calls across international lines were automatically disconnected to prevent competition in the lucrative long-distance market. In the end, the Internet won. Now, almost the entire phone network runs as an app on top of the Internet.

The Internet is a dumb network, which is its defining and most valuable feature. The Internet’s protocol (transmission control protocol/Internet protocol, or TCP/IP) doesn’t offer “services.” It doesn’t make decisions about content. It doesn’t distinguish between photos and text, video and audio. It doesn’t have a list of approved applications. It doesn’t even distinguish between client and server, user and host, or individual and corporation. Every IP address is an equal peer.

TCP/IP acts as an efficient pipeline, moving data from one point to another. Over time, it has had some minor adjustments to offer some differentiated “quality of service” capabilities, but other than that, it remains, for the most part, a dumb data pipeline. Almost all the intelligence is on the edge — all the services, all the applications are created on the edge-devices. Creating a new application does not involve changing the network. The Web, voice, video, and social media were all created as applications on the edge without any need to modify the Internet protocol.
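The point about edge innovation can be made concrete with a toy example. The sketch below invents a brand-new “application” (a trivial uppercase-echo service) entirely in edge code over ordinary TCP sockets; the network in between carries bytes and knows nothing about the protocol. All names here are illustrative, not part of any real standard.

```python
# A new application protocol built entirely at the edge: the network
# just moves bytes; the "service" (uppercasing) lives in endpoint code.
import socket
import threading

def serve_once(server_sock):
    """Accept one connection and apply the edge-defined service."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # the service logic is ours alone

def request(host, port, message):
    """An edge client speaking our brand-new protocol."""
    with socket.create_connection((host, port)) as s:
        s.sendall(message)
        return s.recv(1024)

# Bind to any free loopback port; nothing in the network was changed
# or asked for permission to deploy this "service."
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()
reply = request("127.0.0.1", port, b"dumb network, smart edges")
t.join()
server.close()
print(reply.decode())
```

Deploying this “service” required touching exactly two devices, the two endpoints, which is the whole contrast with the caller-ID example below: no switch, router, or carrier had to be upgraded.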

So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge. An application with only two participants needs only two devices to support it, and it can run over the Internet. Contrast that with the telephone network, where a new “service,” like caller ID, had to be built and deployed on every company switch, incurring a maintenance cost for every subscriber. So only the most popular, profitable, and widely used services got deployed.

The financial services industry is built on top of many highly specialized and service-specific networks. Most of these are layered atop the Internet, but they are architected as closed, centralized, and “smart” networks with limited intelligence on the edge.

Take, for example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the international wire transfer network. The consortium behind SWIFT has built a closed network of member banks that offers specific services: secure messages, mostly payment orders. Only banks can be members, and the network services are highly centralized.

The SWIFT network is just one of dozens of single-purpose, tightly controlled, and closed networks offered to financial services companies such as banks, brokerage firms, and exchanges. All these networks mediate the services by interposing the service provider between the “users,” and they allow minimal innovation or differentiation at the edge — that is, they are smart networks serving mostly dumb devices.

Bitcoin is the Internet of money. It offers a basic dumb network that connects peers from anywhere in the world. The bitcoin network itself does not define any financial services or applications. It doesn’t require membership registration or identification. It doesn’t control the types of devices or applications that can live on its edge. Bitcoin offers one service: securely time-stamped scripted transactions. Everything else is built on the edge-devices as an application. Bitcoin allows any application to be developed independently, without permission, on the edge of the network. A developer can create a new application using the transactional service as a platform and deploy it on any device. Even niche applications with few users — applications never envisioned by the bitcoin protocol creator — can be built and deployed.
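To illustrate what “securely time-stamped transactions” buys you, here is a greatly simplified, purely illustrative hash-chained log. Real bitcoin adds proof-of-work, transaction scripts, and peer-to-peer consensus; this toy only shows the core property applications can build on: once an entry is committed, altering it breaks every later link.

```python
# A toy hash-chained transaction log (NOT bitcoin's actual data
# structures): each entry commits to the hash of everything before it,
# so tampering with history is detectable.
import hashlib
import time

def entry_hash(prev_hash, timestamp, payload):
    """Hash an entry together with the hash of the chain so far."""
    return hashlib.sha256(f"{prev_hash}|{timestamp}|{payload}".encode()).hexdigest()

class ToyChain:
    def __init__(self):
        self.entries = []      # list of (prev_hash, timestamp, payload)
        self.tip = "0" * 64    # hash of the latest entry

    def append(self, payload, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.entries.append((self.tip, ts, payload))
        self.tip = entry_hash(self.tip, ts, payload)

    def verify(self):
        """Recompute every link and check it matches the recorded tip."""
        h = "0" * 64
        for prev, ts, payload in self.entries:
            if prev != h:
                return False
            h = entry_hash(prev, ts, payload)
        return h == self.tip

chain = ToyChain()
chain.append("alice pays bob 5", timestamp=1)
chain.append("bob pays carol 2", timestamp=2)
print(chain.verify())   # the untouched chain checks out

# Rewriting an old entry invalidates every subsequent link:
chain.entries[0] = (chain.entries[0][0], 1, "alice pays mallory 500")
print(chain.verify())
```

An edge application never needs the network’s permission to use this service; it simply writes entries and verifies them, which is why arbitrary applications can be layered on top.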

Almost any network architecture can be inverted. You can build a closed network on top of an open network or vice versa, although it is easier to centralize than to decentralize. The modem inverted the phone network, giving us the Internet. The banks have built closed network systems on top of the decentralized Internet. Now bitcoin provides an open network platform for financial services on top of the open and decentralized Internet. The financial services built on top of bitcoin are themselves open because they are not “services” delivered by the network; they are “apps” running on top of the network. This arrangement opens a market for applications, putting the end user in a position of power to choose the right application without restrictions.

What happens when an industry transitions from using one or more “smart” and centralized networks to using a common, decentralized, open, and dumb network? A tsunami of innovation that was pent up for decades is suddenly released. All the applications that could never get permission in the closed network can now be developed and deployed without permission. At first, this change involves reinventing the previously centralized services with new and open decentralized alternatives. We saw that with the Internet, as traditional telecommunications services were reinvented with email, instant messaging, and video calls.

This first wave is also characterized by disintermediation — the removal of entire layers of intermediaries who are no longer necessary. With the Internet, this meant replacing brokers, classified ads publishers, real estate agents, car salespeople, and many others with search engines and online direct markets. In the financial industry, bitcoin will create a similar wave of disintermediation by making clearinghouses, exchanges, and wire transfer services obsolete. The big difference is that some of these disintermediated layers are multibillion-dollar industries that are no longer needed.

Beyond the first wave of innovation, which simply replaces existing services, is another wave that begins to build the applications that were impossible with the previous centralized network. The second wave doesn’t just create applications comparable to existing services; it spawns entire new industries based on applications that were previously too expensive or too difficult to scale. By eliminating friction in payments, bitcoin doesn’t just make payments better; it introduces market mechanisms and price discovery to economic activities that were too small or too inefficient under the previous cost structure.

We used to think “smart” networks would deliver the most value, but making the network “dumb” enabled a massive wave of innovation. Intelligence at the edge brings choice, freedom, and experimentation without permission. In networks, “dumb” is better.

ABOUT ANDREAS ANTONOPOULOS

Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

The Garage That Couldn’t Be Boarded Up
Uber and the jitney … everything old is new again
by SARAH SKWIRE

August Wilson. Jitney. 1979.

Last December, I used Uber for the first time. I downloaded the app onto my phone, entered my name, location, and credit card number, and told them where my daughters and I needed to go. The driver picked us up at my home five minutes later. I was able to access reviews that other riders had written for the same driver, to see a photograph of him and of the car that he would be using to pick me up, and to pay and tip him without juggling cash and credit cards and my two kids. Like nearly everyone else I know, I instantly became a fan of this fantastic new invention.

In January, I read Thomas Sowell’s Knowledge and Decisions for the first time. In chapter 8, Sowell discusses the early 20th-century rise of “owner operated bus or taxi services costing five cents and therefore called ‘jitneys,’ the current slang for nickels.” Sowell takes his fuller description of jitneys from transportation economist George W. Hilton’s “American Transportation Planning.”

The jitneys … essentially provided a competitive market in urban transportation with the usual characteristics of rapid entry and exit, quick adaptation to changes in demand, and, in particular, excellent adaptation to peak load demands. Some 60 percent of the jitneymen were part-time operators, many of whom simply carried passengers for a nickel on trips between home and work.

It sounded strangely familiar.

In February, I read August Wilson’s play, Jitney, written in 1979, about a jitney car service operating in Pittsburgh in the 1970s. As we watch the individual drivers deal with their often tumultuous personal relationships, we also hear about their passengers. The jitney drivers take people to work, to the grocery store, to the pawnshop, to the bus station, and on a host of other unspecified errands. They are an integral part of the community. Like the drivers in Sean Malone’s documentary No Van’s Land, they provide targeted transportation services to a neighborhood underserved by public transportation. We see the drivers in Jitney take pride in the way they fit into and take care of their community.

If we gonna be running jitneys out of here we gonna do it right.… I want all the cars inspected. The people got a right if you hauling them around in your car to expect the brakes to work. Clean out your trunk. Clean out the interior of your car. Keep your car clean. The people want to ride in a clean car. We providing a service to the community. We ain’t just giving rides to people. We providing a service.

That service is threatened when the urban planners and improvers at the Pittsburgh Renewal Council decide to board up the garage out of which the jitney service operates and much of the surrounding neighborhood. The drivers are skeptical that the improvements will ever really happen.

Turnbo: They supposed to build a new hospital down there on Logan Street. They been talking about that for the longest while. They supposed to build another part of the Irene Kaufman Settlement House to replace the part they tore down. They supposed to build some houses down on Dinwidee.

Becker: Turnbo’s right. They supposed to build some houses but you ain’t gonna see that. You ain’t gonna see nothing but the tear-down. That’s all I ever seen.

The drivers resolve, in the end, to call a lawyer and refuse to be boarded up. “We gonna run jitneys out of here till the day before the bulldozer come. Ain’t gonna be no boarding up around here! We gonna fight them on that.” They know that continuing to operate will allow other neighborhood businesses to stay open as well. They know that the choice they are offered is not between an improved neighborhood and an unimproved one, but between an unimproved neighborhood and no neighborhood at all. They know that their jitney service keeps their neighborhood running and that it improves the lives of their friends and neighbors in a way that boarded up buildings and perpetually incomplete urban planning projects never will.

Reading Sowell’s book and Wilson’s play in such close proximity got me thinking. Uber isn’t a fantastic new idea. It’s a fantastic old idea that has returned because the omnipresence of smartphones has made running a jitney service easier and more effective. Drivers for Uber and other ride-sharing services, as we have all read and as No Van’s Land demonstrates so effectively, are subject to protests and interference from competitors, to punitive regulation from local governments, and to a host of other challenges to their enterprise. This pushback is nothing new. Sowell notes, “The jitneys were put down in every American city to protect the street railways and, in particular, to perpetuate the cross-subsidization of the street railways’ city-wide fare structures.”

Despite these common problems, Uber and other 21st-century jitney drivers do not face the major challenge that the drivers in Jitney do. They do not need to operate from a centralized location with a phone. Now that we all have phones in our pockets, the Uber “garage” is everywhere. It can’t be boarded up.

ABOUT SARAH SKWIRE

 Sarah Skwire is a fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.