Posts

AMC’s “Halt and Catch Fire” Is Capitalism’s Finest Hour by Keith Farrell

AMC’s Halt and Catch Fire is a brilliant achievement. The show is a vibrant look at the emerging personal computer industry in the early 1980s. But more than that, the show is about capitalism, creative destruction, and innovation.

While we all know the PC industry changed the world, the visionaries and creators who brought us into the information age faced uncertainty over what their efforts would yield. They risked everything to build new machines and to create shaky start-ups. Often they failed and lost all they had.

HCF has four main characters: Joe, a visionary and salesman; Cameron, an eccentric programming geek; Gordon, a misunderstood engineering genius; and Gordon’s wife, Donna, a brilliant but unappreciated housewife and engineer.

The show pits programmers, hardware engineers, investors, big businesses, corporate lawyers, venture capitalists, and competing start-ups against each other and, at times, shows them having to cooperate to overcome mutual problems. The result is innovation.

Lee Pace gives an award-worthy performance as Joe MacMillan. The son of a never-present IBM tycoon and a negligent, drug-addicted mother, Joe struggles with a host of mental and emotional problems. He’s a man with a brilliant mind and an amazing vision — but he has no computer knowledge or capabilities.

The series begins with his leaving a sales job at IBM in the hope of hijacking Cardiff Electric, a small Texas-based computer company, and launching it into the personal computing game.

As part of his scheme, he gets a low-level job at Cardiff where he recruits Gordon Clark, played by the equally talented Scoot McNairy. Enamored with Gordon’s prior writings on the potential for widespread personal computer use, Joe pleads with Gordon to reverse engineer an IBM-PC with him. The plot delves into the ethical ambiguities of intellectual property law as the two spend days reverse engineering the IBM BIOS.

While the show is fiction, it is inspired in part by the real-life events of Rod Canion, co-founder of Compaq. His book, Open: How Compaq Ended IBM’s PC Domination and Helped Invent Modern Computing, serves as a basis for many of the events in the show’s first season.

In 1981, when Canion and his cohorts set out to make a portable PC, the market was dominated by IBM. Because IBM had rushed their IBM-PC to market, the system was made up entirely of off-the-shelf components and other companies’ software.

As a result, it was possible to buy those same components and software and build what was known as an IBM “clone.” But these clones were only mostly compatible with IBM. While they could run DOS, they might or might not run other programs written for IBM-PCs.

Because IBM dominated the market, all the best software was being written for IBMs. Canion wanted to build a computer that was 100 percent IBM compatible but cheaper — and portable enough to move from desk to desk.

Canion said in an interview on the Internet History Podcast, “We didn’t want to copy their computer! We wanted to have access to the software that was written for their computer by other people.”

But in order to do that, he and his team had to reverse-engineer the IBM BIOS. They couldn’t just steal or copy the code because it was proprietary technology, but they could figure out what function the code executed and then write their own code to handle the same task.

Canion explains:

What our lawyers told us was that not only can you not use [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. … We had two software people. One guy read the code and generated the functional specifications.

So it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.
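
To make that division of labor concrete, here is a minimal, purely hypothetical sketch of the clean-room pattern Canion describes. This is not IBM or Compaq code; the one-byte checksum routine is invented for illustration. One engineer distills the original into a functional specification, and a second engineer, who has never seen the original code, implements from that spec alone.

```python
# Hypothetical illustration of the clean-room split described above.
# Engineer A reads the original (proprietary) code and writes ONLY a
# functional specification; no implementation details cross the wall.

SPEC = """
Routine: checksum8
Input:   a sequence of byte values (0-255).
Output:  a single byte such that the sum of all input bytes plus the
         output byte, taken modulo 256, equals zero.
"""

# Engineer B, who has never seen the original code, implements from the
# spec alone. Any resemblance to the original is functional, not literal.
def checksum8(data: bytes) -> int:
    return (-sum(data)) % 256

if __name__ == "__main__":
    rom_image = bytes([0x55, 0xAA, 0x10, 0x20])
    assert (sum(rom_image) + checksum8(rom_image)) % 256 == 0
    print(f"checksum byte: 0x{checksum8(rom_image):02X}")
```

The point of the exercise is legal as much as technical: because Engineer B never sees the original, the new code can match the original’s behavior without copying its expression.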

In Halt and Catch Fire, Joe uses this idea to push Cardiff into making their own PC by intentionally leaking to IBM that he and Gordon had indeed reverse engineered the BIOS. They recruit a young punk-rock programmer named Cameron Howe to write an original BIOS.

While Gordon, Cameron, and Joe all believe that they are the central piece of the plan, the truth is that they all need each other. They also need to get the bosses and investors at Cardiff on their side in order to succeed, which is hard to do after infuriating them. The show demonstrates that for an enterprise to succeed you need to have cooperation between people of varying skill sets and knowledge bases — and between capital and labor.

The series is an exploration of the chaos and creative destruction that goes into the process of innovation. The beginning of the first episode explains the show’s title:

HALT AND CATCH FIRE (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained.

The show takes this theme of racing for superiority to several levels: the characters, the industry, and finally the economy and the world as a whole.

As Gordon himself declares of the cut-throat environment in which computer innovation occurs, “It’s capitalism at its finest!” HCF depicts Randian heroes: businessmen, entrepreneurs, and creators fighting against all odds in a race to change the world.

Now into its second season, the show is exploring the beginnings of the internet, and Cameron is running her own start-up company, Mutiny. I could go on about the outstanding production quality, but the real novelty here is a show where capitalists, entrepreneurs, and titans of industry are regarded as heroic.

Halt and Catch Fire is a brilliant show, but it isn’t wildly popular. I fear it may soon be canceled, so be sure to check it out while it’s still around.


Keith Farrell

Keith Farrell is a freelance writer and political commentator.

Socialism Is War and War Is Socialism by Steven Horwitz

“[Economic] planning does not accidentally deteriorate into the militarization of the economy; it is the militarization of the economy.… When the story of the Left is seen in this light, the idea of economic planning begins to appear not only accidentally but inherently reactionary. The theory of planning was, from its inception, modeled after feudal and militaristic organizations. Elements of the Left tried to transform it into a radical program, to fit into a progressive revolutionary vision. But it doesn’t fit. Attempts to implement this theory invariably reveal its true nature. The practice of planning is nothing but the militarization of the economy.” — Don Lavoie, National Economic Planning: What Is Left?

Libertarians have long confounded our liberal and conservative friends by being both strongly in favor of free markets and strongly opposed to militarism and foreign intervention. In the conventional world of “right” and “left,” this combination makes no sense. Libertarians are often quick to point out the ways in which free trade, both within and across national borders, creates cooperative interdependencies among those who trade, thereby reducing the likelihood of war. The long classical liberal tradition is full of those who saw the connection between free trade and peace.

But there’s another side to the story, which is that socialism and economic planning have a long and close connection with war and militarization.

As Don Lavoie argues at length in his wonderful and underappreciated 1985 book National Economic Planning: What Is Left?, any attempt to substitute economic planning (whether comprehensive and central or piecemeal and decentralized) for markets inevitably ends up militarizing and regimenting the society. Lavoie points out that this outcome was not an accident. Much of the literature defending economic planning worked from a militaristic model. The “success” of economic planning associated with World War I provided early 20th century planners with a specific historical model from which to operate.

This connection should not surprise those who understand the idea of the market as a spontaneous order. As good economists from Adam Smith to F.A. Hayek and beyond have appreciated, markets are the products of human action but not human design. No one can consciously direct an economy. In fact, Hayek in particular argued that this is true not just of the economy, but of society in general: advanced commercial societies are spontaneous orders along many dimensions.

Market economies have no purpose of their own, or as Hayek put it, they are “ends-independent.” Markets are simply means by which people come together to pursue the various ends that each person or group has. You and I don’t have to agree on which goals are more or less important in order to participate in the market.

The same is true of other spontaneous orders. Consider language. We can both use English to construct sentences even if we wish to communicate different, or contradictory, things with the language.

One implication of seeing the economy as a spontaneous order is that it lacks a “collective purpose.” There is no single scale of values that guides us as a whole, and there is no process by which resources, including human resources, can be marshaled toward those collective purposes.

The absence of such a collective purpose or common scale of values is one factor that explains the connection between war and socialism. They share a desire to remake the spontaneous order of society into an organization with a single scale of values, or a specific purpose. In a war, the overarching goal of defeating the enemy obliterates the ends-independence of the market and requires that hierarchical control be exercised in order to direct resources toward the collective purpose of winning the war.

In socialism, the same holds true. To substitute economic planning for the market is to reorganize the economy to have a single set of ends that guides the planners as they allocate resources. Rather than being connected with each other by a shared set of means, as in private property, contracts, and market exchange, planning connects people by a shared set of ends. Inevitably, this will lead to hierarchy and militarization, because those ends require trying to force people to behave in ways that contribute to the ends’ realization. And as Hayek noted in The Road to Serfdom, it will also lead to government using propaganda to convince the public to share a set of values associated with some ends. We see this tactic in both war and socialism.

As Hayek also pointed out, this is an atavistic desire. It is a way for us to try to recapture the world of our evolutionary past, where we existed in small, homogeneous groups in which hierarchical organization with a common purpose was possible. Deep in our moral instincts is a desire to have the solidarity of a common purpose and to organize resources in a way that enables us to achieve it.

Socialism and war appeal to so many because they tap into an evolved desire to be part of a social order that looks like an extended family: the clan or tribe. Soldiers are not called “bands of brothers” and socialists don’t speak of “a brotherhood of man” by accident. Both groups use the same metaphor because it works. We are susceptible to it because most of our history as human beings was in bands of kin that were largely organized in this way.

Our desire for solidarity is also why calls for central planning on a smaller scale have often tried to claim their cause as the moral equivalent of war. This is true on both the left and right. We have had the War on Poverty, the War on Drugs, and the War on Terror, among others. And we are “fighting,” “combating,” and otherwise at war with our supposedly changing climate — not to mention those thought to be responsible for that change. The war metaphor is the siren song of those who would substitute hierarchy and militarism for decentralized power and peaceful interaction.

Both socialism and war are reactionary, not progressive. They are longings for an evolutionary past long gone, and one in which humans lived lives that were far worse than those we live today. Truly progressive thinking recognizes the limits of humanity’s ability to consciously construct and control the social world. It is humble in seeing how social norms, rules, and institutions that we did not consciously construct enable us to coordinate the actions of billions of anonymous actors in ways that enable them to create incredible complexity, prosperity, and peace.

The right and left do not realize that they are both making the same error. Libertarians understand that the shared processes of spontaneous orders like language and the market can enable all of us to achieve many of our individual desires without any of us dictating those values for others. By contrast, the right and left share a desire to impose their own sets of values on all of us and thereby fashion the world in their own images.

No wonder they don’t understand us.


Steven Horwitz

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debate” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

“Paid Family Leave” Is a Great Way to Hurt Women by Robert P. Murphy

In an article in the New Republic, Lauren Sandler argues that it’s about time the United States join the ranks of all other industrialized nations and provide legally guaranteed paid leave for pregnancy or illness.

Her arguments are similar to ones employed in the minimum wage debate. Opponents say that making particular workers more expensive will lead employers (on aggregate) to hire fewer of them. Supporters reject this tack as fearmongering, going so far as to claim such measures will boost profitability, and that only callous disregard for the disadvantaged can explain the opposition.

If paid leave (or higher pay for unskilled workers) helps workers and employers, then why do progressives need government power to force these great ideas on everyone?

The United States already has unpaid family leave, with the Family and Medical Leave Act (FMLA) signed into law by President Clinton in 1993. This legislation “entitles eligible employees … to take unpaid, job-protected leave for specified family and medical reasons with continuation of group health insurance coverage under the same terms and conditions as if the employee had not taken leave.” Specifically, the FMLA grants covered employees 12 workweeks of such protection in a 12-month period, to deal with a pregnancy, personal sickness, or the care of an immediate family member. (There is a provision for 26 workweeks if the injured family member is in the military.)

But “workers’ rights” advocates want to move beyond the FMLA and win legally guaranteed paid leave for such absences. Currently, California, New Jersey, and Rhode Island have such policies.

The basic libertarian argument against such legislation is simple enough: no worker has a right to any particular job, just as no employer has the right to compel a person to work for him or her. In a genuine market economy based on private property and consensual relations, employers and workers are treated as responsible adults, legally free to work out mutually beneficial arrangements. If it’s important to many women workers that they won’t forfeit their jobs in the event of a pregnancy, then in a free and wealthy society, many firms will provide such clauses in the employment contract in order to attract qualified applicants.

For example, if a 23-year-old woman with a fresh MBA is applying to several firms for a career in the financial sector, but she has a serious boyfriend and thinks they might one day start a family, then — other things equal — she is going to highly value a clause in the employment contract that guarantees she won’t lose her job if she takes off time to have a baby. Since female employment in the traditional workforce is now so prevalent, we can expect many employers to include such provisions in their employment contracts in order to attract qualified applicants. Women don’t have a right to such clauses, just as male hedge-fund VPs don’t have a right to year-end bonuses, but it’s standard for employment contracts to have such features.

Leaving aside philosophical and ethical considerations, let’s consider basic economics and the consequences of pregnancy- and illness-leave legislation. It is undeniable that providing even unpaid, let alone paid, leave is a constraint on employers. Other things equal, an employer does not want an employee to suddenly not show up for work for months at a time, and then expect to come back as if nothing had happened. The employer has to scramble to deal with the absence in the meantime, and furthermore doesn’t want to pour too much training into a temporary employee because the original one is legally guaranteed her (or his) old job. If the employer also has to pay out thousands of dollars to an employee who is not showing up for work, it is obviously an extra burden.

As always with such topics, the easiest way to see the trade-off is to exaggerate the proposed measure. Suppose instead of merely guaranteeing a few months of paid maternity leave, instead the state enforced a rule that said, “Any female employee who becomes pregnant can take off up to 15 years, earning half of her salary, in order to deliver and homeschool the new child.” If that were the rule, then young female employees would be ticking time bombs, and potential employers would come up with all sorts of tricks to deny hiring them or to pay them very low salaries compared to their ostensible on-the-job productivity.

Now, just because guaranteed leave, whether paid or unpaid, is an expensive constraint for employers, that doesn’t mean such policies (in moderation) are necessarily bad business practices, so long as they are adopted voluntarily. To repeat, it is entirely possible that in a genuinely free market economy, many employers would voluntarily provide such policies in order to attract the most productive workers. After all, employers allow their employees to take bathroom breaks, eat lunch, and go on vacation, even though the employees aren’t generating revenue for the firm when doing so.

However, if the state must force employers to enact such policies, then we can be pretty sure they don’t make economic sense for the firms in question. In her article, Sandler addresses this fear by writing, in reference to New Jersey’s paid leave legislation,

After then-Governor Jon Corzine signed the bill, Chris Christie promised to overturn it during his campaign against Corzine. But Christie never followed through. The reason why is quite plain: As with California, most everyone loves paid leave. A recent study from the CEPR found that businesses, many of which strenuously opposed the policy, now believe paid leave has improved productivity and employee retention, decreasing turnover costs. (emphasis added)

Well, that’s fantastic! Rather than engaging in divisive political battles, why doesn’t Sandler simply email that CEPR (Center for Economic and Policy Research) study to every employer in the 47 states that currently lack paid leave legislation? Once they see that they are flushing money down the toilet right now with high turnover costs, they will join the ranks of the truly civilized nations and offer paid leave.

The quotation from Sandler is quite telling. Certain arguments for progressive legislation rely on “externalities,” where the profit-and-loss incentives facing individual consumers or firms do not yield the “socially optimal” behavior. On this issue of family leave, the progressive argument is much weaker. Sandler and other supporters must maintain that they know better than the owners of thousands of firms how to structure their employment contracts in order to boost productivity and employee retention. What are the chances of that?

In reality, given our current level of wealth and the configuration of our labor force, it makes sense for some firms to have generous “family leave” clauses for some employees, but it is not necessarily a sensible approach in all cases. The way a free society deals with such nuanced situations is to allow employers and employees to reach mutually beneficial agreements. If the state mandates an approach that makes employment more generous to women in certain dimensions — since they are the prime beneficiaries of pregnancy leave, even if men can ostensibly use it, too — then we can expect employers to reduce the attractiveness of employment contracts offered to women in other dimensions. There is no such thing as a free lunch. Mandating paid leave will reduce hiring opportunities and base pay, especially for women. If this trade-off is something the vast majority of employees want, then that’s the outcome a free labor market would have provided without a state mandate.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

Labor Unions Create Unemployment: It’s a Feature, Not a Bug by Sarah Skwire

Did the labor unions goof, or did they get exactly what they want?

Los Angeles has approved a minimum wage hike to $15 an hour. Some of the biggest supporters of that increase were the labor unions. But now that the increase has been approved, the unions are fighting to exempt union labor from that wage hike.

Over at Anything Peaceful, Dan Bier has nicely explained why the unions would do something that seems, at first glance, so nonsensical. But what I want to point out is that this kind of hijinks is not a new invention of 21st century organized labor. Instead, it’s pretty much what labor was organized to do. It’s a feature, not a bug.

Part of the early reasoning for the minimum wage — which originated as a “family wage” or “living wage” — was its intent to allow a worker to “keep his wife and children out of competition with himself” and presumably to keep all other women out of the workforce as well.

Similarly, the labor movement, from the very beginning, meant to protect organized white male labor from competition against black labor, immigrant labor, female labor, and nonunion labor. There are subtleties to this generalization, of course, and labor historian Ruth Milkman identifies four historical waves of the labor movement that have differing commitments (and a lack thereof) to a more diverse vision of labor rights. But unions — like so many other institutions — work on the “get up and bar the door” principle. Get up as high as you can, and then bar the door behind you against any further entrants who might cut into the goodies you have grabbed for yourself.

Labor union expert Charles Baird notes,

Unions depend on capture. They try to capture employers by cutting them off from alternative sources of labor; they try to capture workers by eliminating union-free employment alternatives; and they try to capture customers by eliminating union-free producers. Successful capture generates monopoly gains for unions.

Protection is the name of the game.

Unsurprisingly, the unions made sure to be involved when, about 50 years before the 1970s push for an equal rights amendment, there was another push for an ERA in the United States. Written by suffragist leader Alice Paul, the amendment was an attempt to leverage the newly recognized voting power of women into a policy guaranteeing that “men and women shall have equal rights throughout the United States and every place under its jurisdiction.” This amendment would have prevented various gender-based inequities that the courts supported at the time — like hugely different hourly wages for male and female workers, limits on the number of hours women could work, limits on when women could work (night shifts were seen as particularly dangerous for women’s health and welfare), and limits on the kinds of work women could do.

Reporting on the debates over the ERA in 1924, Doris Stevens noted three main objections to the amendment:

First, there was the familiar plea for gradual, rather than sweeping change.

Second, there were concerns over lost pensions for widows and mothers.

And in Stevens’s words,

The final objection says: Grant political, social, and civil equality to women, but do not give equality to women in industry.… Here lies the heart of the whole controversy. It is not astonishing, but very intelligent indeed, that the battle should center on the point of woman’s right to sell her labor on the same terms as man. For unless she is able equally to compete, to earn, to control, and to invest her money, unless in short woman’s economic position is made more secure, certainly she cannot establish equality in fact. She will have won merely the shadow of power without essential and authentic substance.

Suffragist Rheta Childe Dorr (writing in Good Housekeeping, of all places; how the mighty have fallen!) again pointed out the logic behind labor’s opposition to the equal rights amendment:

The labor unions are most opposed to this law, for few unions want women to advance in skilled trades. The Women’s Trade Union League, controlled and to a large extent supported by the men’s unions, opposes it. Of course, the welfare organizations oppose it, for it frees women wage earners from the police power of the old laws. But I pray that public opinion, especially that of the club women, will support it. It’s the first law yet proposed that gives working women a man’s chance industrially. “No men’s labor unions, no leisure class women, no uniformed legislators have a right to govern our lives without our consent,” the women declare, and I think they are dead right about it.

Organized labor — founded to ensure the collective right to contract — refused to stand up for the right of individual women to contract. From their point of view, it was only sensible. And, perhaps most importantly, women in organized labor refused to stand up for the women outside the unions.

Organized male and female labor’s fight against the ERA was at least as much about protectionism as it was about sexism. Maybe more. Women’s rights and union activist Ethel M. Smith attended the debates on the ERA to report on it for the Life and Labor Bulletin, and found that union workers did not even attempt to gloss over their protectionist agenda:

Miss Mary Goff of the International Ladies’ Garment Workers Union, emphasized the seriousness of the effect upon organized establishments were legal restrictions upon hours of labor removed from the unorganized. “The organized women workers,” she said, “need the labor laws to protect them from the competition of the unorganized. Where my union, for instance, may have secured for me a 44-hour week, how long could they maintain it if there were unlimited hours for other workers? Unfortunately, there are hundreds of thousands of unorganized working women in New York who would undoubtedly be working 10 hours a day but for the 9-hour law of New York.”

So labor unions excluded women as long as they could, then let in a privileged few and barred the doors behind them. And they continue to use the same tactics today in LA and elsewhere.

How long can they keep it up?


Sarah Skwire

Sarah Skwire is a senior fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Who Should Choose? Patients and Doctors or the FDA? by Doug Bandow

Good ideas in Congress rarely have a chance. Rep. Fred Upton (R-Mich.) is sponsoring legislation to speed drug approvals, but his initial plan was largely gutted before he introduced it last month.

Congress created the Food and Drug Administration in 1906, long before prescription drugs became such an important medical treatment. The agency became an omnibus regulatory agency, controlling everything from food to cosmetics to vitamins to pharmaceuticals. Birth defects caused by the drug thalidomide led to the 1962 Kefauver-Harris Amendments, which vastly expanded the FDA’s powers. The new controls did little to improve patient safety but dramatically slowed pharmaceutical approvals.

Those who benefit the most from drugs often complain about the cost, since pills aren’t expensive to make. However, drug discovery is an uncertain process. Companies consider between 5,000 and 10,000 substances for every one that ends up in the pharmacy. Of the drugs that do reach the market, only about one-fifth actually make money — and those profitable few must pay for the entire development, testing, and marketing process.

As a result, the average per-drug cost exceeds $1 billion and is most often estimated at between $1.2 billion and $1.5 billion. Some estimates run even higher.
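
As a rough illustration of why the profitable few carry such a heavy load, here is a back-of-envelope sketch in Python. The ranges are the ones quoted above; the midpoints, and the simplifying assumption that the per-drug cost must be recovered entirely by the profitable one-fifth, are illustrative only and not figures from the article.

```python
# Back-of-envelope sketch of the drug-cost arithmetic quoted above.
# Midpoints and the "profitable fifth pays for everything" assumption
# are illustrative simplifications, not data from the article.

candidates_per_approval = (5_000 + 10_000) / 2   # substances screened per marketed drug
profitable_share = 1 / 5                         # share of marketed drugs that make money
cost_per_approved_drug = (1.2e9 + 1.5e9) / 2     # average cost per approved drug (USD)

# Revenue each profitable drug must earn just to cover the development
# cost of itself plus the approved-but-unprofitable drugs around it.
required_recovery = cost_per_approved_drug / profitable_share

print(f"Substances screened per approved drug: {candidates_per_approval:,.0f}")
print(f"Cost a single profitable drug must recover: ${required_recovery / 1e9:.2f} billion")
```

Under those assumptions, each money-making drug has to recover on the order of $6.75 billion in portfolio costs, which is the arithmetic behind the complaint that “pills aren’t expensive to make” misses the point.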

Naturally, the FDA insists that its expensive regulations are worth it. While the agency undoubtedly prevents some bad pharmaceuticals from getting to market, it delays or blocks far more good products.

Unfortunately, the political process encourages the agency to kill with kindness. Let through a drug that causes the slightest problem, and you can expect television special reports, awful newspaper headlines, and congressional hearings. Stop a good drug, and virtually no one notices.

It took the onset of AIDS, then a death sentence, to force the FDA to speed up its glacial approval process. No one has generated equivalent pressure since. Admitted Richard Merrill, the agency’s former chief counsel:  “No FDA official has ever been publicly criticized for refusing to allow the marketing of a drug.”

After the passage of Kefauver-Harris, the average delay in winning approval of a new drug rose from seven months to 30 months by 1967. Approval time now is estimated to run as much as 20 years.

While economist Sam Peltzman figured that the number of new drugs approved dropped by half after Kefauver-Harris, there was no equivalent fall in the introduction of ineffective or unsafe pharmaceuticals. All Congress managed to do was strain out potentially life-saving products.

After all, a company won’t make money selling a medicine that doesn’t work. And putting out something dangerous is a fiscal disaster. Observed Peltzman:  the “penalties imposed by the marketplace on sellers of ineffective drugs prior to 1962 seem to have been enough of a deterrent to have left little room for improvement by a regulatory agency.”

Alas, the FDA increases the cost of all medicines, delays the introduction of most pharmaceuticals, and prevents some from reaching the market. That means patients suffer and even die needlessly.

The bureaucracy’s unduly restrictive approach plays out in other bizarre ways. Once a drug is approved, doctors may prescribe it for any purpose, but companies often refuse to go through the entire process again to win official okay for another use. Thus, it is common for AIDS, cancer, and pediatric patients to receive off-label prescriptions. However, companies cannot advertise these safe, effective, beneficial uses.

Congress has applied a few bandages over the years. One was to create a process of user fees through the Prescription Drug User Fee Act. Four economists, Tomas Philipson, Ernst Berndt, Adrian Gottschalk, and Matthew Strobeck, figured that drugmakers gained between $11 billion and $13 billion and consumers between $5 billion and $19 billion. Total life years saved ranged between 180,000 and 310,000. But lives continue to be lost because the approval process has not been accelerated further.

Criticism and pressure did lead to creation of a special FDA procedure for “Accelerated Approval” of drugs aimed at life-threatening conditions. This change, too, remains inadequate. Nature Biotechnology noted that few medicines qualified and “in recent years, FDA has been ratcheting up the requirements.”

The gravely ill seek “compassionate access” to experimental drugs. Some patients head overseas, where unapproved treatments are available. The Wall Street Journal reported on those suffering from Lou Gehrig’s disease who, “frustrated by the slow pace of clinical drug trials or unable to qualify, are trying to brew their own version of an experimental compound at home and testing it on themselves.”

Overall, far more people die from the lack of drugs than from bad drugs. Most pharmaceutical problems involve doctors misprescribing or patients misusing medicines. The deadliest pre-1962 episode involved Elixir Sulfanilamide and killed 107 people. (Thalidomide caused some 10,000 birth defects, but no deaths.) Around 3,500 users died from isoproterenol, an asthma inhaler. Vioxx was blamed for a similar number of deaths, though the claim was disputed. Most of the more recent incidents would not have been prevented by a stricter approval process.

The death toll from agency delays is much greater. Drug analyst Dale Gieringer explained:  “The benefits of FDA regulation relative to that in foreign countries could reasonably be put at some 5,000 casualties per decade or 10,000 per decade for worst-case scenarios.  In comparison … the cost of FDA delay can be estimated at anywhere from 21,000 to 120,000 lives per decade.”

According to the Competitive Enterprise Institute, among the important medicines delayed were ancrod, beta-blockers, citicoline, ethyol, femara, glucophage, interleukin-2, navelbine, lamictal, omnicath, panorex, photofrin, prostar, rilutek, taxotere, transform, and vasoseal.

Fundamental reform is necessary. The FDA should be limited to assessing safety, with the judgment as to efficacy left to the marketplace. Moreover, the agency should be stripped of its approval monopoly. As a start, drugs approved by other industrialized states should be available in America.

The FDA’s opinion also should be made advisory. Patients and their health care providers could look to private certification organizations, which today are involved in everything from building codes to electrical products to kosher food. Medical organizations already maintain pharmaceutical databases and set standards for treatments with drugs. They could move into drug testing and assessment.

No doubt, some people would make mistakes. But they do so today. With more options more people’s needs would be better met. Often there is no single correct treatment decision. Ultimately the patient’s preference should control.

Congress is arguing over regulatory minutiae when it should be debating the much more basic question: Who should decide who gets treated how? Today the answer is Uncle Sam. Tomorrow the answer should be all of us.

Doug Bandow

Doug Bandow is a senior fellow at the Cato Institute and the author of a number of books on economics and politics. He writes regularly on military non-interventionism.

Capitalism Defused the Population Bomb by Chelsea German

Journalists know that alarmism attracts readers. An article in the British newspaper the Independent titled “Have we reached ‘peak food’? Shortages loom as global production rates slow” claimed that humanity will soon face mass starvation.

Just as Paul Ehrlich’s 1968 bestseller The Population Bomb predicted that millions would die due to food shortages in the 1970s and 1980s, this 2015 article tries to capture readers’ interest through unfounded fear. Let’s take a look at the actual state of global food production.

The alarmists cite statistics showing that while we continue to produce more and more food every year, the rate of acceleration is slowing down slightly. The article then presumes that if the rate of food production growth slows, then widespread starvation is inevitable.

This is misleading. Consider the global trend in net food production per person, measured in 2004-2006 international dollars: even taking population growth into account, food production per person is actually increasing.

Food is becoming cheaper, too. As K.O. Fuglie and S.L. Wang showed in their 2012 article “New Evidence Points to Robust but Uneven Productivity Growth in Global Agriculture,” food prices have been declining for over a century, in spite of a recent uptick.

In fact, people are better nourished today than they ever have been, even in poor countries. Consider how caloric consumption in India increased despite population growth.

Given that food is more plentiful than ever, what perpetuates the mistaken idea that mass hunger is looming? The failure to realize that human innovation, through advancing technology and the free market, will continue to rise to meet the challenges of growing food demand.

In the words of HumanProgress.org Advisory Board member Matt Ridley, “If 6.7 billion people continue to keep specializing and exchanging and innovating, there’s no reason at all why we can’t overcome whatever problems face us.”

This idea first appeared at Cato.org.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings and genuine benefits to policyholders. Such plans would encourage thrift and consumer wisdom in health care planning, while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade associated health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Paul Krugman: Three Wrongs Don’t Make a Right by Robert P. Murphy

One of the running themes throughout Paul Krugman’s public commentary since 2009 is that his Keynesian model — specifically, the old IS-LM framework — has done “spectacularly well” in predicting the major trends in the economy. Krugman actually claimed at one point that he and his allies had been “right about everything.” In contrast, Krugman claims, his opponents have been “wrong about everything.”

As I’ll show, Krugman’s macro predictions have been wrong in three key areas. So, by his own criterion of academic truth, Krugman’s framework has been a failure, and he should consider it a shame that people still seek out his opinion.

Modeling interest rates: the zero lower bound

Krugman’s entire case for fiscal stimulus rests on the premise that central banks can get stuck in a “liquidity trap” when interest rates hit the “zero lower bound” (ZLB). As long as nominal interest rates are positive, Krugman argued, the central bank could always stimulate more spending by loosening monetary policy and cutting rates further. These actions would boost aggregate demand and help restore full employment. In such a situation, there was no case for Keynesian deficit spending as a means to create jobs.

However, Krugman said that this conventional monetary policy lost traction early in the Great Recession once nominal short-term rates hit (basically) 0 percent. At that point, central banks couldn’t stimulate demand through open-market operations, and thus the government had to step in with a large fiscal stimulus in the form of huge budget deficits.

As is par for the course, Krugman didn’t express his views in a tone of civility or with humility. No, Krugman wrote things like this in response to Gary Becker:

Urp. Gack. Glug. If even Nobel laureates misunderstand the issue this badly, what hope is there for the general public? It’s not about the size of the multiplier; it’s about the zero lower bound….

And the reason we’re all turning to fiscal policy is that the standard rule, which is that monetary policy plus automatic stabilizers should do the work of smoothing the business cycle, can’t be applied when we’re hard up against the zero lower bound.

I really don’t know why this is so hard to understand. (emphasis added)

But then, in 2015, things changed: various bonds in Europe began exhibiting negative nominal yields. Here’s how liberal writer Matt Yglesias — no right-wing ideologue — described this development in late February:

Indeed, the interest rate situation in Europe is so strange that until quite recently, it was thought to be entirely impossible. There was a lot of economic theory built around the problem of the Zero Lower Bound — the impossibility of sustained negative interest rates…. Paul Krugman wrote a lot of columns about it. One of them said “the zero lower bound isn’t a theory, it’s a fact, and it’s a fact that we’ve been facing for five years now.”

And yet it seems the impossible has happened. (emphasis added)

Now this is quite astonishing, the macroeconomic analog of physicists accelerating particles beyond the speed of light. If it turns out that the central banks of the world had more “ammunition” in terms of conventional monetary policy, then even on its own terms, the case for Keynesian fiscal stimulus becomes weaker.

So what happened with this revelation? Once he realized he had been wrong to declare so confidently that 0 percent was a lower bound on rates, did Krugman come out and profusely apologize for putting so much of his efforts into pushing fiscal stimulus rather than further rate cuts, since the former were a harder sell politically?

Of course not. This is how Krugman first dealt with the subject in early March when it became apparent that the “ZLB” was a misnomer:

We now know that interest rates can, in fact, go negative; those of us who dismissed the possibility by saying that people could simply hold currency were clearly too casual about it. But how low?

Then, after running through other people’s estimates, Krugman wrapped up his post by saying, “And I am pinching myself at the realization that this seemingly whimsical and arcane discussion is turning out to have real policy significance.”

Isn’t that cute? The foundation for the Keynesian case for fiscal stimulus rests on an assumption that interest rates can’t go negative. Then they do go negative, and Krugman is pinching himself that he gets to live in such exciting times. I wonder, is that the reaction Krugman wanted from conservative economists when interest rates failed to spike despite massive deficits — namely, that they would just pinch themselves to see that their wrong statements about interest rates were actually relevant to policy?

I realize some readers may think I’m nitpicking here, because (thus far) it seems that maybe central banks can push interest rates only 50 basis points or so beneath the zero bound. Yet, in practice, that result would still be quite significant, if we are operating in the Keynesian framework. It’s hard to come up with a precise estimate, but using the Taylor Principle in reverse, and then invoking Okun’s Law, a typical Keynesian might agree that the Fed pushing rates down to –0.5 percent, rather than stopping at 0 percent, would have reduced unemployment during the height of the recession by 0.5 percentage points.

That might not sound like a lot, but it corresponds to about 780,000 workers. For some perspective, in February 2013, Krugman estimated that the budget sequester would cost about 700,000 jobs, and classified it as a “fiscal doomsday machine” and “one of the worst policy ideas in our nation’s history.” So if my estimate is in the right ballpark, then on his own terms, Krugman should admit that his blunder — in thinking the Fed couldn’t push nominal interest rates below 0 percent — is one of the worst mistakes by an economist in US history. If he believes his own model and rhetoric, Krugman should be doing a lot more than pinching himself.
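To make that head count easy to check, here is a minimal back-of-envelope sketch in Python. The labor-force figure is my own assumption (roughly the size of the US civilian labor force in 2014–2015), not a number taken from Krugman or from the text above:

    # Convert a change in the unemployment rate into an approximate head count.
    # Assumption (not from the article): US civilian labor force of about 156 million.
    labor_force = 156_000_000
    rate_change = 0.005  # 0.5 percentage points, written as a fraction

    workers = labor_force * rate_change
    print(f"{workers:,.0f} workers")  # prints: 780,000 workers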

Modeling growth: fiscal stimulus and budget austerity

Talk of the so-called “sequester” leads into the next sorry episode in Krugman’s track record: he totally botched his forecasts of US economic growth (and employment) after the turn to (relative) US fiscal restraint. Specifically, in April 2013, Krugman threw down the gauntlet, arguing that we were being treated to a test between the Keynesian emphasis on fiscal policy and the market monetarist emphasis on monetary policy. Guys like Mercatus Center monetary economist Scott Sumner had been arguing that the Fed could offset Congress’s spending cuts, while Krugman — since he was still locked into the “zero lower bound” and “liquidity trap” mentality — said that this was wishful thinking. That’s why Krugman had labeled the sequester a “fiscal doomsday machine,” after all.

As it turned out, the rest of 2013 delivered much better economic news than Krugman had been expecting. Naturally, the market monetarists were running victory laps by the end of the year. Then, in a move that would embarrass anybody else, in January 2014 Krugman had the audacity to wag his finger at Sumner for thinking that the previous year’s economy was somehow a test of Keynesian fiscal stimulus versus market monetarist monetary stimulus. Yes, you read that right: back in April 2013 when the economy was doing poorly, Krugman said 2013 would be a good test of the two viewpoints. Then, when he failed the test he himself had set up, Krugman complained that it obviously wasn’t a fair test, because all sorts of other things can occur to offset the theoretical impacts. (I found the episode so inspiring that I wrote a play about it.)

Things became even more comical by the end of 2014, when it was clear that the US economy — at least according to conventional metrics like the official unemployment rate and GDP growth — was doing much better than Krugman’s doomsday rhetoric would have anticipated. At this point, rather than acknowledging how wrong his warnings about US “austerity” had been, Krugman inconceivably tried to claim victory — by arguing that all of the conservative Republican warnings about Obamacare had been wrong.

This rhetorical move was so shameless that not just anti-Keynesians like Sumner but even progressives had to cry foul. Specifically, Jeffrey Sachs wrote a scathing article showcasing Krugman’s revisionism:

For several years…Paul Krugman has delivered one main message to his loyal readers: deficit-cutting “austerians” (as he calls advocates of fiscal austerity) are deluded. Fiscal retrenchment amid weak private demand would lead to chronically high unemployment. Indeed, deficit cuts would court a reprise of 1937, when Franklin D. Roosevelt prematurely reduced the New Deal stimulus and thereby threw the United States back into recession.

Well, Congress and the White House did indeed play the austerian card from mid-2011 onward. The federal budget deficit has declined from 8.4% of GDP in 2011 to a predicted 2.9% of GDP for all of 2014.…

Krugman has vigorously protested that deficit reduction has prolonged and even intensified what he repeatedly calls a “depression” (or sometimes a “low-grade depression”). Only fools like the United Kingdom’s leaders (who reminded him of the Three Stooges) could believe otherwise.

Yet, rather than a new recession, or an ongoing depression, the US unemployment rate has fallen from 8.6% in November 2011 to 5.8% in November 2014. Real economic growth in 2011 stood at 1.6%, and the IMF expects it to be 2.2% for 2014 as a whole. GDP in the third quarter of 2014 grew at a vigorous 5% annual rate, suggesting that aggregate growth for all of 2015 will be above 3%.

So much for Krugman’s predictions. Not one of his New York Times commentaries in the first half of 2013, when “austerian” deficit cutting was taking effect, forecast a major reduction in unemployment or that economic growth would recover to brisk rates. On the contrary, “the disastrous turn toward austerity has destroyed millions of jobs and ruined many lives,” he argued, with the US Congress exposing Americans to “the imminent threat of severe economic damage from short-term spending cuts.” As a result, “Full recovery still looks a very long way off,” he warned. “And I’m beginning to worry that it may never happen.”

I raise all of this because Krugman took a victory lap in his end-of-2014 column on “The Obama Recovery.” The recovery, according to Krugman, has come not despite the austerity he railed against for years, but because we “seem to have stopped tightening the screws….”

That is an incredible claim. The budget deficit has been brought down sharply, and unemployment has declined. Yet Krugman now says that everything has turned out just as he predicted. (emphasis added)

In the face of such withering and irrefutable criticism, Krugman retreated to the position that his wonderful model had been vindicated by the bulk of the sample, with scatterplots of European countries and their respective fiscal stance and growth rates. He went so far as to say that Sachs “really should know better” than to have expected Krugman’s predictions about austerity to actually hold for any given country (such as the United States).

Besides the audacity of downplaying the confidence with which he had warned of the “fiscal doomsday machine” that would strike the United States, Krugman’s response to Sachs also drips with hypocrisy. Krugman has been merciless in pointing to specific economists (including yours truly) who were wrong in their predictions about consumer price inflation in the United States. When we botched a specific call about the US economy for a specific time period, that was enough in Krugman’s book for us to quit our day jobs and start delivering pizza. There was no question that getting things wrong about one specific country was enough to discredit our model of the economy. The fact that guys like me clung to our policy views after being wrong about our predictions on the United States showed that not only were we bad economists, but we were evil (and possibly racist), too.

Modeling consumer price inflation

I’ve saved the best for last. The casual reader of Krugman’s columns would think that the one area where he surely wiped the floor with his foes was on predictions of consumer price inflation. After all, plenty of anti-Keynesians like me predicted that the consumer price index (among other prices) would rise rapidly, and we were wrong. So Krugman’s model did great on this criterion, right?

Actually, no, it didn’t; his model was totally wrong as well. You see, coming into the Great Recession, Krugman’s framework of “the inflation-adjusted Phillips curve predict[ed] not just deflation, but accelerating deflation in the face of a really prolonged economic slump” (emphasis Krugman’s). And it wasn’t merely the academic model predicting (price) deflation; Krugman himself warned in February 2010 that the United States could experience price deflation in the near future. He ended with, “Japan, here we come” — a reference to that country’s long bout with actual consumer price deflation.
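To see why that framework implies accelerating deflation rather than merely low inflation, here is a minimal sketch of a textbook accelerationist Phillips curve. The coefficient and the size of the unemployment gap are illustrative assumptions of mine, not parameters drawn from Krugman’s writing:

    # Accelerationist Phillips curve: each period, inflation falls by k times the
    # gap between actual unemployment and its natural rate, so a persistent slump
    # produces not just deflation but ever-deeper deflation.
    k = 0.5                 # assumed sensitivity of inflation to the unemployment gap
    unemployment_gap = 4.0  # assumed percentage points above the natural rate
    inflation = 2.0         # assumed starting inflation rate, in percent

    for year in range(1, 6):
        inflation -= k * unemployment_gap
        print(f"Year {year}: inflation = {inflation:+.1f}%")
    # Year 1: +0.0%, Year 2: -2.0%, Year 3: -4.0% ... the deflation keeps deepening.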

Well, that’s not what happened. About seven months after he warned of continuing price disinflation and the possibility of outright deflation, Krugman’s preferred measures of CPI turned around sharply, more than doubling in a short period, returning almost to pre-recession levels.

Conclusion

Krugman, armed with his Keynesian model, came into the Great Recession thinking that (a) nominal interest rates can’t go below 0 percent, (b) total government spending reductions in the United States amid a weak recovery would lead to a double dip, and (c) persistently high unemployment would go hand in hand with accelerating price deflation. Because of these macroeconomic views, Krugman recommended aggressive federal deficit spending.

As things turned out, Krugman was wrong on each of the above points: we learned (and this surprised me, too) that nominal rates could go persistently negative, that the US budget “austerity” from 2011 onward coincided with a strengthening recovery, and that consumer prices rose modestly even as unemployment remained high. Krugman was wrong on all of these points, and yet his policy recommendations didn’t budge an iota over the years.

Far from changing his policy conclusions in light of his model’s botched predictions, Krugman kept running victory laps, claiming his model had been “right about everything.” He further speculated that the only explanation for his opponents’ unwillingness to concede defeat was that they were evil or stupid.

What a guy. What a scientist.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

Reich Is Wrong on the Minimum Wage by DONALD BOUDREAUX

Watching Robert Reich’s new video in which he endorses raising the minimum wage by $7.75 per hour – to $15 per hour – is painful. It hurts to encounter such rapid-fire economic ignorance, even if the barrage lasts for only two minutes.

Perhaps the most remarkable flaw in this video is Reich’s manner of addressing the bedrock economic objection to the minimum wage – namely, that the minimum wage prices some low-skilled workers out of jobs.

Ignoring supply-and-demand analysis (which depicts the correct common-sense understanding that the higher the minimum wage, the lower is the quantity of unskilled workers that firms can profitably employ), Reich asserts that a higher minimum wage enables workers to spend more money on consumer goods which, in turn, prompts employers to hire more workers.

Reich apparently believes that his ability to describe and draw such a “virtuous circle” of increased spending and hiring is reason enough to dismiss the concerns of “scare-mongers” (his term) who worry that raising the price of unskilled labor makes such labor less attractive to employers.

Ignore (as Reich does) that any additional amounts paid in total to workers mean lower profits for firms or higher prices paid by consumers – and, thus, less spending elsewhere in the economy by people other than the higher-paid workers.

Ignore (as Reich does) the extraordinarily low probability that workers who are paid a higher minimum wage will spend all of their additional earnings on goods and services produced by minimum-wage workers.

Ignore (as Reich does) the impossibility of making people richer simply by having them circulate amongst themselves a larger quantity of money.

(If Reich is correct that raising the minimum wage by $7.75 per hour will do nothing but enrich all low-wage workers to the tune of $7.75 per hour because workers will spend all of their additional earnings in ways that make it profitable for their employers to pay them an additional $7.75 per hour, then it can legitimately be asked: Why not raise the minimum wage to $150 per hour? If higher minimum wages are fully returned to employers in the form of higher spending by workers as Reich theorizes, then there is no obvious limit to the amount by which government can hike the minimum wage before risking an increase in unemployment.)

Focus instead on Reich’s apparent complete ignorance of the important concept of the elasticity of demand for labor. This concept refers to how responsive employers’ hiring is to changes in wage rates. It’s true that if employers’ demand for unskilled workers is “inelastic,” then a higher minimum wage would indeed put more money into the pockets of unskilled workers as a group. The increased pay of workers who keep their jobs more than offsets the lower pay of workers who lose their jobs. Workers as a group could then spend more in total.

But if employers’ demand for unskilled workers is “elastic,” then raising the minimum wage reduces, rather than increases, the amount of money in the pockets of unskilled workers as a group. When the demand for labor is elastic, the higher pay of those workers fortunate enough to keep their jobs is more than offset by the lower pay of workers who lose their jobs. So total spending by minimum-wage workers would likely fall, not rise.

By completely ignoring elasticity, Reich assumes his conclusion. That is, he simply assumes that raising the minimum wage raises the total pay of unskilled workers (and, thereby, raises the total spending of such workers).

Yet whether or not raising the minimum wage has this effect is among the core issues in the debate over the merits of minimum-wage legislation. Even if (contrary to fact) increased spending by unskilled workers were sufficient to bootstrap up the employment of such workers, raising the minimum wage might well reduce the total amount of money paid to unskilled workers and, thus, lower their spending.

So is employers’ demand for unskilled workers more likely to be elastic or inelastic? The answer depends on how much the minimum wage is raised. If it were raised by, say, only five percent, demand might well be inelastic, causing relatively few workers to lose their jobs and, thus, the total take-home pay of unskilled workers as a group to rise.
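A quick numerical sketch makes the elasticity point concrete. The worker count, starting wage, and elasticity values below are illustrative assumptions of mine, not estimates from Reich or from any study, and the simple percentage-change approximation is only reasonable for modest increases:

    # Total hourly pay of a group of unskilled workers after a minimum-wage hike,
    # under different assumed elasticities of labor demand. Total pay rises only
    # if demand is inelastic (elasticity between 0 and -1); it falls if demand is elastic.
    def total_pay_after_hike(wage_increase_pct, elasticity, workers=1000, wage=7.25):
        employment_change_pct = elasticity * wage_increase_pct  # % change in jobs
        new_workers = workers * (1 + employment_change_pct / 100)
        new_wage = wage * (1 + wage_increase_pct / 100)
        return new_workers * new_wage

    baseline = 1000 * 7.25                # $7,250 per hour for the group
    print(total_pay_after_hike(5, -0.3))  # inelastic demand: about $7,498 -- total pay rises
    print(total_pay_after_hike(5, -1.5))  # elastic demand: about $7,042 -- total pay falls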

But Reich calls for an increase in the minimum wage of 107 percent! It’s impossible to believe that more than doubling the minimum wage would not cause a huge negative response by employers.

Such an assumption of inelastic demand in the face of a doubling – if it described reality – would mean that unskilled workers are today so underpaid (relative to their productivity) that their employers are reaping gigantic windfall profits off of such workers.

But the fact that we see increasing automation of low-skilled tasks, as well as continuing high rates of unemployment of teenagers and other unskilled workers, is solid evidence that the typical low-wage worker is not such a bountiful source of profit for his or her employer.

Reich’s video is infected, from start to finish, with too many other errors to count.  I hope that other sensible people will take the time to expose them all.

Donald Boudreaux

Donald Boudreaux is a professor of economics at George Mason University, a former FEE president, and the author of Hypocrites and Half-Wits.

EDITOR’S NOTE: Here’s how Reich cherry-picked his data to claim that the minimum wage is “historically low” right now; here’s why Reich is wrong about wages “decoupling” from productivity; here’s why Reich is wrong about welfare “subsidizing” low-wage employers; here’s why Reich is wrong that Walmart raising wages proves that the minimum wage “works”; Reich is wrong (again) about who makes minimum wage; and here’s a collection of recent news about the damage minimum wage hikes have caused.

This post first appeared at Cato.org, while Cafe Hayek was down for repairs. 

Real Heroes: Ludwig Erhard — The Man Who Made Lemonade from Lemons by LAWRENCE W. REED

How rare and refreshing it is for the powerful to understand the limitations of power, to actually repudiate its use and, in effect, give it back to the myriad individuals who make up society. George Washington was such a person. Cicero was another. So was Ludwig Erhard, who did more than any other man or woman to denazify the German economy after World War II. By doing so, he gave birth to a miraculous economic recovery.

“In my eyes,” Erhard confided in January 1962, “power is always dull, it is dangerous, it is brutal and ultimately even dumb.”

By every measure, Germany was a disaster in 1945 — defeated, devastated, divided, and demoralized — and not only because of the war. The Nazis, of course, were socialist (the name derives from National Socialist German Workers Party), so for more than a decade, the economy had been “planned” from the top. It was tormented with price controls, rationing, bureaucracy, inflation, cronyism, cartels, misdirection of resources, and government command of important industries. Producers made what the planners ordered them to. Service to the state was the highest value.

Thirty years earlier, a teenage Ludwig Erhard heard his father argue for classical-liberal values in discussions with fellow businessmen. A Bavarian clothing and dry goods entrepreneur, the elder Wilhelm actively opposed the kaiser’s increasing cartelization of the German economy. Erhard biographer Alfred C. Mierzejewski writes of Ludwig’s father,

While by no means wealthy, he became a member of the solid middle class that made its living through hard work and satisfying the burgeoning consumer demand of the period, rather than by lobbying for government subsidies or protection as many Junkers did to preserve their farms and many industrialists did to fend off foreign competition.

Young Ludwig resented the burdens that government imposed on honest and independent businessmen like his father. He developed a lifelong passion for free market competition because he understood what F.A. Hayek would express so well in the 1940s: “The more the state plans, the more difficult planning becomes for the individual.”

Severely wounded by an Allied artillery shell in Belgium in 1918, Ludwig emerged from the bloody and futile First World War with his liberal values strengthened. After the tumultuous hyperinflation that gripped Germany in the years after the war, he earned a PhD in economics, took charge of the family business, and eventually headed a marketing research institute, which gave him opportunities to write and speak about economic issues.

Hitler’s rise to power in the 1930s deeply disturbed Erhard. He refused to have anything to do with Nazism or the Nazi Party, even quietly supporting resistance to the regime as the years wore on. The Nazis saw to it that he lost his job in 1942, when he wrote a paper outlining his ideas for a free, postwar economy. He spent the next few years as a business consultant.

In 1947, Erhard became chairman of an important monetary commission. It proved to be a vital stepping stone to the position of director of economics for the Bizonal Economic Council, a creation of the American and British occupying authorities. It was there that he could finally put his views into policy and transform his country in the process.

Erhard’s beliefs had by this time solidified into unalterable convictions. Currency must be sound and stable. Collectivism was deadly nonsense that choked the creative individual. Central planning was a ruse and a delusion. State enterprises could never be an acceptable substitute for the dynamism of competitive, entrepreneurial markets. Envy and wealth redistribution were evils.

“It is much easier to give everyone a bigger piece from an ever growing cake,” he said, “than to gain more from a struggle over the division of a small cake, because in such a process every advantage for one is a disadvantage for another.”

Erhard advocated a fair field and no favors. His prescription for recovery? The state would set the rules of the game and otherwise leave people alone to wrench the German economy out of its doldrums. The late economist William H. Peterson reveals what happened next:

In 1948, on a June Sunday, without the knowledge or approval of the Allied military occupation authorities (who were of course away from their offices), West German Economics Minister Ludwig Erhard unilaterally and bravely issued a decree wiping out rationing and wage-price controls and introducing a new hard currency, the Deutsche-mark. The decree was effective immediately. Said Erhard to the stunned German people: “Now your only ration coupon is the mark.”

The American, British, and French authorities, who had appointed Erhard to his post, were aghast. Some charged that he had exceeded his defined powers, that he should be removed. But the deed was done. Said U.S. Commanding General Lucius Clay: “Herr Erhard, my advisers tell me you’re making a terrible mistake.” “Don’t listen to them, General,” Erhard replied, “my advisers tell me the same thing.”

General Clay protested that Erhard had “altered” the Allied price-control program, but Erhard insisted he hadn’t altered price controls at all. He had simply “abolished” them. In the weeks and months to follow, he issued a blizzard of deregulatory orders. He slashed tariffs. He raised consumption taxes, but more than offset them with a 15 percent cut in income taxes. By removing disincentives to save, he prompted one of the highest saving rates of any Western industrialized country. West Germany was awash in capital and growth, while communist East Germany languished. Economist David Henderson writes that Erhard’s motto could have been: “Don’t just sit there; undo something.”

The results were stunning. As Robert A. Peterson writes,

Almost immediately, the German economy sprang to life. The unemployed went back to work, food reappeared on store shelves, and the legendary productivity of the German people was unleashed. Within two years, industrial output tripled. By the early 1960s, Germany was the third greatest economic power in the world. And all of this occurred while West Germany was assimilating hundreds of thousands of East German refugees.

It was a pace of growth that dwarfed that of European countries that received far more Marshall Plan aid than Germany ever did.

The term “German economic miracle” was widely used and understood as it happened in the 1950s before the eyes of the world, but Erhard himself never thought of it as such. In his 1958 book, Prosperity through Competition, he opined, “What has taken place in Germany … is anything but a miracle. It is the result of the honest efforts of a whole people who, in keeping with the principles of liberty, were given the opportunity of using personal initiative and human energy.”

The temptations of the welfare state in the 1960s derailed some of Erhard’s reforms. His three years as chancellor (1963–66) were less successful than his tenure as an economics minister. But his legacy was forged in that decade and a half after the war’s end. He forever answered the question, “What do you do with an economy in ruins?” with the simple, proven and definitive recipe: “Free it.”

For additional information, see:

David R. Henderson on the “German Economic Miracle”
Alfred C. Mierzejewski’s Ludwig Erhard: A Biography
Robert A. Peterson on “Origins of the German Economic Miracle”
Richard Ebeling on “The German Economic Miracle and the Social Market Economy”
William H. Peterson on “Will More Dollars Save the World?”

Lawrence W. Reed

Lawrence W. (“Larry”) Reed became president of FEE in 2008 after serving as chairman of its board of trustees in the 1990s and both writing and speaking for FEE since the late 1970s.

EDITOR’S NOTE: Each week, Mr. Reed will relate the stories of people whose choices and actions make them heroes. See the table of contents for previous installments.

Razing the Bar: The bar exam protects a cartel of lawyers, not their clients by Allen Mendenhall

The bar exam was designed and continues to operate as a mechanism for excluding the lower classes from participation in the legal services market. Elizabeth Olson of the New York Times reports that the bar exam as a professional standard “is facing a new round of scrutiny — not just from the test takers but from law school deans and some state legal establishments.”

This is a welcome development.

Testing what, exactly?

The dean of the University of San Diego School of Law, Stephen C. Ferruolo, complains to the Times that the bar exam “is an unpredictable and unacceptable impediment for accessibility to the legal profession.” Ferruolo is right: the bar exam is a barrier to entry, a form of occupational licensure that restricts access to a particular vocation and reduces market competition.

The bar exam tests the ability to take tests, not the ability to practice law. The best way to learn the legal profession is through tried experience and practical training, which, under our current system, are delayed for years, first by the requirement that would-be lawyers graduate from accredited law schools and second by the bar exam and its accompanying exam for professional fitness.

Freedom of contract

The 19th-century libertarian writer Lysander Spooner, himself a lawyer, opposed occupational licensure as a violation of the freedom of contract, arguing that, once memorialized, all agreements between mutually consenting parties “should not be subjects of legislative caprice or discretion.”

“Men may exercise at discretion their natural rights to enter into all contracts whatsoever that are in their nature obligatory,” he wrote, adding that this principle would prohibit all laws “forbidding men to make contracts by auction without license.”

In more recent decades, Milton Friedman disparaged occupational licensure as “another example of governmentally created and supported monopoly on the state level.” For Friedman, occupational licensure was no small matter. “The overthrow of the medieval guild system,” he said, “was an indispensable early step in the rise of freedom in the Western world. It was a sign of the triumph of liberal ideas.… In more recent decades, there has been a retrogression, an increasing tendency for particular occupations to be restricted to individuals licensed to practice them by the state.”

The bar exam is one of the most notorious examples of this “increasing tendency.”

Protecting lawyers from the poor

The burden of the bar exam falls disproportionately on low-income earners and ethnic minorities who lack the ability to pay for law school or to assume heavy debts to earn a law degree. Passing a bar exam requires expensive bar-exam study courses and exam fees, to say nothing of the costly applications and paperwork that must be completed in order to be eligible to sit for the exam. The average student-loan debt for graduates of many American law schools now exceeds $150,000, while half of all lawyers make less than $62,000 per year, a significant drop from a decade ago.

Recent law-school graduates do not have the privilege of reducing this debt after they receive their diploma; they must first spend three to four months studying for a bar exam and then, having taken the exam, must wait another three to four months for their exam results. More than half a year is lost on spending and waiting rather than earning, or at least earning the salary of a licensed attorney (some graduates work under the direction of lawyers pending the results of their bar exam).

When an individual learns that he or she has passed the bar exam, the congratulations begin with an invitation to pay a licensing fee and, in some states, a fee for a mandatory legal-education course for newly admitted attorneys. These fees must be paid before the individual can begin practicing law.

The exam is working — but for whom?

What’s most disturbing about this system is that it works precisely as it was designed to operate.  State bar associations and bar exams are products of big-city politics during the Progressive Era. Such exams existed long before the Progressive Era — Delaware’s bar exam dates back to 1763 — but not until the Progressive Era were they increasingly formalized and institutionalized and backed by the enforcement power of various states.

Threatened by immigrant workers and entrepreneurs who were determined to earn their way out of poverty and obscurity, lawyers with connections to high-level government officials in their states sought to form guilds to prohibit advertising and contingency fees and other creative methods for gaining clients and driving down the costs of legal services. Establishment lawyers felt the entrepreneurial up-and-comers were demeaning the profession and degrading the reputation of lawyers by transforming the practice of law into a business industry that admitted ethnic minorities and others who lacked rank and class. Implementing the bar exam allowed these lawyers to keep allegedly unsavory people and practices out of the legal community and to maintain the high costs of fees and services.

Protecting the consumer

In light of this ugly history, the paternalistic response of Erica Moeser to the New York Times is particularly disheartening. Moeser is the president of the National Conference of Bar Examiners. She says that the bar exam is “a basic test of fundamentals” that is justified by “protecting the consumer.” But isn’t it the consumer above all who is harmed by the high costs of legal services that are a net result of the bar exam and other anticompetitive practices among lawyers? To ask the question is to answer it. It’s also unclear how memorizing often-archaic rules to prepare for standardized, high-stakes multiple-choice tests that are administered under stressful conditions will in any way improve one’s ability to competently practice law.

The legal community and consumers of legal services would be better served by the apprenticeship model that prevailed long before the rise of the bar exam. Under this model, an aspiring attorney was tutored by experienced lawyers until he or she mastered the basics and demonstrated his or her readiness to represent clients. The high cost of law school was not a precondition; young people spent their most energetic years doing real work and gaining practical knowledge. Developing attorneys had to establish a good reputation and keep their costs and fees to a minimum to attract clients, gain trust, and maintain a living.

The rise in technology and social connectivity in our present era also means that reputation markets have improved since the early 20th century, when consumers would have had a more difficult time learning by word-of-mouth and secondhand report that one lawyer or group of lawyers consistently failed their clients — or ripped them off. Today, with services like Amazon, eBay, Uber, and Airbnb, consumers are accustomed to evaluating products and service providers online and for wide audiences. Learning about lawyers’ professional reputations should be quick and easy, a matter of a simple Internet search. With no bar exam, the sheer ubiquity and immediacy of reputation markets could separate the good lawyers from the bad, thereby transferring the mode of social control from the legal cartel to the consumers themselves.

Criticism of the high costs of legal bills has not gone away in recent years, despite the drop in lawyers’ salaries and the saturation of the legal market with too many attorneys. The quickest and easiest step toward reducing legal costs is to eliminate bar exams. The public would see no marked difference in the quality of legal services if the bar exam were eliminated, because, among other things, the bar exam doesn’t teach or test how to deliver those legal services effectively.

It will take more than just the grumbling of anxious, aspiring attorneys to end bar-exam hazing rituals. That law school deans are realizing the drawbacks of the bar exam is a step in the right direction. But it will require protests from outside the legal community — from the consumers of legal services — to effect any meaningful change.

Allen Mendenhall

Allen Mendenhall is the author of Literature and Liberty: Essays in Libertarian Literary Criticism (Rowman & Littlefield / Lexington Books, 2014). Visit his website at AllenMendenhall.com.

Decentralization: Why Dumb Networks Are Better

The smart choice is innovation at the edge by ANDREAS ANTONOPOULOS

“Every device employed to bolster individual freedom must have as its chief purpose the impairment of the absoluteness of power.” — Eric Hoffer

In computer and communications networks, decentralization leads to faster innovation, greater openness, and lower cost. Decentralization creates the conditions for competition and diversity in the services the network provides.

But how can you tell if a network is decentralized, and what makes it more likely to be decentralized? Network “intelligence” is the characteristic that differentiates centralized from decentralized networks — but in a way that is surprising and counterintuitive.

Some networks are “smart.” They offer sophisticated services that can be delivered to very simple end-user devices on the “edge” of the network. Other networks are “dumb” — they offer only a very basic service and require that the end-user devices are intelligent. What’s smart about dumb networks is that they push innovation to the edge, giving end-users control over the pace and direction of innovation. Simplicity at the center allows for complexity at the edge, which fosters the vast decentralization of services.

Surprisingly, then, “dumb” networks are the smart choice for innovation and freedom.

The telephone network used to be a smart network supporting dumb devices (telephones). All the intelligence in the telephone network and all the services were contained in the phone company’s switching buildings. The telephone on the consumer’s kitchen table was little more than a speaker and a microphone. Even the most advanced touch-tone telephones were still pretty simple devices, depending entirely on the network services they could “request” through beeping the right tones.

In a smart network like that, there is no room for innovation at the edge. Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers. The services depend entirely on the central switches owned by the phone company. Centralized innovation means slow innovation. It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.

In fact, until 1968, AT&T restricted the devices allowed on the network to a handful of approved devices. In 1968, in a landmark decision, the FCC ruled in favor of the Carterfone, an acoustic coupler device for connecting two-way radios to telephones, opening the door for any consumer device that didn’t “cause harm to the system.”

That ruling paved the way for the answering machine, the fax machine, and the modem. But even with the ability to connect smarter devices to the edge, it wasn’t until the modem that innovation really accelerated. The modem represented a complete inversion of the architecture: all the intelligence was moved to the edge, and the phone network was used only as an underlying “dumb” network to carry the data.

Did the telecommunications companies welcome this development? Of course not! They fought it for nearly a decade, using regulation, lobbying, and legal threats against the new competition. In some countries, modem calls across international lines were automatically disconnected to prevent competition in the lucrative long-distance market. In the end, the Internet won. Now, almost the entire phone network runs as an app on top of the Internet.

The Internet is a dumb network, which is its defining and most valuable feature. The Internet’s protocol (transmission control protocol/Internet protocol, or TCP/IP) doesn’t offer “services.” It doesn’t make decisions about content. It doesn’t distinguish between photos and text, video and audio. It doesn’t have a list of approved applications. It doesn’t even distinguish between client and server, user and host, or individual versus corporation. Every IP address is an equal peer.

TCP/IP acts as an efficient pipeline, moving data from one point to another. Over time, it has had some minor adjustments to offer some differentiated “quality of service” capabilities, but other than that, it remains, for the most part, a dumb data pipeline. Almost all the intelligence is on the edge — all the services, all the applications are created on the edge-devices. Creating a new application does not involve changing the network. The Web, voice, video, and social media were all created as applications on the edge without any need to modify the Internet protocol.
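As a small illustration of what “intelligence at the edge” looks like in practice, here is a minimal sketch using nothing but Python’s standard library. The “service” (a toy text-reversal application) is my own invention for the example; the point is that it lives entirely in the two endpoint programs, while the network in between just moves bytes:

    # A toy edge application over a dumb transport: the network only carries bytes;
    # all application logic lives in the endpoints.
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 9090  # arbitrary local address for the demo

    def edge_server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024)    # the network delivered bytes; that is all it did
                conn.sendall(data[::-1])  # the "service" is defined entirely here

    def edge_client(message):
        with socket.create_connection((HOST, PORT)) as sock:
            sock.sendall(message.encode())
            return sock.recv(1024).decode()

    threading.Thread(target=edge_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening
    print(edge_client("innovation at the edge"))  # -> "egde eht ta noitavonni"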

So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge. Applications with only two participants need just two devices to support them, and they can run on the Internet. Contrast that to the telephone network, where a new “service,” like caller ID, had to be built and deployed on every company switch, incurring maintenance costs for every subscriber. So only the most popular, profitable, and widely used services got deployed.

The financial services industry is built on top of many highly specialized and service-specific networks. Most of these are layered atop the Internet, but they are architected as closed, centralized, and “smart” networks with limited intelligence on the edge.

Take, for example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the international wire transfer network. The consortium behind SWIFT has built a closed network of member banks that offers specific services: secure messages, mostly payment orders. Only banks can be members, and the network services are highly centralized.

The SWIFT network is just one of dozens of single-purpose, tightly controlled, and closed networks offered to financial services companies such as banks, brokerage firms, and exchanges. All these networks mediate the services by interposing the service provider between the “users,” and they allow minimal innovation or differentiation at the edge — that is, they are smart networks serving mostly dumb devices.

Bitcoin is the Internet of money. It offers a basic dumb network that connects peers from anywhere in the world. The bitcoin network itself does not define any financial services or applications. It doesn’t require membership registration or identification. It doesn’t control the types of devices or applications that can live on its edge. Bitcoin offers one service: securely time-stamped scripted transactions. Everything else is built on the edge-devices as an application. Bitcoin allows any application to be developed independently, without permission, on the edge of the network. A developer can create a new application using the transactional service as a platform and deploy it on any device. Even niche applications with few users — applications never envisioned by the bitcoin protocol creator — can be built and deployed.
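To make “built on the edge” concrete, here is a hedged sketch of one such application: proof of existence for a document. A user’s device hashes the document locally and anchors the fingerprint in a bitcoin transaction, whose block timestamp later proves the document existed by that time. The broadcast_op_return helper is hypothetical, a stand-in for whatever wallet or node tooling a developer might choose; it is not a real API, and nothing about the bitcoin network itself needs to change:

    # Sketch of an edge application on top of bitcoin's one service
    # (securely time-stamped scripted transactions): proof of existence for a document.
    import hashlib

    def broadcast_op_return(payload):
        """Hypothetical helper: embed `payload` in a transaction (for example, in an
        OP_RETURN output) and broadcast it, returning the transaction id.
        Wire this to a real wallet or node library of your choosing."""
        raise NotImplementedError

    document = b"...the document's bytes..."    # placeholder contents for the sketch
    digest = hashlib.sha256(document).digest()  # only this 32-byte fingerprint is published
    # txid = broadcast_op_return(digest)
    # Once the transaction is mined, the block's timestamp proves the fingerprint --
    # and therefore the document -- existed by then, with no one's permission required.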

Almost any network architecture can be inverted. You can build a closed network on top of an open network or vice versa, although it is easier to centralize than to decentralize. The modem inverted the phone network, giving us the Internet. The banks have built closed network systems on top of the decentralized Internet. Now bitcoin provides an open network platform for financial services on top of the open and decentralized Internet. The financial services built on top of bitcoin are themselves open because they are not “services” delivered by the network; they are “apps” running on top of the network. This arrangement opens a market for applications, putting the end user in a position of power to choose the right application without restrictions.

What happens when an industry transitions from using one or more “smart” and centralized networks to using a common, decentralized, open, and dumb network? A tsunami of innovation that was pent up for decades is suddenly released. All the applications that could never get permission in the closed network can now be developed and deployed without permission. At first, this change involves reinventing the previously centralized services with new and open decentralized alternatives. We saw that with the Internet, as traditional telecommunications services were reinvented with email, instant messaging, and video calls.

This first wave is also characterized by disintermediation — the removal of entire layers of intermediaries who are no longer necessary. With the Internet, this meant replacing brokers, classified ads publishers, real estate agents, car salespeople, and many others with search engines and online direct markets. In the financial industry, bitcoin will create a similar wave of disintermediation by making clearinghouses, exchanges, and wire transfer services obsolete. The big difference is that some of these disintermediated layers are multibillion dollar industries that are no longer needed.

Beyond the first wave of innovation, which simply replaces existing services, is another wave that begins to build the applications that were impossible with the previous centralized network. The second wave doesn’t just create applications that compare to existing services; it spawns new industries on the basis of applications that were previously too expensive or too difficult to scale. By eliminating friction in payments, bitcoin doesn’t just make better payments; it introduces market mechanisms and price discovery to economic activities that were too small or inefficient under the previous cost structure.

We used to think “smart” networks would deliver the most value, but making the network “dumb” enabled a massive wave of innovation. Intelligence at the edge brings choice, freedom, and experimentation without permission. In networks, “dumb” is better.

ABOUT ANDREAS ANTONOPOULOS

Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

Do You Have the Civil Disobedience App?

You might be downloading tomorrow’s law by MAX BORDERS

If the injustice is part of the necessary friction of the machine of government, let it go, let it go: perchance it will wear smooth — certainly the machine will wear out… but if it is of such a nature that it requires you to be the agent of injustice to another, then I say, break the law. Let your life be a counter-friction to stop the machine. What I have to do is to see, at any rate, that I do not lend myself to the wrong which I condemn. 

 Henry David Thoreau

In the peer-to-peer revolution, the most important elections will happen outside the voting booth. And the most important laws won’t be written by lawmakers.

Consider this: The first time you hopped into a Lyft or an Uber, there was probably, at the very least, a legal gray area associated with that trip. And yet, in your bones, didn’t you think that what you were doing was just, even if it wasn’t yet clearly legal?

If you felt that way, I suspect you weren’t alone.

Today, ridesharing apps are operating in most major cities around the country. And municipalities are having to play catch-up because the people have built massive constituencies around these new services.

This is just one example of what Yale political scientist James C. Scott calls “Irish democracy,” where people simply stop paying attention to some rule (or ruler) because it has outlived its usefulness.

One need not have an actual conspiracy to achieve the practical effects of a conspiracy. More regimes have been brought, piecemeal, to their knees by what was once called “Irish Democracy,” the silent, dogged resistance, withdrawal, and truculence of millions of ordinary people, than by revolutionary vanguards or rioting mobs.

Now, let’s be clear: the right rules are good things. Laws are like our social operating system, and we need them. But we don’t need all of them, much less all of them to stick around forever. And like our operating systems, our laws need updating. Shouldn’t legal updates happen not by waiting around on politicians but in real time?

“But Max,” you might be thinking. “What about the rule of law? You have to change the law through legitimate processes.”

And that’s not unreasonable. After all, we don’t want mob rule, and we don’t want just anyone to be able to change the law willy-nilly — especially those laws that cover our basic rights and freedoms. There is an important distinction, however, between justice and law, one that’s never easy to unpack. But Henry David Thoreau said it well, when he wrote,

Unjust laws exist; shall we be content to obey them, or shall we endeavor to amend them, and obey them until we have succeeded, or shall we transgress them at once? Men generally, under such a government as this, think that they ought to wait until they have persuaded the majority to alter them. They think that, if they should resist, the remedy would be worse than the evil. But it is the fault of the government itself that the remedy is worse than the evil. It makes it worse. Why is it not more apt to anticipate and provide for reform? Why does it not cherish its wise minority? Why does it cry and resist before it is hurt? Why does it not encourage its citizens to be on the alert to point out its faults, and do better than it would have them?

Today’s peer-to-peer civil disobedience is tomorrow’s emergent law.

In other words, the way the best law has always come about is not through a few wise rulers getting together and writing up statutes; rather, it emerges among people interacting with each other and wanting to avoid conflict. When peaceful people are engaging in peaceful activity, they want to keep it that way. And when people find new and creative ways to interact peacefully, old laws can be obstructions.

So as we engage in peer-to-peer civil disobedience, we are making choices that are leading to the emergence of new law, however slowly and clumsily it follows on. This is a beautiful process, because it requires not the permission of rulers, but rather the assent of peer communities. It is rather like democracy on steroids, except we don’t have to send our prayers up through the voting booth in November.

Legal theorist Bruce Benson calls this future law the “Law Merchant.” He describes matters thus:

A Law Merchant evolves whenever commerce emerges. Practices that facilitated emergence of commerce in medieval Europe were replayed in colonial America, and they are being replayed in Eastern Europe, Eastern Asia, Latin America, and cyberspace. Law Merchant arrangements also support “underground” economic activity when states constrain above-ground market development.

It might be a while before we evolve away from our outmoded system of sending politicians to capitals to make statutes. And the issue of lawmakers playing catch-up with emergent systems may be awkward and kludgy for a while. But when we think that the purpose of law is to help people interact peacefully, peer-to-peer civil disobedience might be a necessary ingredient in reweaving the law for the sake of human flourishing.

ABOUT MAX BORDERS

Max Borders is the editor of The Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

The Garage That Couldn’t Be Boarded Up: Uber and the jitney … everything old is new again by SARAH SKWIRE

August Wilson. Jitney. 1979.

Last December, I used Uber for the first time. I downloaded the app onto my phone, entered my name, location, and credit card number, and told them where my daughters and I needed to go. The driver picked us up at my home five minutes later. I was able to access reviews that other riders had written for the same driver, to see a photograph of him and of the car that he would be using to pick me up, and to pay and tip him without juggling cash and credit cards and my two kids. Like nearly everyone else I know, I instantly became a fan of this fantastic new invention.

In January, I read Thomas Sowell’s Knowledge and Decisions for the first time. In chapter 8, Sowell discusses the early 20th-century rise of “owner operated bus or taxi services costing five cents and therefore called ‘jitneys,’ the current slang for nickels.” Sowell takes his fuller description of jitneys from transportation economist George W. Hilton’s “American Transportation Planning.”

The jitneys … essentially provided a competitive market in urban transportation with the usual characteristics of rapid entry and exit, quick adaptation to changes in demand, and, in particular,  excellent adaptation to peak load demands. Some 60 percent of the jitneymen were part-time operators, many of whom simply carried passengers for a nickel on trips between home and work.

It sounded strangely familiar.

In February, I read August Wilson’s play, Jitney, written in 1979, about a jitney car service operating in Pittsburgh in the 1970s. As we watch the individual drivers deal with their often tumultuous personal relationships, we also hear about their passengers. The jitney drivers take people to work, to the grocery store, to the pawnshop, to the bus station, and on a host of other unspecified errands. They are an integral part of the community. Like the drivers in Sean Malone’s documentary No Van’s Land, they provide targeted transportation services to a neighborhood underserved by public transportation. We see the drivers in Jitney take pride in the way they fit into and take care of their community.

If we gonna be running jitneys out of here we gonna do it right.… I want all the cars inspected. The people got a right if you hauling them around in your car to expect the brakes to work. Clean out your trunk. Clean out the interior of your car. Keep your car clean. The people want to ride in a clean car. We providing a service to the community. We ain’t just giving rides to people. We providing a service.

That service is threatened when the urban planners and improvers at the Pittsburgh Renewal Council decide to board up the garage out of which the jitney service operates and much of the surrounding neighborhood. The drivers are skeptical that the improvements will ever really happen.

Turnbo: They supposed to build a new hospital down there on Logan Street. They been talking about that for the longest while. They supposed to build another part of the Irene Kaufman Settlement House to replace the part they tore down. They supposed to build some houses down on Dinwidee.

Becker: Turnbo’s right. They supposed to build some houses but you ain’t gonna see that. You ain’t gonna see nothing but the tear-down. That’s all I ever seen.

The drivers resolve, in the end, to call a lawyer and refuse to be boarded up. “We gonna run jitneys out of here till the day before the bulldozer come. Ain’t gonna be no boarding up around here! We gonna fight them on that.” They know that continuing to operate will allow other neighborhood businesses to stay open as well. They know that the choice they are offered is not between an improved neighborhood and an unimproved one, but between an unimproved neighborhood and no neighborhood at all. They know that their jitney service keeps their neighborhood running and that it improves the lives of their friends and neighbors in a way that boarded up buildings and perpetually incomplete urban planning projects never will.

Reading Sowell’s book and Wilson’s play in such close proximity got me thinking. Uber isn’t a fantastic new idea. It’s a fantastic old idea that has returned because the omnipresence of smartphones has made running a jitney service easier and more effective. Uber and other ride-sharing services, as we have all read and as No Van’s Land demonstrates so effectively, are subject to protests and interference by competitors, to punitive regulation from local governments, and to a host of other challenges to their enterprise. This pushback is nothing new. Sowell notes, “The jitneys were put down in every American city to protect the street railways and, in particular, to perpetuate the cross-subsidization of the street railways’ city-wide fare structures.”

Despite these common problems, Uber and other 21st-century jitney drivers do not face the major challenge that the drivers in Jitney do. They do not need to operate from a centralized location with a phone. Now that we all have phones in our pockets, the Uber “garage” is everywhere. It can’t be boarded up.

ABOUT SARAH SKWIRE

 Sarah Skwire is a fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.