Posts

5 Reasons the FDA’s Ban on Trans Fat Is a Big Deal by Walter Olson

The Obama administration’s Food and Drug Administration today announced a near-ban, in the making since 2013, on the use of partially hydrogenated vegetable fats (“trans fats”) in American food manufacturing.

Specifically, the FDA is knocking trans fats off the Generally Recognized as Safe (GRAS) list. This is a big deal and here are some reasons why:

1. It’s frank paternalism. Like high-calorie foods or alcoholic beverages, trans fats have marked risks when consumed in quantity over long periods, smaller risks in moderate and occasional use, and tiny risks when used in tiny quantities. The FDA intends to forbid the taking of even tiny risks, no matter how well disclosed.

2. The public doesn’t agree. A 2013 Reason-RUPE poll found majorities of all political groups felt consumers should be left free to choose on trans fats. Even in heavily governed places like New York City and California, where the political class bulldozed through restaurant bans some years back, there was plenty of resentment.

3. The public is also perfectly capable of recognizing and acting on nutritional advances on its own. Trans fats have gone out of style and consumption has dropped by 85 percent as consumers have shunned them.

But while many products have been reformulated to omit trans fats, their versatile qualities still give them an edge in such specialty applications as frozen pizza crusts, microwave popcorn, and the sprinkles used atop cupcakes and ice cream. Food companies tried to negotiate to keep some of these uses available, especially in small quantities, but apparently mostly failed.

4. Government doesn’t always know best, nor do its friends in “public health.” The story has often been told of how dietary reformers touted trans fats from the 1950s onward as a safer alternative to animal fats and butter.

Public health activists and various levels of government hectored consumers and restaurants to embrace the new substitutes. We now know this was a bad idea: trans fats appear worse for cardiovascular health than what they replaced. And the ingredients that will replace minor uses of trans fats – tropical palm oil is one – have problems of their own.

5. Even if you never plan to consume a smidgen of trans fat ever again, note well: many public health advocates are itching for the FDA to limit allowable amounts of salt, sugar, caffeine, and so forth in food products. Many see this as their big pilot project and test case.

But when it winds up in court, don’t be surprised if some courtroom spectators show up wearing buttons with the old Sixties slogan: Keep Your Laws Off My Body.


Walter Olson

Walter Olson is a senior fellow at the Cato Institute’s Center for Constitutional Studies.

EDITOR’S NOTE: This post first appeared at Cato.org.

How Ice Cream Won the Cold War by B.K. Marcus

Richard Nixon stood by a lemon-yellow refrigerator in Moscow and bragged to the Soviet leader: “The American system,” he told Nikita Khrushchev over frosted cupcakes and chocolate layer cake, “is designed to take advantage of new inventions.”

It was the opening day of the American National Exhibition at Sokol’niki Park, and Nixon was representing not just the US government but also the latest products from General Mills, Whirlpool, and General Electric. Assisting him in what would come to be known as the “Kitchen Debates” were attractive American spokesmodels who demonstrated for the Russian crowd the best that capitalism in 1959 had to offer.

Capitalist lifestyle

“This was the first time,” writes British food historian Bee Wilson of the summer exhibition, that “many Russians had encountered the American lifestyle firsthand: the first time they … set eyes on big American refrigerators.”

Laughing and sometimes jabbing fingers at one another, the two men debated the merits of capitalism and communism. Which country had the more advanced technologies? Which way of life was better? The conversation … hinged not on weapons or the space race but on washing machines and kitchen gadgets. (Consider the Fork)

Khrushchev was dismissive. Yes, the Americans had brought some fancy machines with them, but did all this consumer technology actually offer any real advantages?

In his memoirs, he later recalled picking up an automatic lemon squeezer. “What a silly thing … Mr. Nixon! … I think it would take a housewife longer to use this gadget than it would for her to … slice a piece of lemon, drop it into a glass of tea, then squeeze a few drops.”

Producing necessities

That same year, Khrushchev announced that the Soviet economy would overtake the United States in the production of milk, meat, and butter. These were products that made sense to him. He couldn’t deliver — although Soviet farmers were forced to slaughter their breeding herds in an attempt to do so — but the goal itself reveals what the communist leader believed a healthy economy was supposed to do: produce staples like meat and dairy, not luxuries like colorful kitchenware and complex gadgetry for the decadent and lazy.

“Don’t you have a machine,” he asked Nixon, “that puts food in the mouth and presses it down? Many things you’ve shown us are interesting but they are not needed in life. They have no useful purpose. They are merely gadgets.”

Khrushchev was displaying the behavior Ludwig von Mises described in The Anti-Capitalistic Mentality. “They castigate the luxury, the stupidity and the moral corruption of the exploiting classes,” Mises wrote of the socialists. “In their eyes everything that is bad and ridiculous is bourgeois, and everything that is good and sublime is proletarian.”

On display that summer in Moscow was American consumer tech at its most bourgeois. The problem with “castigating the luxury,” as Mises pointed out, is that all “innovation is first a luxury of only a few people, until by degrees it comes into the reach of the many.”

Producing luxuries

It is appropriate that the Kitchen Debate over luxury versus necessity took place among high-end American refrigerators. Refrigeration, as a luxury, is ancient. “There were ice harvests in China before the first millennium BC,” writes Wilson. “Snow was sold in Athens beginning in the fifth century BC. Aristocrats of the seventeenth century spooned desserts from ice bowls, drank wine chilled with snow, and even ate iced creams and water ices. Yet it was only in the nineteenth century in the United States that ice became an industrial commodity.” Only with modern capitalism, in other words, does the luxury reach so rapidly beyond a tiny elite.

“Capitalism,” Mises wrote in Economic Freedom and Interventionism, “is essentially mass production for the satisfaction of the wants of the masses.”

The man responsible for bringing ice to the overheated multitude was a Boston businessman named Frederic Tudor. “History now knows him as ‘the Ice King,’” Steven Johnson writes of Tudor in How We Got to Now: Six Innovations That Made the Modern World, “but for most of his early adulthood he was an abject failure, albeit one with remarkable tenacity.”

Like many wealthy families in northern climes, the Tudors stored blocks of frozen lake water in icehouses, two-hundred-pound ice cubes that would remain marvelously unmelted until the hot summer months arrived, and a new ritual began: chipping off slices from the blocks to freshen drinks [and] make ice cream.

In 1800, when Frederic was 17, he accompanied his ill older brother to Cuba. They were hoping the tropical climate would improve his brother’s health, but it “had the opposite effect: arriving in Havana, the Tudor brothers were quickly overwhelmed by the muggy weather.” They reversed course, but the summer heat chased them back to the American South, and Frederic longed for the cooler climes of New England. That experience “suggested a radical — some would say preposterous — idea to young Frederic Tudor: if he could somehow transport ice from the frozen north to the West Indies, there would be an immense market for it.”

“In a country where at some seasons of the year the heat is almost unsupportable,” Tudor wrote in his journal, “ice must be considered as outdoing most other luxuries.”

Tudor’s folly

Imagine what an early 19th-century version of Khrushchev would have said to the future Ice King. People throughout the world go hungry, and you, Mr. Tudor, want to introduce frozen desserts to the tropics? What of beef? What of butter? The capitalists chase profits rather than producing the necessities.

It’s true that Tudor was pursuing profits, but his idea of ice outdoing “most other luxuries” looked to his contemporaries more like chasing folly than fortune.

The Boston Gazette reported on one of his first shiploads of New England ice: “No joke. A vessel with a cargo of 80 tons of Ice has cleared out from this port for Martinique. We hope this will not prove to be a slippery speculation.”

And at first the skeptics seemed right. Tudor “did manage to make some ice cream,” Johnson tells us. And that impressed a few of the locals. “But the trip was ultimately a complete failure.” The novelty of imported ice was just too novel. Why supply ice where there was simply no demand?

You can’t put a price on failure

In the early 20th century, economists Ludwig von Mises and F.A. Hayek, after years of debate with the Marxists, finally began to convince advocates of socialist central planning that market prices were essential to the rational allocation of scarce resources. Some socialist theorists responded with the idea of using capitalist market prices as a starting point for the central planners, who could then simulate the process of bidding for goods, thereby replacing real markets with an imitation that they believed would be just as good. Capitalism would then be obsolete, an unfortunate stage in the development of greater social justice.

By 1959, Khrushchev could claim, however questionably, that Soviet refrigerators were just as good as the American variety — except for a few frivolous features. But there wouldn’t have been any Soviet fridges at all if America hadn’t led the way in artificial refrigeration, starting with Tudor’s folly a century and a half earlier. If the central planners had been around in 1806 when the Boston Gazette poked fun at Tudor’s slippery speculation, what prices would they have used as the starting point for future innovation? All the smart money was in other ventures, and Tudor was on his way to losing his family’s fortune and landing in debtor’s prison.

Only through stubborn persistence did Tudor refine his idea and continue to innovate while demand slowly grew for what he had to offer.

“Still pursued by his creditors,” Johnson writes, Tudor

began making regular shipments to a state-of-the-art icehouse he had built in Havana, where an appetite for ice cream had been slowly maturing. Fifteen years after his original hunch, Tudor’s ice trade had finally turned a profit. By the 1820s, he had icehouses packed with frozen New England water all over the American South. By the 1830s, his ships were sailing to Rio and Bombay. (India would ultimately prove to be his most lucrative market.)

The world the Ice King made

In the winter of 1846–47, Henry David Thoreau watched a crew of Tudor’s ice cutters at work on Walden Pond.

Thoreau wrote, “The sweltering inhabitants of Charleston and New Orleans, of Madras and Bombay and Calcutta, drink at my well.… The pure Walden water is mingled with the sacred water of the Ganges.”

When Tudor died in 1864, Johnson tells us, he “had amassed a fortune worth more than $200 million in today’s dollars.”

The Ice King had also changed the fortunes of all Americans, and reshaped the country in the process. Khrushchev would later care about butter and beef, but before refrigerated train cars — originally cooled by natural ice — it didn’t matter how much meat and dairy an area could produce if it could only be consumed locally without spoiling. And only with the advent of the home icebox could families keep such products fresh. Artificial refrigeration created the modern city by allowing distant farms to feed the growing urban populations.

A hundred years after the Boston Gazette reported what turned out to be Tudor’s failed speculation, the New York Times would run a very different headline: “Ice Up to 40 Cents and a Famine in Sight”:

Not in sixteen years has New York faced such an iceless prospect as this year. In 1890 there was a great deal of trouble and the whole country had to be scoured for ice. Since then, however, the needs for ice have grown vastly, and a famine is a much more serious matter now than it was then.

“In less than a century,” Johnson observes, “ice had gone from a curiosity to a luxury to a necessity.”

The world that luxury made

Before modern markets, Mises tells us, the delay between luxury and necessity could take centuries, but “from its beginnings, capitalism displayed the tendency to shorten this time lag and finally to eliminate it almost entirely. This is not a merely accidental feature of capitalistic production; it is inherent in its very nature.” That’s why everyone today carries a smartphone — and in a couple of years, almost every wrist will bear a smartwatch.

The Cold War is over, and Khrushchev is no longer around to scoff, but the Kitchen Debate continues as the most visible commercial innovations produce “mere gadgets.” Less visible is the steady progress in the necessities, including the innovations we didn’t know were necessary because we weren’t imagining the future they would bring about. Even less evident are all the failures. We talk of profits, but losses drive innovation forward, too.

It’s easy to admire the advances that so clearly improve lives: ever lower infant mortality, ever greater nutrition, fewer dying from deadly diseases. It’s harder to see that the larger system of innovation is built on the quest for comfort, for entertainment, for what often looks like decadence. But the long view reveals that an innovator’s immediate goals don’t matter as much as the system that promotes innovation in the first place.

Even if we give Khrushchev the benefit of the doubt and assume that he really did care about feeding the masses and satisfying the most basic human needs, it’s clear the Soviet premier had no idea how economic development works. Progress is not driven by producing ever more butter; it is driven by ice cream.


B.K. Marcus

B.K. Marcus is managing editor of the Freeman.

“Paid Family Leave” Is a Great Way to Hurt Women by Robert P. Murphy

In an article in the New Republic, Lauren Sandler argues that it’s about time the United States join the ranks of all other industrialized nations and provide legally guaranteed paid leave for pregnancy or illness.

Her arguments are similar to ones employed in the minimum wage debate. Opponents say that making particular workers more expensive will lead employers (on aggregate) to hire fewer of them. Supporters reject this tack as fearmongering, going so far as to claim such measures will boost profitability, and that only callous disregard for the disadvantaged can explain the opposition.

If paid leave (or higher pay for unskilled workers) helps workers and employers, then why do progressives need government power to force these great ideas on everyone?

The United States already has unpaid family leave, with the Family and Medical Leave Act (FMLA) signed into law by President Clinton in 1993. This legislation “entitles eligible employees … to take unpaid, job-protected leave for specified family and medical reasons with continuation of group health insurance coverage under the same terms and conditions as if the employee had not taken leave.” Specifically, the FMLA grants covered employees 12 workweeks of such protection in a 12-month period, to deal with a pregnancy, personal sickness, or the care of an immediate family member. (There is a provision for 26 workweeks if the injured family member is in the military.)

But “workers’ rights” advocates want to move beyond the FMLA by winning legally guaranteed paid leave for such absences. Currently, California, New Jersey, and Rhode Island have such policies.

The basic libertarian argument against such legislation is simple enough: no worker has a right to any particular job, just as no employer has the right to compel a person to work for him or her. In a genuine market economy based on private property and consensual relations, employers and workers are legally treated as responsible adults who can work out mutually beneficial arrangements. If it’s important to many women workers that they won’t forfeit their jobs in the event of a pregnancy, then in a free and wealthy society, many firms will provide such clauses in the employment contract in order to attract qualified applicants.

For example, if a 23-year-old woman with a fresh MBA is applying to several firms for a career in the financial sector, but she has a serious boyfriend and thinks they might one day start a family, then — other things equal — she is going to highly value a clause in the employment contract that guarantees she won’t lose her job if she takes off time to have a baby. Since female employment in the traditional workforce is now so prevalent, we can expect many employers to have such provisions in their employment contracts in order to attract qualified applicants. Women don’t have a right to such clauses, just as male hedge-fund VPs don’t have a right to year-end bonuses, but it’s standard for employment contracts to have such features.

Leaving aside philosophical and ethical considerations, let’s consider basic economics and the consequences of pregnancy- and illness-leave legislation. It is undeniable that providing even unpaid, let alone paid, leave is a constraint on employers. Other things equal, an employer does not want an employee to suddenly not show up for work for months at a time, and then expect to come back as if nothing had happened. The employer has to scramble to deal with the absence in the meantime, and furthermore doesn’t want to pour too much training into a temporary employee because the original one is legally guaranteed her (or his) old job. If the employer also has to pay out thousands of dollars to an employee who is not showing up for work, it is obviously an extra burden.

As always with such topics, the easiest way to see the trade-off is to exaggerate the proposed measure. Suppose instead of merely guaranteeing a few months of paid maternity leave, instead the state enforced a rule that said, “Any female employee who becomes pregnant can take off up to 15 years, earning half of her salary, in order to deliver and homeschool the new child.” If that were the rule, then young female employees would be ticking time bombs, and potential employers would come up with all sorts of tricks to deny hiring them or to pay them very low salaries compared to their ostensible on-the-job productivity.

Now, just because guaranteed leave, whether paid or unpaid, is an expensive constraint for employers, that doesn’t mean such policies (in moderation) are necessarily bad business practices, so long as they are adopted voluntarily. To repeat, it is entirely possible that in a genuinely free market economy, many employers would voluntarily provide such policies in order to attract the most productive workers. After all, employers allow their employees to take bathroom breaks, eat lunch, and go on vacation, even though the employees aren’t generating revenue for the firm when doing so.

However, if the state must force employers to enact such policies, then we can be pretty sure they don’t make economic sense for the firms in question. In her article, Sandler addresses this fear by writing, in reference to New Jersey’s paid leave legislation,

After then-Governor Jon Corzine signed the bill, Chris Christie promised to overturn it during his campaign against Corzine. But Christie never followed through. The reason why is quite plain: As with California, most everyone loves paid leave. A recent study from the CEPR found that businesses, many of which strenuously opposed the policy, now believe paid leave has improved productivity and employee retention, decreasing turnover costs. (emphasis added)

Well, that’s fantastic! Rather than engaging in divisive political battles, why doesn’t Sandler simply email that CEPR (Center for Economic and Policy Research) study to every employer in the 47 states that currently lack paid leave legislation? Once they see that they are flushing money down the toilet right now with high turnover costs, they will join the ranks of the truly civilized nations and offer paid leave.

The quotation from Sandler is quite telling. Certain arguments for progressive legislation rely on “externalities,” where the profit-and-loss incentives facing individual consumers or firms do not yield the “socially optimal” behavior. On this issue of family leave, the progressive argument is much weaker. Sandler and other supporters must maintain that they know better than the owners of thousands of firms how to structure their employment contracts in order to boost productivity and employee retention. What are the chances of that?

In reality, given our current level of wealth and the configuration of our labor force, it makes sense for some firms to have generous “family leave” clauses for some employees, but it is not necessarily a sensible approach in all cases. The way a free society deals with such nuanced situations is to allow employers and employees to reach mutually beneficial agreements. If the state mandates an approach that makes employment more generous to women in certain dimensions — since they are the prime beneficiaries of pregnancy leave, even if men can ostensibly use it, too — then we can expect employers to reduce the attractiveness of employment contracts offered to women in other dimensions. There is no such thing as a free lunch. Mandating paid leave will reduce hiring opportunities and base pay, especially for women. If this trade-off is something the vast majority of employees want, then that’s the outcome a free labor market would have provided without a state mandate.


Robert P. Murphy

Robert P. Murphy is senior economist with the Institute for Energy Research. He is author of Choice: Cooperation, Enterprise, and Human Action (Independent Institute, 2015).

How Government Turned Baltimore into Pottersville by James Bovard

Baltimore’s recent riots are not surprising in a city that has long been plagued by both police brutality and one of the nation’s highest murder rates. Though numerous government policies and the rampaging looters deserve blame for the carnage, federal housing subsidies have long destabilized Baltimore neighborhoods and helped create a culture of violence with impunity.

Yet just last week, Baltimore officials were in Washington asking for more. Given the history, it defies understanding.

The U.S. Department of Housing and Urban Development was created in 1965, and Baltimore received massive subsidies to build housing projects in the following years. Baltimore’s projects, like those in many other cities, became cornucopias of crime.

One sprawling 202-unit Baltimore subsidized housing project (recently slated for razing) is known as “Murder Mall.” A 1979 HUD report noted that the robbery rate in one Baltimore public housing project was almost 20 times higher than the national average. The area in and around public housing often becomes “the territory of those who do not have to be afraid — the criminals,” the report said. Baltimore Mayor Kurt Schmoke in 1993 blamed maintenance problems at one public housing project on drug dealers who refused to let city workers enter the buildings.

In the 1990s, the Baltimore Housing Authority began collecting lavish HUD subsidies to demolish public housing projects. But critics complained that HUD was merely replacing “vertical ghettos with horizontal ones.” Baltimore was among the first cities targeted for using Section 8 vouchers to disperse public housing residents.

HUD and the city housing agency presumed that simply moving people out of the projects was all that was necessary to end the criminal behavior of the residents. Baltimore was one of five cities chosen for a HUD demonstration project — Moving to Opportunity (MTO) — to show how Section 8 could solve the problems of the underclass.

But the relocations had “tripled the rate of arrests for property crimes” among boys who moved to new locales via Section 8. A study published last year in the Journal of the American Medical Association reported that boys in Section 8 households who moved to new neighborhoods were three times more likely to suffer post-traumatic stress disorder and behavioral problems than boys in the control group.

A 2009 research project on Section 8 published in Homicide Studies noted that in the one city studied, “Crime, specifically homicide, became displaced to where the low-income residents were relocated. Homicide was simply moved to a new location, not eliminated.”

Ed Rutkowski, head of a community development corporation in one marginal Baltimore neighborhood, labeled Section 8 “a catalyst in neighborhood deterioration and ghetto expansion” in 2003.

Regardless of its collateral damage, Section 8 defines Valhalla for many Baltimoreans. Receiving a Section 8 voucher can enable some recipients to live rent-free in perpetuity. Because recipients must pay up to a third of their income for rent under the program, collecting Section 8 sharply decreases work effort, according to numerous economic studies.

Last October, when the local housing agency briefly allowed people to register for the program, it was deluged with 73,509 applications. Most of the applications were from families — which means that a third of Baltimore’s 241,455 households sought housing welfare. (Almost 10% of Baltimoreans are already on the housing dole.) Section 8 is not an entitlement, so the city will select fewer than 10,000 “winners” from the list.

HUD’s Federal Housing Administration also has a long history of destabilizing neighborhoods in Baltimore and other big cities. A HUD-subsidized mortgage program for low-income borrowers launched in 1968 spurred so many defaults and so much devastation that Carl Levin, then Detroit City Council president and later a long-term U.S. senator, derided the program in 1976 as “Hurricane HUD.”

In the late 1990s, more than 20% of FHA mortgages in some Baltimore neighborhoods were in default — leading one activist to label Baltimore “the foreclosure capital of the world.” HUD Inspector General Susan Gaffney warned in 2000: “Vacant, boarded-up HUD-owned homes have a negative effect on neighborhoods, and the negative effect magnifies the longer the properties remain in HUD’s inventory.”

The feds continued massive negligent mortgage lending in Baltimore after that crisis, creating fresh havoc in recent years. In late 2013, more than 40% of homes in the low-income Carrollton Ridge neighborhood were underwater. Reckless subsidized lending in Baltimore and other low-income areas helped saddle Maryland with the highest foreclosure rate in the nation by the end of last year. One in every 435 housing units in Baltimore was in foreclosure last October, according to RealtyTrac.

President Obama said the Baltimore riots showed the need for new “massive investments in urban communities.” What Baltimore needs is an investment in new thinking. The highest property taxes in the state and oppressive local regulation often make investing in jobs and businesses in Baltimore unprofitable. Only fixing that will produce a stable community. Shoveling more federal money into the city is the triumph of hope over experience.

James Bovard

James Bovard is the author of Public Policy Hooligan. His work has appeared in USA Today, where this article was first published.

Who Should Choose? Patients and Doctors or the FDA? by Doug Bandow

Good ideas in Congress rarely have a chance. Rep. Fred Upton (R-Mich.) is sponsoring legislation to speed drug approvals, but his initial plan was largely gutted before he introduced it last month.

Congress created the Food and Drug Administration in 1906, long before prescription drugs became such an important medical treatment. The agency became an omnibus regulatory agency, controlling everything from food to cosmetics to vitamins to pharmaceuticals. Birth defects caused by the drug Thalidomide led to the 1962 Kefauver-Harris Amendments which vastly expanded the FDA’s powers. The new controls did little to improve patient safety but dramatically slowed pharmaceutical approvals.

Those who benefit the most from drugs often complain about the cost, since pills aren’t expensive to make. However, drug discovery is an uncertain process. Companies consider between 5,000 and 10,000 substances for every one that ends up in the pharmacy. Of those that do, only one in five actually makes money, and that one must pay for the entire development, testing, and marketing process.

As a result, the average cost per drug exceeds $1 billion, with most estimates falling between $1.2 and $1.5 billion. Some estimates run higher.
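The funnel arithmetic behind those figures can be sketched in a few lines. The midpoint numbers below are illustrative assumptions drawn from the ranges quoted above, not precise industry data:

```python
# Rough sketch of the drug-discovery funnel described above.
# Midpoint figures are illustrative assumptions, not industry data.

candidates_per_approval = 7_500   # midpoint of the 5,000-10,000 range
avg_cost_per_approved = 1.35e9    # midpoint of the $1.2-1.5 billion range
profitable_share = 1 / 5          # only one in five approved drugs makes money

# Odds that any single screened substance becomes the one profitable drug
profitable_odds = profitable_share / candidates_per_approval

# A profitable drug must recoup its own cost plus the costs of the
# money-losing approved drugs it effectively subsidizes.
revenue_needed = avg_cost_per_approved / profitable_share

print(f"1 in {1 / profitable_odds:,.0f} screened substances turns a profit")
print(f"Each winner must recover roughly ${revenue_needed / 1e9:.2f} billion")
```

On these assumed midpoints, a single profitable drug must gross several times the average per-drug development cost just to keep the portfolio at break-even, which is the economic point the paragraph is making.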

Naturally, the FDA insists that its expensive regulations are worth it. While the agency undoubtedly prevents some bad pharmaceuticals from getting to market, it delays or blocks far more good products.

Unfortunately, the political process encourages the agency to kill with kindness. Let a drug through which causes the slightest problem, and you can expect television special reports, awful newspaper headlines, and congressional hearings. Stop a good drug and virtually no one notices.

It took the onset of AIDS, then a death sentence, to force the FDA to speed up its glacial approval process. No one has generated equivalent pressure since. Admitted Richard Merrill, the agency’s former chief counsel:  “No FDA official has ever been publicly criticized for refusing to allow the marketing of a drug.”

After the passage of Kefauver-Harris, the average delay in winning approval of a new drug rose from seven months to 30 by 1967. Total approval time now is estimated to run as long as 20 years.

While economist Sam Peltzman figured that the number of new drugs approved dropped by half after Kefauver-Harris, there was no equivalent fall in the introduction of ineffective or unsafe pharmaceuticals. All Congress managed to do was strain out potentially life-saving products.

After all, a company won’t make money selling a medicine that doesn’t work. And putting out something dangerous is a fiscal disaster. Observed Peltzman:  the “penalties imposed by the marketplace on sellers of ineffective drugs prior to 1962 seem to have been enough of a deterrent to have left little room for improvement by a regulatory agency.”

Alas, the FDA increases the cost of all medicines, delays the introduction of most pharmaceuticals, and prevents some from reaching the market. That means patients suffer and even die needlessly.

The bureaucracy’s unduly restrictive approach plays out in other bizarre ways. Once a drug is approved doctors may prescribe it for any purpose, but companies often refuse to go through the entire process again to win official okay for another use. Thus, it is common for AIDS, cancer, and pediatric patients to receive off-label prescriptions. However, companies cannot advertise these safe, effective, beneficial uses.

Congress has applied a few bandages over the years. One was to create a process of user fees through the Prescription Drug User Fee Act. Four economists, Tomas Philipson, Ernst Berndt, Adrian Gottschalk, and Matthew Strobeck, figured that drugmakers gained between $11 billion and $13 billion and consumers between $5 billion and $19 billion. Total life years saved ranged between 180,000 and 310,000. But lives continue to be lost because the approval process has not been accelerated further.

Criticism and pressure did lead to creation of a special FDA procedure for “Accelerated Approval” of drugs aimed at life-threatening conditions. This change, too, remains inadequate. Nature Biotechnology noted that few medicines qualified and “in recent years, FDA has been ratcheting up the requirements.”

The gravely ill seek “compassionate access” to experimental drugs. Some patients head overseas, where unapproved treatments are available. The Wall Street Journal reported on those suffering from Lou Gehrig’s disease who, “frustrated by the slow pace of clinical drug trials or unable to qualify, are trying to brew their own version of an experimental compound at home and testing it on themselves.”

Overall, far more people die from no drugs than from bad drugs. Most pharmaceutical problems involve doctors misprescribing or patients misusing medicines. The deadliest pre-1962 episode involved Elixir Sulfanilamide and killed 107 people. (Thalidomide caused some 10,000 birth defects, but no deaths.) Around 3,500 users died from isoproterenol, an asthma inhaler. Vioxx was blamed for a similar number of deaths, though the claim was disputed. Most of the more recent incidents would not have been prevented by a stricter approval process.

The death toll from agency delays is much greater. Drug analyst Dale Gieringer explained:  “The benefits of FDA regulation relative to that in foreign countries could reasonably be put at some 5,000 casualties per decade or 10,000 per decade for worst-case scenarios.  In comparison … the cost of FDA delay can be estimated at anywhere from 21,000 to 120,000 lives per decade.”

According to the Competitive Enterprise Institute, among the important medicines delayed were ancrod, beta-blockers, citicoline, ethyol, femara, glucophage, interleukin-2, navelbine, lamictal, omnicath, panorex, photofrin, prostar, rilutek, taxotere, transform, and vasoseal.

Fundamental reform is necessary. The FDA should be limited to assessing safety, with the judgment as to efficacy left to the marketplace. Moreover, the agency should be stripped of its approval monopoly. As a start, drugs approved by other industrialized states should be available in America.

The FDA’s opinion also should be made advisory. Patients and their health care providers could look to private certification organizations, which today are involved in everything from building codes to electrical products to kosher food. Medical organizations already maintain pharmaceutical databases and set standards for treatments with drugs. They could move into drug testing and assessment.

No doubt, some people would make mistakes. But they do so today. With more options more people’s needs would be better met. Often there is no single correct treatment decision. Ultimately the patient’s preference should control.

Congress is arguing over regulatory minutiae when it should be debating the much more basic question: Who should decide who gets treated how? Today the answer is Uncle Sam. Tomorrow the answer should be all of us.

Doug Bandow

Doug Bandow is a senior fellow at the Cato Institute and the author of a number of books on economics and politics. He writes regularly on military non-interventionism.

Health Insurance Is Illegal by Warren C. Gibson

Health insurance is a crime. No, I’m not using a metaphor. I’m not saying it’s a mess, though it certainly is that. I’m saying it’s illegal to offer real health insurance in America. To see why, we need to understand what real insurance is and differentiate that from what we currently have.

Real insurance

Life is risky. When we pool our risks with others through insurance policies, we reduce the financial impact of unforeseen accidents or illness or premature death in return for a premium we willingly pay. I don’t regret the money I’ve spent on auto insurance during my first 55 years of driving, even though I’ve yet to file a claim.

Insurance originated among affinity groups such as churches or labor unions, but now most insurance is provided by large firms with economies of scale, some organized for profit and some not. Through trial and error, these companies have learned to reduce the problems of adverse selection and moral hazard to manageable levels.
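The risk-pooling logic described above can be sketched as a toy premium calculation: each member of the pool pays roughly the expected loss plus a loading for administration and profit. This is a minimal illustration in Python; the loss probability, loss size, and 15 percent loading are assumptions chosen for the example, not figures from the article.

```python
# Illustrative only: all numbers here are assumptions for the sketch.
# Each policyholder faces a small chance of a large, unforeseen loss.
# The break-even premium is the expected loss per member, grossed up
# by a "loading" that covers administration and profit.

def break_even_premium(loss_probability: float, loss_size: float,
                       loading: float = 0.15) -> float:
    """Expected loss per policyholder, plus a proportional loading."""
    return loss_probability * loss_size * (1 + loading)

# A 1-in-200 annual chance of a $100,000 loss, with a 15% loading:
premium = break_even_premium(1 / 200, 100_000)
print(round(premium, 2))  # 575.0
```

The point of the sketch is the one the article makes in prose: a premium priced this way only makes sense for *unforeseen* losses. If the loss is already certain (probability 1), the “premium” is just the loss plus a markup, which is prepayment, not insurance.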

A key word above is unforeseen.

If some circumstance is known, it’s not a risk and therefore cannot be the subject of genuine risk-pooling insurance. That’s why, prior to Obamacare, some insurance companies insisted that applicants share information about their physical condition. Those with preexisting conditions were turned down, invited to high-risk pools, or offered policies with higher premiums and higher deductibles.

Insurers are now forbidden to reject applicants due to preexisting conditions or to charge them higher rates.

They are also forbidden from charging different rates due to different health conditions — and from offering plans that exclude certain coverage items, many of which are not “unforeseen.”

In other words, it’s illegal to offer real health insurance.

Word games

Is all this just semantics? Not at all. What currently passes for health insurance in America is really just prepaid health care — on a kind of all-you-can-consume buffet card. The system is a series of cost-shifting schemes stitched together by various special interests. There is no price transparency. The resulting overconsumption makes premiums skyrocket, and health resources get misallocated relative to genuine wants and needs.

Lessons

The lesson here is that genuine health insurance would offer enormous cost savings to ordinary people and real benefits to policyholders. These plans would encourage thrift and consumer wisdom in health care planning, while discouraging the overconsumption that makes prepaid health care unaffordable.

At this point, critics will object that private health insurance is a market failure because the refusal of unregulated private companies to insure preexisting conditions is a serious problem that can only be remedied by government coercion. The trouble with such claims is that no one knows what a real health insurance market would generate, particularly as the pre-Obamacare regime wasn’t anything close to being free.

What might a real, free-market health plan look like?

  • People would be able to buy less expensive plans from anywhere, particularly across state lines.
  • People would be able to buy catastrophic plans (real insurance) and set aside much more in tax-deferred medical savings accounts to use on out-of-pocket care.
  • People would very likely be able to buy noncancelable, portable policies to cover all unforeseen illnesses over the policyholder’s lifetime.
  • People would be able to leave costly coverage items off their policies — such as chiropractic or mental health — so that they could enjoy more affordable premiums.
  • People would not be encouraged by the tax code to get insurance through their employer.

What about babies born with serious conditions? Parents could buy policies to cover such problems prior to conception. What about parents whose genes predispose them to produce disabled offspring? They might have to pay more.

Of course, there will always be those who cannot or do not, for one reason or another, take such precautions. There is still a huge reservoir of charitable impulses and institutions in this country that could offer assistance. And these civil society organizations would be far more robust in a freer health care market.

The enemy of the good

Are these perfect solutions? By no means. Perfection is not possible, but market solutions compare very favorably to government solutions, especially over longer periods. Obamacare will continue to bring us unaccountable bureaucracies, shortages, rationing, discouraged doctors, and more.

Some imagine that prior to Obamacare, we had a free-market health insurance system, but the system was already severely hobbled by restrictions.

To name a few:

  • It was illegal to offer policies across state lines, which suppressed choices and increased prices, essentially cartelizing health insurance by state.
  • Employers were (and still are) given a tax break for providing health insurance (but not auto insurance) to their employees, reducing the incentive for covered employees to economize on health care while driving up prices for individual buyers. People stayed locked in jobs out of fear of losing health policies.
  • State regulators forbade policies that excluded certain coverage items, even if policyholders were amenable to such plans.
  • Many states made it illegal to price discriminate based on health status.
  • The law forbade associated health plans, which would allow organizations like churches or civic groups to pool risk and offer alternatives.
  • Medicaid and Medicare made up half of the health care system.

Of course, Obamacare fixed none of these problems.

Many voices are calling for the repeal of Obamacare, but few of those voices are offering the only solution that will work in the long term: complete separation of state and health care. That means no insurance regulation, no medical licensing, and ultimately, the abolition of Medicare and Medicaid, which threaten to wash future federal budgets in a sea of red ink.

Meanwhile, anything resembling real health insurance is illegal. And if you tried to offer it, they might throw you in jail.

Warren C. Gibson

Warren Gibson teaches engineering at Santa Clara University and economics at San Jose State University.

Israel Puts Price Controls on Books, Sales Plummet

A lesson on the terrible consequences of price controls comes from Israel this week, the Blaze reports:

A new Israeli law controlling the price of books and mandating guaranteed minimum compensation for writers has had the complete opposite effect of what lawmakers had intended. . . .

Under the new law’s dictates, any new book that’s been on the shelf 18 months or less may not be discounted. During the same time period, Israeli authors are guaranteed to earn a minimum of 8 percent of the price of the first 6,000 books sold and 10 percent of all subsequent books sold, the Jerusalem Post explained last year.
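The mandated royalty floor quoted above is simple tiered arithmetic: at least 8 percent of the price on the first 6,000 copies sold and 10 percent on every copy after that. A minimal sketch, with the book price and sales figure assumed purely for illustration:

```python
# Sketch of the minimum-royalty rule described in the quoted passage:
# authors are guaranteed at least 8% of the book's price on the first
# 6,000 copies sold and 10% on all subsequent copies.
# The price and sales numbers below are assumptions, not reported data.

def minimum_royalty(price: float, copies_sold: int) -> float:
    """Guaranteed minimum author compensation under the Israeli law."""
    first_tier = min(copies_sold, 6_000)            # copies paid at 8%
    second_tier = max(copies_sold - 6_000, 0)       # copies paid at 10%
    return price * (0.08 * first_tier + 0.10 * second_tier)

# A hypothetical $25 debut novel selling 10,000 copies:
print(round(minimum_royalty(25, 10_000), 2))  # 22000.0
```

Combined with the 18-month ban on discounting, this floor is precisely why publishers hesitate on unknown authors: the guaranteed payout scales with the fixed price, and none of the usual levers (sales, giveaways) are available to build an audience.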

The results were swift and predictable:

Publishers told Haaretz that the law “has upset the entire literary food chain” with sales of new book titles down between 40 and 60 percent and down 20 percent for books overall. . . . Booksellers say they’ve experienced a 25 percent drop in children’s book sales in just one year, according to Channel 2.

The combination of price controls on books and minimum wages for authors has had pronounced effects on new, young, and unestablished writers:

Publishers have been hesitant to bank on new writers under the government mandate, because they don’t want to take the financial risk on books they’re not allowed to put on sale. And from a consumer perspective, those looking for new books are less likely to drop some $25 on the debut novel of a writer they’ve never heard of.

“Almost the only way for unknown writers to become popular is to put their first book on sale, even to give it for free if possible, to publicize their name and get their audience and eventually make money from their writing,” [Boaz] Arad said. Thus the new law has been particularly devastating on new authors who can’t get their work to the public.

Arad, chief of the Ayn Rand Center-Israel, said that the parliament blithely ignored the fates of similar laws in Europe, telling the Blaze, “It’s no surprise that we face a book market struggling and suffering and it’s the most unbecoming situation for the ‘People of the Book.’”

Good intentions fail to trump the laws of supply and demand once again.

To “protect” authors, the government has driven off readers.

Read more coverage of the story here.

Anything Peaceful

Anything Peaceful is FEE’s new online ideas marketplace, hosting original and aggregate content from across the Web.

Razing the Bar: The bar exam protects a cartel of lawyers, not their clients by Allen Mendenhall

The bar exam was designed and continues to operate as a mechanism for excluding the lower classes from participation in the legal services market. Elizabeth Olson of the New York Times reports that the bar exam as a professional standard “is facing a new round of scrutiny — not just from the test takers but from law school deans and some state legal establishments.”

This is a welcome development.

Testing what, exactly?

The dean of the University of San Diego School of Law, Stephen C. Ferrulo, complains to the Times that the bar exam “is an unpredictable and unacceptable impediment for accessibility to the legal profession.” Ferrulo is right: the bar exam is a barrier to entry, a form of occupational licensure that restricts access to a particular vocation and reduces market competition.

The bar exam tests the ability to take tests, not the ability to practice law. The best way to learn the legal profession is through tried experience and practical training, which, under our current system, are delayed for years, first by the requirement that would-be lawyers graduate from accredited law schools and second by the bar exam and its accompanying exam for professional fitness.

Freedom of contract

The 19th-century libertarian writer Lysander Spooner, himself a lawyer, opposed occupational licensure as a violation of the freedom of contract, arguing that, once memorialized, all agreements between mutually consenting parties “should not be subjects of legislative caprice or discretion.”

“Men may exercise at discretion their natural rights to enter into all contracts whatsoever that are in their nature obligatory,” he wrote, adding that this principle would prohibit all laws “forbidding men to make contracts by auction without license.”

In more recent decades, Milton Friedman disparaged occupational licensure as “another example of governmentally created and supported monopoly on the state level.” For Friedman, occupational licensure was no small matter. “The overthrow of the medieval guild system,” he said, “was an indispensable early step in the rise of freedom in the Western world. It was a sign of the triumph of liberal ideas. … In more recent decades, there has been a retrogression, an increasing tendency for particular occupations to be restricted to individuals licensed to practice them by the state.”

The bar exam is one of the most notorious examples of this “increasing tendency.”

Protecting lawyers from the poor

The burden of the bar exam falls disproportionately on low-income earners and ethnic minorities who lack the ability to pay for law school or to assume heavy debts to earn a law degree. Passing a bar exam requires expensive bar-exam study courses and exam fees, to say nothing of the costly applications and paperwork that must be completed in order to be eligible to sit for the exam. The average student-loan debt for graduates of many American law schools now exceeds $150,000, while half of all lawyers make less than $62,000 per year, a significant drop since a decade ago.

Recent law-school graduates do not have the privilege of reducing this debt after they receive their diploma; they must first spend three to four months studying for a bar exam and then, having taken the exam, must wait another three to four months for their exam results. More than half a year is lost on spending and waiting rather than earning, or at least earning the salary of a licensed attorney (some graduates work under the direction of lawyers pending the results of their bar exam).

When an individual learns that he or she has passed the bar exam, the congratulations begin with an invitation to pay a licensing fee and, in some states, a fee for a mandatory legal-education course for newly admitted attorneys. These fees must be paid before the individual can begin practicing law.

The exam is working — but for whom?

What’s most disturbing about this system is that it works precisely as it was designed to operate.  State bar associations and bar exams are products of big-city politics during the Progressive Era. Such exams existed long before the Progressive Era — Delaware’s bar exam dates back to 1763 — but not until the Progressive Era were they increasingly formalized and institutionalized and backed by the enforcement power of various states.

Threatened by immigrant workers and entrepreneurs who were determined to earn their way out of poverty and obscurity, lawyers with connections to high-level government officials in their states sought to form guilds to prohibit advertising and contingency fees and other creative methods for gaining clients and driving down the costs of legal services. Establishment lawyers felt the entrepreneurial up-and-comers were demeaning the profession and degrading the reputation of lawyers by transforming the practice of law into a business industry that admitted ethnic minorities and others who lacked rank and class. Implementing the bar exam allowed these lawyers to keep allegedly unsavory people and practices out of the legal community and to maintain the high costs of fees and services.

Protecting the consumer

In light of this ugly history, the paternalistic response of Erica Moeser to the New York Times is particularly disheartening. Moeser is the president of the National Conference of Bar Examiners. She says that the bar exam is “a basic test of fundamentals” that is justified by “protecting the consumer.” But isn’t it the consumer above all who is harmed by the high costs of legal services that are a net result of the bar exam and other anticompetitive practices among lawyers? To ask the question is to answer it. It’s also unclear how memorizing often-archaic rules to prepare for standardized, high-stakes multiple-choice tests that are administered under stressful conditions will in any way improve one’s ability to competently practice law.

The legal community and consumers of legal services would be better served by the apprenticeship model that prevailed long before the rise of the bar exam. Under this model, an aspiring attorney was tutored by experienced lawyers until he or she mastered the basics and demonstrated his or her readiness to represent clients. The high cost of law school was not a precondition; young people spent their most energetic years doing real work and gaining practical knowledge. Developing attorneys had to establish a good reputation and keep their costs and fees to a minimum to attract clients, gain trust, and maintain a living.

The rise in technology and social connectivity in our present era also means that reputation markets have improved since the early 20th century, when consumers would have had a more difficult time learning by word-of-mouth and secondhand report that one lawyer or group of lawyers consistently failed their clients — or ripped them off. Today, with services like Amazon, eBay, Uber, and Airbnb, consumers are accustomed to evaluating products and service providers online and for wide audiences. Learning about lawyers’ professional reputations should be quick and easy, a matter of a simple Internet search. With no bar exam, the sheer ubiquity and immediacy of reputation markets could separate the good lawyers from the bad, thereby transferring the mode of social control from the legal cartel to the consumers themselves.

Criticism of the high costs of legal bills has not gone away in recent years, despite the drop in lawyers’ salaries and the saturation of the legal market with too many attorneys. The quickest and easiest step toward reducing legal costs is to eliminate bar exams. The public would see no marked difference in the quality of legal services if the bar exam were eliminated, because, among other things, the bar exam doesn’t teach or test how to deliver those legal services effectively.

It will take more than just the grumbling of anxious, aspiring attorneys to end bar-exam hazing rituals. That law school deans are realizing the drawbacks of the bar exam is a step in the right direction. But it will require protests from outside the legal community — from the consumers of legal services — to effect any meaningful change.

Allen Mendenhall

Allen Mendenhall is the author of Literature and Liberty: Essays in Libertarian Literary Criticism (Rowman & Littlefield / Lexington Books, 2014). Visit his website at AllenMendenhall.com.

Why Socialism Causes Pollution by THOMAS J. DILORENZO

Corporations are often accused of despoiling the environment in their quest for profit. Free enterprise is supposedly incompatible with environmental preservation, so that government regulation is required.

Such thinking is the basis for current proposals to expand environmental regulation greatly. So many new controls have been proposed and enacted that the late economic journalist Warren Brookes once forecast that the U.S. Environmental Protection Agency (EPA) could well become “the most powerful government agency on earth, involved in massive levels of economic, social, scientific, and political spending and interference.”

But if the profit motive is the primary cause of pollution, one would not expect to find much pollution in socialist countries, such as the former Soviet Union, China, and in the former Communist countries of Eastern and Central Europe. That is, in theory. In reality exactly the opposite is true: The socialist world suffers from the worst pollution on earth. Could it be that free enterprise is not so incompatible with environmental protection after all?

I. Socialist Pollution

The Soviet Union

In the Soviet Union there was a vast body of environmental law and regulation that purportedly protected the public interest, but these constraints had no perceivable benefit. The Soviet Union, like all socialist countries, suffered from a massive “tragedy of the commons,” to borrow the term used by biologist Garrett Hardin in his classic 1968 article. Where property is communally or governmentally owned and treated as a free resource, resources will inevitably be overused with little regard for future consequences.

The Soviet government’s imperatives for economic growth, combined with communal ownership of virtually all property and resources, caused tremendous environmental damage. According to economist Marshall Goldman, who studied and traveled extensively in the Soviet Union, “The attitude that nature is there to be exploited by man is the very essence of the Soviet production ethic.”

A typical example of the environmental damage caused by the Soviet economic system is the exploitation of the Black Sea. To comply with five-year plans for housing and building construction, gravel, sand, and trees around the beaches were used for decades as construction materials. Because there is no private property, “no value is attached to the gravel along the seashore. Since, in effect, it is free, the contractors haul it away.” This practice caused massive beach erosion, which reduced the Black Sea coast by 50 percent between 1920 and 1960. Eventually, hotels, hospitals, and of all things, a military sanitarium collapsed into the sea as the shoreline gave way. Frequent landslides — as many as 300 per year — have been reported.

Water pollution is catastrophic. Effluent from a chemical plant killed almost all the fish in the Oka River in 1965, and similar fish kills have occurred in the Volga, Ob, Yenesei, Ural, and Northern Dvina rivers. Most Russian factories discharge their waste without cleaning it at all. Mines, oil wells, and ships freely dump waste and ballast into any available body of water, since it is all one big (and tragic) “commons.”

Only six of the 20 main cities in Moldavia had a sewer system by the late 1960s, and only two of those cities made any effort to treat the sewage. Conditions are far more primitive in the countryside.

The Aral and Caspian seas have been gradually disappearing as large quantities of their water have been diverted for irrigation. And since untreated sewage flows into feeder rivers, they are also heavily polluted.

Some Soviet authorities expressed fears that by the turn of the century the Aral Sea would be nothing but a salt marsh. One paper reported that because of the rising salt content of the Aral, the remaining fish would rapidly disappear. It was recently revealed that the Aral Sea has shrunk by about a third. Its shoreline “is arid desert and the wind blows dry deposits of salt thousands of miles away. The infant mortality rate [in that region] is four to five times the national average.”

The declining water level in the Caspian Sea has been catastrophic for its fish population as spawning areas have turned into dry land. The sturgeon population has been so decimated that the Soviets have experimented with producing artificial caviar. Hundreds of factories and refineries along the Caspian Sea dump untreated waste into the sea, and major cities routinely dump raw sewage. It has been estimated that one-half of all the discharged effluent is carried in the Volga River, which flows into the Caspian Sea. The concentration of oil in the Volga is so great that steamboats are equipped with signs forbidding passengers to toss cigarettes overboard. As might be expected, fish kills along the Volga are a “common calamity.”

Lake Baikal, which is believed to be the oldest freshwater lake in the world, is also one of the largest and deepest. It is five times as deep as Lake Superior and contains twice the volume of water. According to Marshall Goldman, it was also “the best known example of the misuse of water resources in the USSR.”

Factories and pulp mills have been dumping hundreds of millions of gallons of effluent into Lake Baikal each year for decades. As a result, animal life in the lake has been cut by more than 50 percent over the past half century. Untreated sewage is dumped into virtually all tributaries to the lake.

Islands of alkaline sewage have been observed floating on the lake, including one that was 18 miles long and three miles wide. These “islands” have polluted the air around the lake as well as the water in it. Thousands of acres of forest surrounding the lake have been denuded, causing such erosion that dust storms have been reported. So much forest land in the Lake Baikal region has been destroyed that some observers reported shifting sands that link up with the Gobi Desert; there are fears that the desert may sweep into Siberia and destroy the lake.

In other regions the fact that no compensation has to be paid for land that is flooded by water projects has made it easy for government engineers to submerge large areas of land. “As much land has been lost through flooding and salination as has been added through irrigation and drainage in the Soviet Union.”

These examples of environment degradation in the Soviet Union are not meant to be exhaustive but to illustrate the phenomenon of Communist pollution. As Goldman has observed, the great pollution problems in Russia stem from the fact that the government determined that economic growth was to be pursued at any cost. “Government officials in the USSR generally have a greater willingness to sacrifice their environment than government officials in a society with private enterprise where there is a degree of public accountability. There is virtually a political as well as an economic imperative to devour idle resources in the USSR.”

China

In China, as in Russia, putting the government in charge of resource allocation has not had desirable environmental consequences. Information on the state of China’s environment is not encouraging.

According to the Worldwatch Institute, more than 90 percent of the trees in the pine forests in China’s Sichuan province have died because of air pollution. In Chungking, the biggest city in southwest China, a 4,500-acre forest has been reduced by half. Acid rain has reportedly caused massive crop losses.

There also have been reports of waterworks and landfill projects severely hampering fish migration. Fish breeding was so seriously neglected that fish has largely vanished from the national diet. Depletion of government-owned forests has turned them into deserts, and millions of acres of grazing and farm land in the northern Chinese plains were made alkaline and unproductive during the “Great Leap Forward.”

Central and Eastern Europe

With Communism’s collapse, word has begun to seep out about Eastern Europe’s environmental disasters. According to the United Nations Global Environment Monitoring Program, “pollution in that region is among the worst on the Earth’s surface.” Jeffrey Leonard of the World Wildlife Fund concluded that “pollution was part and parcel of the system that molested the people [of Eastern Europe] in their daily lives.” Evidence is mounting of “an environmental nightmare,” the legacy of “decades of industrial development with little or no environmental control.”

Poland

According to the Polish Academy of Sciences, “a third of the nation’s 38 million people live in areas of ecological disaster.” In the heavily industrialized Katowice region of Poland, the people suffer 15 percent more circulatory disease, 30 percent more tumors, and 47 percent more respiratory disease than other Poles. Physicians and scientists believe pollution is a major contributor to these health problems.

Acid rain has so corroded railroad tracks that trains are not allowed to exceed 24 miles an hour. The air is so polluted in Katowice that there are underground “clinics” in uranium mines where the chronically ill can go to breathe clean air.

Continuous pumping of water from coal mines has caused so much land to subside that over 300,000 apartments were destroyed as buildings collapsed. The mine sludge has been pumped into rivers and streams along with untreated sewage, which has made 95 percent of the water unfit for human consumption. More than 65 percent of the nation’s water is even unfit for industrial use because it is so toxic that it would destroy heavy metals used by industry. In Cracow, Poland’s ancient capital, acid rain “dissolved so much of the gold roof of the 16th century Sigismund Chapel that it recently had to be replaced.”

Industrial dust rains down on towns, depositing cadmium, lead, zinc, and iron. The dust is so heavy that huge trucks drive through city streets daily spraying water to reduce it. By some accounts eight tons of dust fall on each square mile in and around Cracow each year. The mayor of Cracow recently stated that the Vistula River — the largest river in Poland — is “nothing but a sewage canal.” The river has mercury levels that are three times what researchers say is safe, while lead levels are 25 times higher than deemed safe.

Half of Poland’s cities, including Warsaw, don’t even treat their wastes, and 41 animal species have reportedly become extinct in Poland in recent years. While health statistics are spotty — they were not a priority of the Communist government — available data are alarming. A recent study of the Katowice region found that 21 percent of the children up to 4 years old are sick almost constantly, while 41 percent of the children under 6 have serious health problems.

Life expectancy for men is lower than it was 20 years ago. In Upper Silesia, which is considered one of the most heavily industrialized regions in the world, circulatory disease levels are 15 percent higher, respiratory disease is 47 percent higher, and there has been “an appalling increase in the number of retarded children,” according to the Polish Academy of Sciences. Although pollution cannot be blamed for all these health problems, physicians and scientists attach much of the blame to this source.

Czechoslovakia

In a speech given on New Year’s Day of 1990, Czechoslovakian President Vaclav Havel said, “We have laid waste to our soil and the rivers and the forests…and we have the worst environment in the whole of Europe today.” He was not exaggerating, although the competition for the title of “worst environment” is clearly fierce. Sulfur dioxide concentrations in Czechoslovakia are eight times higher than in the United States, and “half the forests are dead or dying.”

Because of the overuse of fertilizers, farmland in some areas of Czechoslovakia is toxic to more than one foot in depth. In Bohemia, in northwestern Czechoslovakia, hills stand bare because their vegetation has died in air so foul it can be tasted. One report describes the Czech countryside as a place where “barren plateaus stretch for miles, studded with the stumps and skeletons of pine trees. Under the snow lie thousands of acres of poisoned ground, where for centuries thick forests had grown.” There is a stretch of over 350 miles where more than 300,000 acres of forest have disappeared and the remaining trees are dying. A thick, brown haze hangs over much of northern Czechoslovakia for about eight months of the year. Sometimes it takes on the sting of tear gas, according to local officials. There are environmental laws, but they aren’t enforced. Sulfur in the air has been reported at 20 times the permissible level. Soil in some regions is so acidic that aluminum trapped in the clay is released. Scientists discovered that the aluminum has poisoned groundwater, killing tree and plant roots and filtering into the drinking water.

Severe erosion in the decimated forests has caused spring floods in which all the melted snow cascades down mountainsides in a few weeks, causing further erosion and leading to water shortages in the summer.

In its search for coal, the Communist government has used bulldozers on such a massive scale that they have “turned towns, farms and woodlands into coarse brown deserts and gaping hollows.” Because open-pit mining is cheaper than underground mining, and has been practiced extensively, in some areas of Czechoslovakia “you have total devastation of the land.”

East Germany

The new German government has claimed that nearly 40 percent of the East German populace suffers ill effects from pollutants in the air. In Leipzig, half the children are treated each year for illnesses believed to be associated with air pollution. Eighty percent of eastern Germany’s surface waters are classified as unsuitable for fishing, sports, or drinking, and one out of three lakes has been declared biologically dead because of decades of untreated dumping of chemical waste.

Much of the East German landscape has been devastated. Fifteen to 20 percent of its forests are dead, and another 40 percent are said to be dying. Between 1960 and 1980 at least 70 villages were destroyed and their inhabitants uprooted by the government, which wanted to mine high-sulfur brown coal. The countryside is now “pitted with moon-like craters” and “laced with the remains of what were once spruce and pine trees, nestled amid clouds of rancid smog.” The air in some cities is so polluted that residents use their car headlights during the day, and visitors have been known to vomit from breathing the air.

Nearly identical problems exist in Bulgaria, Hungary, Romania, and Yugoslavia.

Visiting scientists have concluded that pollution in Central and Eastern Europe “is more dangerous and widespread than anything they have seen in the Western industrial nations.”

II. United States: Public Sector Pollution

The last refuge of those who advocate socialistic solutions to environmental pollution is the claim that it is the lack of democratic processes that prevents the Communist nations from truly serving the public interest. If this theory is correct, then the public sector of an established democracy such as the United States should be one of the best examples of environmental responsibility. But U.S. government agencies are among the most cavalier when it comes to environmental stewardship.

There is much evidence to dispute the theory that only private businesses pollute. In the United States, we need look no further than our own government agencies. These public sector institutions, such as the Department of Defense (DOD), are among the worst offenders. DOD now generates more than 400,000 tons of hazardous waste a year — more than is produced by the five largest chemical companies combined. To make matters worse, the Environmental Protection Agency lacks the enforcement power over the public sector that it possesses over the private sector.

The lax situation uncovered by the General Accounting Office (GAO) at Tinker Air Force Base in Oklahoma is typical of the way in which many Federal agencies respond to the EPA’s directives. “Although DOD policy calls for the military services to … implement EPA’s hazardous waste management regulations, we found that Tinker has been selling…waste oil, fuels, and solvents rather than recycling,” reported the GAO.

One of the world’s most poisonous spots lies about 10 miles northeast of Denver in the Army’s Rocky Mountain Arsenal. Nerve gas, mustard shells, the anti-crop spray TX, and incendiary devices have been dumped into pits there over the past 40 years. Dealing with only one “basin” of this dump cost $40 million. Six hundred thousand cubic yards of contaminated soil and sludge had to be scraped and entombed in a 16-acre, double-lined waste pile.

There are plenty of other examples of Defense Department facilities that need major cleanup. In fact, the total costs of a long-term Pentagon cleanup are hard to get a handle on. Some officials have conceded that the price tag could eventually exceed $20 billion.

Government-owned power plants are another example of public-sector pollution. These plants are a large source of sulfur dioxide emissions. The federal government’s Tennessee Valley Authority operates 59 coal-fired power plants in the Southeast, where it has had major legal confrontations with state governments that want the Federal agency to comply with state environmental regulations. The TVA has fought the state governments for years over compliance with their clean air standards. It won a major Supreme Court victory when the Court ruled that, as a federal government enterprise, it could be exempt from environmental regulations with which private sector and local government power plants must comply.

Federal agricultural policy also has been a large source of pollution, in the past encouraging overutilization of land subject to erosion. Powerful farm lobbies have protected “non-point” sources of pollution from the heavy hand of regulation placed on other private industries.

III. Policy Implications

These examples of environmental degradation throughout the world suggest some valuable lessons. First, it is not free enterprise per se that causes environmental harm; if it were, the socialist world would be environmentally pristine.

The heart of the problem lies with the failure of our legal institutions, not the free enterprise system. Specifically, American laws were weakened more than a century ago by Progressive Era courts that believed economic progress was in the public interest and should therefore supersede individual rights.

The English common law tradition of the protection of private property rights — including the right to be free from pollution — was slowly overturned. In other words, many environmental problems are not caused by “market failure” but by government’s failure to enforce property rights. It is a travesty of justice when downstream residents, for example, cannot hold an upstream polluter responsible for damaging their properties. The common law tradition must be revived if we are to enjoy a healthy market economy and a cleaner environment. Potential polluters must know in advance that they will be held responsible for their actions.

The second lesson is that the plundering of the environment in the socialist world is a grand example of the tragedy of the commons. Under communal property ownership, where no one owns or is responsible for a natural resource, the inclination is for each individual to abuse or deplete the resource before someone else does. Common examples of this “tragedy” are how people litter public streets and parks much more than their own yards; how private housing is much better maintained than public housing; how ranchers overgraze public lands but maintain lush pastures on their own property; how the national forests are carelessly over-logged, but private forests are carefully managed and reforested by lumber companies with “super trees”; and how game fish are habitually overfished in public waterways but thrive in private lakes and streams. The tragedy of the commons is a lesson for those who believe that further nationalization and governmental control of natural resources is a solution to our environmental problems.

These two pillars of free enterprise — sound liability laws that hold people responsible for their actions and the enforcement of private property rights — are important stepping stones to environmental protection.

ABOUT THOMAS J. DILORENZO

EDITOR’S NOTE: The featured image is courtesy of FEE and Shutterstock.

Do You Have the Civil Disobedience App?

You might be downloading tomorrow’s law by MAX BORDERS…

If the injustice is part of the necessary friction of the machine of government, let it go, let it go: perchance it will wear smooth — certainly the machine will wear out… but if it is of such a nature that it requires you to be the agent of injustice to another, then I say, break the law. Let your life be a counter-friction to stop the machine. What I have to do is to see, at any rate, that I do not lend myself to the wrong which I condemn. 

 Henry David Thoreau

In the peer-to-peer revolution, the most important elections will happen outside the voting booth. And the most important laws won’t be written by lawmakers.

Consider this: The first time you hopped into a Lyft or an Uber, there was probably, at the very least, a legal gray area associated with that trip. And yet, in your bones, didn’t you think that what you were doing was just, even if it wasn’t yet clearly legal?

If you felt that way, I suspect you weren’t alone.

Today, ridesharing apps are operating in most major cities around the country. And municipalities are having to play catch-up because the people have built massive constituencies around these new services.

This is just one example of what Yale political scientist James C. Scott calls “Irish democracy,” where people simply stop paying attention to some rule (or ruler) because it has outlived its usefulness.

One need not have an actual conspiracy to achieve the practical effects of a conspiracy. More regimes have been brought, piecemeal, to their knees by what was once called “Irish Democracy,” the silent, dogged resistance, withdrawal, and truculence of millions of ordinary people, than by revolutionary vanguards or rioting mobs.

Now, let’s be clear: the right rules are good things. Laws are like our social operating system, and we need them. But we don’t need all of them, much less all of them to stick around forever. And like our operating systems, our laws need updating. Shouldn’t legal updates happen not by waiting around on politicians but in real time?

“But Max,” you might be thinking. “What about the rule of law? You have to change the law through legitimate processes.”

And that’s not unreasonable. After all, we don’t want mob rule, and we don’t want just anyone to be able to change the law willy-nilly — especially those laws that cover our basic rights and freedoms. There is an important distinction, however, between justice and law, one that’s never easy to unpack. But Henry David Thoreau said it well, when he wrote,

Unjust laws exist; shall we be content to obey them, or shall we endeavor to amend them, and obey them until we have succeeded, or shall we transgress them at once? Men generally, under such a government as this, think that they ought to wait until they have persuaded the majority to alter them. They think that, if they should resist, the remedy would be worse than the evil. But it is the fault of the government itself that the remedy is worse than the evil. It makes it worse. Why is it not more apt to anticipate and provide for reform? Why does it not cherish its wise minority? Why does it cry and resist before it is hurt? Why does it not encourage its citizens to be on the alert to point out its faults, and do better than it would have them?

Today’s peer-to-peer civil disobedience is tomorrow’s emergent law.

In other words, the way the best law has always come about is not through a few wise rulers getting together and writing up statutes; rather, it emerges among people interacting with each other and wanting to avoid conflict. When peaceful people are engaging in peaceful activity, they want to keep it that way. And when people find new and creative ways to interact peacefully, old laws can be obstructions.

So as we engage in peer-to-peer civil disobedience, we are making choices that are leading to the emergence of new law, however slowly and clumsily it follows on. This is a beautiful process, because it requires not the permission of rulers, but rather the assent of peer communities. It is rather like democracy on steroids, except we don’t have to send our prayers up through the voting booth in November.

Legal theorist Bruce Benson calls this future law the “Law Merchant.” He describes matters thus:

A Law Merchant evolves whenever commerce emerges. Practices that facilitated emergence of commerce in medieval Europe were replayed in colonial America, and they are being replayed in Eastern Europe, Eastern Asia, Latin America, and cyberspace. Law Merchant arrangements also support “underground” economic activity when states constrain above-ground market development.

It might be a while before we evolve away from our outmoded system of sending politicians to capitals to make statutes. And the issue of lawmakers playing catch-up with emergent systems may be awkward and kludgy for a while. But when we think that the purpose of law is to help people interact peacefully, peer-to-peer civil disobedience might be a necessary ingredient in reweaving the law for the sake of human flourishing.

ABOUT MAX BORDERS

Max Borders is the editor of The Freeman and director of content for FEE. He is also cofounder of the event experience Voice & Exit and author of Superwealth: Why we should stop worrying about the gap between rich and poor.

The Garage That Couldn’t Be Boarded Up

Uber and the jitney … everything old is new again by SARAH SKWIRE

August Wilson. Jitney. 1979.

Last December, I used Uber for the first time. I downloaded the app onto my phone, entered my name, location, and credit card number, and told them where my daughters and I needed to go. The driver picked us up at my home five minutes later. I was able to access reviews that other riders had written for the same driver, to see a photograph of him and of the car that he would be using to pick me up, and to pay and tip him without juggling cash and credit cards and my two kids. Like nearly everyone else I know, I instantly became a fan of this fantastic new invention.

In January, I read Thomas Sowell’s Knowledge and Decisions for the first time. In chapter 8, Sowell discusses the early 20th-century rise of “owner operated bus or taxi services costing five cents and therefore called ‘jitneys,’ the current slang for nickels.” Sowell takes his fuller description of jitneys from transportation economist George W. Hilton’s “American Transportation Planning.”

The jitneys … essentially provided a competitive market in urban transportation with the usual characteristics of rapid entry and exit, quick adaptation to changes in demand, and, in particular,  excellent adaptation to peak load demands. Some 60 percent of the jitneymen were part-time operators, many of whom simply carried passengers for a nickel on trips between home and work.

It sounded strangely familiar.

In February, I read August Wilson’s play, Jitney, written in 1979, about a jitney car service operating in Pittsburgh in the 1970s. As we watch the individual drivers deal with their often tumultuous personal relationships, we also hear about their passengers. The jitney drivers take people to work, to the grocery store, to the pawnshop, to the bus station, and on a host of other unspecified errands. They are an integral part of the community. Like the drivers in Sean Malone’s documentary No Van’s Land, they provide targeted transportation services to a neighborhood underserved by public transportation. We see the drivers in Jitney take pride in the way they fit into and take care of their community.

If we gonna be running jitneys out of here we gonna do it right.… I want all the cars inspected. The people got a right if you hauling them around in your car to expect the brakes to work. Clean out your trunk. Clean out the interior of your car. Keep your car clean. The people want to ride in a clean car. We providing a service to the community. We ain’t just giving rides to people. We providing a service.

That service is threatened when the urban planners and improvers at the Pittsburgh Renewal Council decide to board up the garage out of which the jitney service operates and much of the surrounding neighborhood. The drivers are skeptical that the improvements will ever really happen.

Turnbo: They supposed to build a new hospital down there on Logan Street. They been talking about that for the longest while. They supposed to build another part of the Irene Kaufman Settlement House to replace the part they tore down. They supposed to build some houses down on Dinwidee.

Becker: Turnbo’s right. They supposed to build some houses but you ain’t gonna see that. You ain’t gonna see nothing but the tear-down. That’s all I ever seen.

The drivers resolve, in the end, to call a lawyer and refuse to be boarded up. “We gonna run jitneys out of here till the day before the bulldozer come. Ain’t gonna be no boarding up around here! We gonna fight them on that.” They know that continuing to operate will allow other neighborhood businesses to stay open as well. They know that the choice they are offered is not between an improved neighborhood and an unimproved one, but between an unimproved neighborhood and no neighborhood at all. They know that their jitney service keeps their neighborhood running and that it improves the lives of their friends and neighbors in a way that boarded up buildings and perpetually incomplete urban planning projects never will.

Reading Sowell’s book and Wilson’s play in such close proximity got me thinking. Uber isn’t a fantastic new idea. It’s a fantastic old idea that has returned because the omnipresence of smartphones has made running a jitney service easier and more effective. Uber drivers and other ride-sharing services, as we have all read and as No Van’s Land demonstrates so effectively, are subject to protests and interference by competitors, to punitive regulation from local governments, and to a host of other challenges to their enterprise. This push back is nothing new. Sowell notes, “The jitneys were put down in every American city to protect the street railways and, in particular, to perpetuate the cross-subsidization of the street railways’ city-wide fare structures.”

Despite these common problems, Uber and other 21st-century jitney drivers do not face the major challenge that the drivers in Jitney do. They do not need to operate from a centralized location with a phone. Now that we all have phones in our pockets, the Uber “garage” is everywhere. It can’t be boarded up.

ABOUT SARAH SKWIRE

 Sarah Skwire is a fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Buffaloed by Obamacare’s Hidden Taxes

Obamacare’s costs are starting to show by D.W. MACKENZIE:

Someone at Buffalo Wild Wings decided to make the costs of the so-called Affordable Care Act (ACA) explicit in the restaurant’s register receipts. An estimated ACA cost of 2 percent was charged to each paying customer.

BW3’s customers complained. Apparently, they’d rather keep these costs hidden. But hiding costs won’t make Obamacare’s higher prices go away.

Adding the cost of a specific government program to a receipt is unusual. Normally, the only tax itemized on register tapes is sales tax — but these are a fraction of the true costs of governmental activity. There are, in fact, too many different government programs to list on each register receipt. Because the price of regulation is usually built into the prices of goods and services, we tend to pay for regulatory costs unwittingly.

Obama’s “Affordable” Care Act imposes regulations, taxes, and subsidies as a means of income redistribution. As usual, the goal is to tax and regulate higher-income people to subsidize those with lower incomes — but that’s never the way things work out.

Real people do not simply pay taxes and regulatory costs as required by written laws. Everyone tries to avoid taxes by whatever means are available. Tax avoidance usually stems from bargaining over prices in markets. Sellers push for higher prices, and buyers push for lower prices.

Sellers have costs to cover: labor, capital, and taxes. It is a simple fact of economics that when an entrepreneur’s taxes rise, he or she will pass part of that additional cost on to customers.

Regulations are de facto taxes. There is no economic difference between taxing money from someone to fund some activity and imposing a regulatory requirement to achieve the same goal. The ACA is a complex set of taxes.

How do entrepreneurs respond to ACA taxes? The same way they respond to all taxes, explicit or regulatory: by raising the price of whatever they sell.

There is an inescapable fact of taxation: tax burdens are always shared. Taxes charged to upper-income earners for redistribution are in some measure always redistributed to those with lower incomes through price increases.

While ACA benefits have been touted as “free” to lower-income recipients, this proposition is false — and impossible. Somebody always pays for insurance, or any other good. Goods that seem to be paid for by government only appear to be free because their costs are hidden or obscured. Costs of government programs, like the ACA, are just added into the total costs of taxation, and the costs of taxation are partly factored into the prices of all goods.

Taxpayers cannot buy the same amount of goods when final tax-adjusted prices go up. Economists call the effect of taxes on consumer purchases the tax wedge, because taxes drive a wedge between what consumers pay and what entrepreneurs receive. Taxes make goods more expensive for consumers and less profitable for entrepreneurs.
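The shared burden described above can be made concrete with a toy calculation. The numbers below are illustrative assumptions only (not actual ACA figures); in particular, the 60/40 split of the burden between buyers and sellers is simply assumed for the sake of the example, since the real split depends on how sensitive buyers and sellers are to price.

```python
# Toy tax-wedge calculation with made-up numbers (not actual ACA figures).
# Before the tax, meals sell for $10.00. A $0.20 per-meal tax is imposed.
# We assume, purely for illustration, that buyers bear 60% of the tax
# and sellers bear the remaining 40%.

pre_tax_price = 10.00
tax_per_meal = 0.20
buyer_share = 0.60  # assumed incidence split

consumer_price = pre_tax_price + buyer_share * tax_per_meal  # what buyers now pay
seller_receives = consumer_price - tax_per_meal              # what sellers now keep
wedge = consumer_price - seller_receives                     # the tax wedge

print(f"Consumer pays:   ${consumer_price:.2f}")   # $10.12
print(f"Seller receives: ${seller_receives:.2f}")  # $9.92
print(f"Tax wedge:       ${wedge:.2f}")            # $0.20
```

Whatever the split, the wedge itself always equals the full tax: the consumer pays more than before, the seller keeps less than before, and the difference goes to the government.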

The explicit 2 percent ACA surcharge at Buffalo Wild Wings may or may not have been intended as permanent. The restaurant chain’s executives have already cancelled the policy after customers reacted negatively. But there is a lesson to be learned from the surcharge. Government programs have the superficial appearance of being free, but they never are.

Government’s lack of financial transparency often leads to an ironic outcome: things that appear to be government gifts end up costing more. Why? Because the public sector’s hidden costs mean less cost control in the public sector.

Private enterprises make costs clear with prices. Prices don’t itemize each cost, but because costs are more easily perceived in the private sector, people make greater efforts to control costs. Some find the explicit nature of costs in the private sector unpleasant. Conversely, the fantasy of a free lunch from the state does have a certain emotional appeal. But the inability of most people to perceive the costs of government makes it almost certain that these costs will be higher, compared to the efficiency the private sector can achieve.

As Buffalo Wild Wings made clear, the ACA is just another example of a government program that makes a false promise of free benefits. Rational economic analysis tells us that there ain’t no such thing as a free lunch, yet politicians continue to use that fantasy for political gain.

Let’s abandon the myth of gifts from government. Every action has an economic cost, public or private.

ABOUT D.W. MACKENZIE

D. W. MacKenzie is an assistant professor of economics at Carroll College in Helena, Montana.

8 Goofs in Jonathan Gruber’s Health Care Reform Book

This Obamacare architect’s propaganda piece is a comic of errors by MATT PALUMBO:

In one of life’s bitter ironies, I recently found a book by Jonathan Gruber in the bin of a bookstore’s going-out-of-business sale. It’s called Health Care Reform: What It Is, Why It’s Necessary, How It Works. Interestingly, the book is a comic, which made it a quick read. It’s just the sort of thing that omniscient academics write to persuade ordinary people that their big plans are worth pursuing.

Health Care Reform: What It Is, Why It’s Necessary, How It Works

In case you’ve forgotten — and to compound the irony — Gruber is the Obamacare architect who received negative media attention recently for some controversial comments about the stupidity of the average American voter. In Health Care Reform, Gruber focuses mainly on two topics: an attempted diagnosis of the American health care system, and how the Affordable Care Act (the ACA, or Obamacare) will solve them. I could write a PhD thesis on the myriad fallacies, half-truths, and myths propounded throughout the book. But instead, let’s explore eight of Gruber’s major errors.

Error 1: The mandate forcing individuals to buy health insurance is just like forcing people to buy car insurance, which nobody questions.

This is a disanalogy — and an important one. A person has to purchase car insurance only if he or she gets a car. The individual health insurance mandate forces one to purchase health insurance no matter what. Moreover, what all states but three require for cars is liability insurance, which covers accidents that cause property damage and/or bodily injury. Technically speaking, you’re only required to have insurance to cover damages you might impose on others. If an accident is my fault, liability insurance covers the other individual’s expenses, not my own, and vice versa.

By contrast, if the other driver and I each had collision insurance, we would both be covered for vehicle damage regardless of who was at fault. If collision insurance were mandated, the comparison to health insurance might be apt, because, as with health insurance, collision covers damage to oneself. But no states require collision insurance.

Gruber wants to compare health insurance to car insurance primarily because (1) he wants you to find the mandate unobjectionable, and (2) he wants you to think of the young uninsured (those out of the risk pool) as being sort of like uninsured drivers — people who impose costs on others due to accidents.

But not only is the comparison inapt, Gruber’s real goal is to transfer resources from those least likely to need care (younger, poorer people) to those most likely to need care (older, richer people). The only way mandating health insurance could be like mandating liability car insurance is in preventing the uninsured from shifting the costs of emergent care thanks to federal law. We’ll discuss that as a separate error, next.

Error 2: The emergency room loophole is responsible for increases in health insurance premiums.

In 1986, Reagan passed the Emergency Medical Treatment and Active Labor Act, one provision of which was that hospitals couldn’t reject emergency care to anyone regardless of their ability to pay. This act created the “emergency room loophole,” which allows many uninsured individuals to receive care without paying.

The emergency room loophole does, indeed, increase premiums. There is no free lunch. The uninsured who use emergency rooms can’t pay the bills, and the costs are thus passed on to the insured. So why do I consider this point an error? Because Gruber overstates its role in increasing premiums. “Ever wonder why your insurance premiums keep going up?” he asks rhetorically, as if this loophole is among the primary reasons for premium inflation.

The reality is, spending on emergency rooms (for both the uninsured and the insured) accounts for only roughly 2 percent of all health care spending. Claiming that health insurance premiums keep rising due to something that accounts for 2 percent of health care expenses is like attributing the high price of Starbucks drinks to the cost of their paper cups.

Error 3: Medical bills are the No. 1 cause of individual bankruptcies.

Gruber doesn’t include a single reference in the book, so it’s hard to know where he’s getting his information. Those lamenting the problem of medical bankruptcy almost always rely on a 2007 study conducted by David Himmelstein, Elizabeth Warren, and two other researchers. The authors offered the shocking conclusion that 62 percent of all bankruptcies are due to medical costs.

But in the same study, the authors also claimed that 78 percent of those who went bankrupt actually had insurance, so it would be strange for Gruber to claim the ACA would solve this problem. While it would be unfair to conclude definitively that Gruber relied on this study for his uncited claims, it is one of the only studies I am aware of that could support his claim.

More troublingly, perhaps, a bankruptcy study by the Department of Justice — which had a sample size five times larger than Himmelstein and Warren’s study — found that 54 percent of bankruptcies have no medical debt, and 90 percent have debt under $5,000. A handful of studies that contradict Himmelstein and Warren’s findings include studies by Aparna Mathur at the American Enterprise Institute; David Dranove and Michael Millenson of Northwestern University; Scott Fay, Erik Hurst, and Michelle White (at the universities of Florida, Chicago, and San Diego, respectively); and David Gross of Compass Lexecon and Nicholas Souleles of the University of Pennsylvania.

Why are Himmelstein and Warren’s findings so radically different? Aside from the fact that their study was funded by an organization called Physicians for a National Health Program, the study was incredibly liberal about what it defined as a medical bankruptcy. The study considered any bankruptcy with any amount of medical debt as a medical bankruptcy. Declare bankruptcy with $100,000 in credit card debt and $5 in medical debt? That’s a medical bankruptcy, of course. In fact, only 27 percent of those surveyed in the study had unreimbursed medical debt exceeding $1,000 in the two years prior to declaring bankruptcy.

David Dranove and Michael L. Millenson at the Kellogg School of Management reexamined the Himmelstein and Warren study and could only find a causal relationship between medical bills and bankruptcy in 17 percent of the cases surveyed. By contrast, in Canada’s socialized medical system, the percentage of bankruptcies due to medical expenses is estimated at between 7.1 percent and 14.3 percent. One wonders if the Himmelstein and Warren study was designed to generate a narrative that self-insurance (going uninsured) causes widespread bankruptcy.

Error 4: 20,000 people die each year because they don’t have the insurance to pay for treatment.

If the study this estimate was based on were a person, it could legally buy a beer at a bar. Twenty-one years ago, the American Medical Association released a study estimating the mortality rate of the uninsured to be 25 percent higher than that of the insured. Estimates of how many die each year due to a lack of insurance are thus produced by taking the number of uninsured and extrapolating how many of them would die in a given year, given that they’re 25 percent more likely to die than an insured person.
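The extrapolation behind such estimates can be sketched in a few lines. Every number below is a hypothetical placeholder chosen for round arithmetic, not a figure from the AMA study itself.

```python
# Sketch of the extrapolation behind "deaths due to uninsurance" estimates.
# All inputs are hypothetical placeholders, not the AMA study's actual numbers.

uninsured = 45_000_000       # assumed uninsured population
baseline_mortality = 0.001   # assumed annual death rate if insured (0.1%)
relative_risk = 1.25         # uninsured assumed 25% more likely to die

# Deaths expected among this group if they were insured vs. as uninsured.
expected_if_insured = uninsured * baseline_mortality
expected_uninsured = uninsured * baseline_mortality * relative_risk

# The difference is the number of deaths attributed to lacking insurance.
excess_deaths = expected_uninsured - expected_if_insured

print(f"Excess deaths attributed to uninsurance: {excess_deaths:,.0f}")  # 11,250
```

Notice how sensitive the result is to the inputs: the whole estimate scales linearly with the assumed baseline mortality rate and with the 25 percent relative-risk figure, which is exactly why confounders like income and employment status matter so much.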

Even assuming that the 25 percent statistic holds true today, not all insurance is equal. As Gruber notes on page 74 of his book, the ACA is the biggest expansion of public insurance since the creation of Medicare and Medicaid in 1965, as 11 million Americans will be added to Medicaid because of the ACA. So how does the health of the uninsured compare with those on Medicaid? Quite similarly. As indicated by the results from a two-year study in Oregon that looked at the health outcomes of previously uninsured individuals who gained access to Medicaid, Medicaid “generated no significant improvement in measured physical health outcomes.” Medicaid is more of a financial cushion than anything else.

So with our faith in the AMA study intact, all that would happen is a shift in deaths from the “uninsured” to the “publicly insured.” But the figure is still dubious at best. Those who are uninsured could also suffer from various mortality-increasing traits that the insured lack. Megan McArdle elaborates on these lurking third variables:

Some of the differences we know about: the uninsured are poorer, more likely to be unemployed or marginally employed, and to be single, and to be immigrants, and so forth. And being poor, and unemployed, and from another country, are all themselves correlated with dying sooner.

Error 5: The largest uninsured group is the working poor.

Before Obamacare, had you ever heard that there are 45 million uninsured Americans? It’s baloney. In 2006, 17 million of the uninsured had incomes above $50,000 a year, and eight million of those earned more than $75,000 a year. According to one estimate from 2009, between 12 million and 14 million were eligible for government assistance but simply hadn’t signed up. Another estimate from the same source notes that between 9 million and 10 million of the uninsured are not American citizens. According to the Centers for Disease Control and Prevention, slightly fewer than 8 million of the uninsured are aged 18–24, the group that requires the least amount of medical care and has an average annual income of slightly more than $30,000.

Thus, the largest group of uninsured is not the working poor. It’s the middle class, upper middle class, illegal immigrants, and the young. The working poor who are uninsured are often eligible for assistance but don’t take advantage of it. I recognize that some of these numbers may seem somewhat outdated (the sources for all of them can be found here), but remember: we’re taking account of the erroneous ways Gruber and Obamacare advocates sold the ACA to “stupid” Americans.

Error 6: The ACA will have no impact on premiums in the short term, according to the CBO.

Interesting that there’s no mention of what will happen in the long run. Regardless, not only have there already been premium increases, but one widely reported consequence of the ACA has been increases in deductibles. If I told you that I could offer you an insurance plan for a dollar a year, it would seem like a great deal. If I offered you a plan for a dollar a year with a $1 million deductible, you might not think it’s such a great deal.

A report from PricewaterhouseCoopers’ Health Research Institute found that the average cost of a plan sold on the ACA’s exchanges was 4 percent less than the average for an employer-provided plan with similar benefits ($5,844 vs. $6,119), but the deductibles for the ACA plans were 42 percent higher ($5,081 vs. $3,589). The ACA is thus able to swap one form of sticker shock (high premiums) for another (high deductibles). Let us not forget that the ACA exchanges receive federal subsidies. Someone has to pay for those, too.
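The trade-off in the PwC figures is easy to make concrete. Using only the numbers quoted above, a plan with the lower premium can still carry the higher worst-case annual cost once the deductible is counted:

```python
# Premium and deductible figures as quoted from the PwC report above.
aca_premium, aca_deductible = 5_844, 5_081
employer_premium, employer_deductible = 6_119, 3_589

# Worst-case annual exposure: pay the full premium and hit the deductible.
aca_worst_case = aca_premium + aca_deductible
employer_worst_case = employer_premium + employer_deductible

print(f"ACA exchange plan: ${aca_worst_case:,}")      # $10,925
print(f"Employer plan:     ${employer_worst_case:,}")  # $9,708
```

Despite the 4 percent lower premium, the exchange plan’s worst-case exposure comes out more than $1,200 higher, which is the sticker-shock swap the paragraph describes.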

Error 7: A pay-for-performance model in health care would increase quality and reduce costs.

This proposal seems like common sense in theory, but it’s questionable in reality. Many conservatives and libertarians want a similar model for education, so some might be sympathetic to this aspect of Gruber’s proposal. But there is enormous difficulty in determining how we are to rank doctors.

People respond to incentives, but sometimes these incentives are perverse. Take the example of New York, which introduced a system of “scorecards” to rank cardiologists by the mortality rates of their patients who received coronary angioplasty, a procedure to treat heart disease. Doctors paid attention to their scorecards, and they obviously could increase their ratings by performing more effective surgeries. But as Charles Wheelan noted in his book Naked Statistics, there was another way to improve your scorecard: refuse surgeries on the sickest patients, or in other words, those most likely to die even with care. Wheelan cites a survey of cardiologists regarding the scorecards, where 83 percent stated that due to public mortality statistics, “some patients who might benefit from angioplasty might not receive the procedure.”

Error 8: The ACA “allows you to keep your current policy if you like it… even if it doesn’t meet minimum standards.”

What, does this guy think we’re stupid or something?

The Case Against Rent Control: Bad housing policy harms lower-income people most by Robert P. Murphy

To someone ignorant of economic reasoning, rent control seems like a great policy. It appears instantly to provide “affordable housing” to poor tenants, while the only apparent downside is a reduction in the income flowing to the fat-cat landlords, people who literally own buildings in major cities and who thus aren’t going to miss that money much. Who could object to such a policy?

First, we should define our terms. When a city government imposes rent control, it means the city makes it illegal for landlords to charge tenants rent above a ceiling price. Sometimes that price can vary, but only on specified factors. For the law to have any teeth — and for the politicians who passed it to curry favor with the public — the maximum rent-controlled price will be significantly lower than the free-market price.

The most obvious problem is that rent control immediately leads to a shortage of apartments, meaning that there are potential tenants who would love to move into a new place at the going (rent-controlled) rate, but they can’t find any vacancies. At a lower rental price, more tenants will try to rent apartment units, and at a higher rental price, landlords will try to rent out more apartment units. These two claims are specific instances of the law of demand and law of supply, respectively.

In an unhampered market, the equilibrium rental price occurs where supply equals demand, and the market rate for an apartment perfectly matches tenants with available units. If the government disrupts this equilibrium by setting a ceiling far below the market-clearing price, then it creates a shortage; that is, more people want to rent apartment units than landlords want to provide. If you’ve lived in a big city, you may have experienced firsthand how difficult it is to move into a new apartment; guides advise people to pay the high fee to a broker or even join a church because you have to “know somebody” to get a good deal. Rent control is why this pattern occurs. The difficulty isn’t due to apartments being a “big-ticket” item; new cars are expensive, too, but finding one doesn’t carry the stress of finding an apartment in Brooklyn. The difference is rent control.
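The shortage mechanism can be shown with a minimal linear supply-and-demand sketch. All the numbers here are illustrative assumptions, not estimates of any real housing market:

```python
# Toy linear demand and supply curves for rental units (illustrative only).
def demand(rent):
    """Units tenants want to rent at a given monthly rent."""
    return 10_000 - 3 * rent

def supply(rent):
    """Units landlords offer at a given monthly rent."""
    return 1_000 + 3 * rent

# Market-clearing rent: 10000 - 3p = 1000 + 3p  ->  p = 1500, 5,500 units.
equilibrium_rent = 1_500
assert demand(equilibrium_rent) == supply(equilibrium_rent)

# A ceiling below the market-clearing price creates a shortage.
ceiling = 1_000
shortage = demand(ceiling) - supply(ceiling)
print(f"shortage at ${ceiling} ceiling: {shortage} units")  # 3,000 units
```

At the $1,000 ceiling, tenants want 7,000 units but landlords offer only 4,000; the 3,000-unit gap is the queue of would-be renters who can find no vacancy at the legal price.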

Rent control reduces the supply of rental units through two different mechanisms. In the short run, where the physical number of apartment units is fixed, the imposition of rent control will reduce the quantity of units offered on the market. The owners will hold back some of the potential units, using them for storage or keeping them available for (say) out-of-town guests or kids returning from college for the summer. (If this sounds implausible, consider how many people in a major city would rent out spare bedrooms in their homes, as long as the price is right.)

In the long run, a permanent policy of rent control restricts the construction of new apartment buildings, because potential investors realize that their revenues on such projects will be artificially capped. Building a movie theater or shopping center is more attractive on the margin.

There are further, more insidious problems with rent control. With a long line of potential tenants eager to move in at the official ceiling price, landlords do not have much incentive to maintain the building. They don’t need to put on new coats of paint, change the light bulbs in the hallways, keep the elevator in working order, or get out of bed at 5:00 a.m. when a tenant complains that the water heater is busted. If there is a rash of robberies in and around the building, the owner won’t feel a financial motivation to install lights, cameras, buzz-in gates, a guard, or other (costly) measures to protect his customers. Furthermore, if a tenant falls behind on the rent, there is less incentive for the landlord to cut her some slack, because he knows he can replace her right away after eviction. In other words, all of the behavior we associate with the term “slumlord” is due to the government’s policy of rent control; it is not the “free market in action.”

In summary, if the goal is to provide affordable housing to lower-income tenants, rent control is a horrible policy. Rent control makes apartments cheaper for some tenants while making them infinitely expensive for others, because some people can no longer find a unit, period, even though they would have been able to at the higher, free-market rate. Furthermore, the people who remain in apartments — enjoying the lower rent — receive a much lower-quality product. Especially when left in place for decades, rent control leads to abusive landlords and can quite literally destroy large portions of a city’s housing.

ABOUT ROBERT P. MURPHY

Robert P. Murphy has a PhD in economics from NYU. He is the author of The Politically Incorrect Guide to Capitalism and The Politically Incorrect Guide to The Great Depression and the New Deal. He is also the Senior Economist with the Institute for Energy Research and a Research Fellow at the Independent Institute. You can find him at http://consultingbyrpm.com/

EDITOR’S NOTE: The featured image is courtesy of FEE and Shutterstock.

How Far Can the P2P Revolution Go? Will the sharing economy replace the State? by Jeffrey A. Tucker

How far can the peer-to-peer revolution be pushed? It’s time we start to speculate, because history is moving fast. We need to dislodge from our minds our embedded sense of what’s possible.

Right now, we can experience a form of commercial relationship that was unknown just a decade ago. If you need a ride in a major city, you can pull up the smartphone app for Uber or Lyft and have a car arrive in minutes. It’s amazing to users because they get their first taste of what consumer service in taxis really feels like. It’s luxury at a reasonable price.

If your sink is leaking, you can click TaskRabbit. If you need a place to stay, you can count on Airbnb. In Manhattan, you can depend on WunWun to deliver just about anything to your door, from toothpaste to a new desktop computer. If you have a skill and need a job, or need to hire someone, you can go to oDesk or eLance and post a job you can do or a job you need done. If you grow food or make great local dishes, you can post at a place like credibles.co and find a prepaid customer base.

These are the technologies of the peer-to-peer or sharing economy. You can be a producer, a consumer, or both. It’s a different model — one characterized by the word “equipotency,” meaning that the power to buy and sell is widely distributed throughout the population. It’s made possible through technology.

The emergence of the app economy — an emergent order not created by government or legislation — has enabled these developments, and they are changing the world.

These technologies are not temporary. They cannot and will not be uninvented. On the contrary, they will continue to develop and expand in both sophistication and in geographic relevance. This is what happens when technology is especially useful. Whether it is the horseshoe of the Middle Ages or the distributed networks of our time, when an innovation so dramatically improves our lives, it changes the course of history. This is what is happening in our time.

The applications of these P2P networks are enormously surprising. The biggest surprise in my own lifetime is how they have been employed to make payment systems P2P — no longer based on third-party trust — through what’s called the blockchain. The blockchain can commodify and title any bundle of information and make it transferable, with timestamps, in a way that cannot be forged, all at nearly zero cost.
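The timestamping idea described above can be illustrated with a toy hash chain. This is a sketch of the principle — each record commits to the hash of the one before it, so tampering with history breaks every later link — not a depiction of Bitcoin’s actual protocol:

```python
# Toy hash chain: each block commits to the previous block's hash,
# so altering any earlier record invalidates everything after it.
import hashlib
import json
import time

GENESIS_HASH = "0" * 64

def _digest(block):
    body = {k: block[k] for k in ("payload", "time", "prev")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain, payload):
    prev = chain[-1]["hash"] if chain else GENESIS_HASH
    block = {"payload": payload, "time": time.time(), "prev": prev}
    block["hash"] = _digest(block)
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS_HASH
        if block["prev"] != prev or block["hash"] != _digest(block):
            return False
    return True

chain = []
add_block(chain, "title transfer: parcel 42 -> Alice")
add_block(chain, "title transfer: parcel 42 -> Bob")
assert verify(chain)

chain[0]["payload"] = "forged entry"  # tamper with history...
assert not verify(chain)              # ...and verification fails
```

A real blockchain adds distributed consensus and proof-of-work on top of this linking scheme; the sketch only shows why a timestamped, hash-linked record is hard to forge after the fact.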

An offshoot of blockchain-distributed technology has been the invention of a private currency. For half a century, it has been a dream of theorists who saw that taking money out of government hands would do more for prosperity and peace than any other single step.

The theorists dreamed, but they didn’t have the tools. They hadn’t been invented yet. Now that the tools exist, the result is bitcoin, which gives rise to the hope that we have the makings of a new international currency managed entirely by the private sector and the global market system.

These new P2P systems have connected the world like never before. They hold out the prospect of unleashing unprecedented human energy and the creativity that comes with it. They give billions of people a chance to integrate themselves into the worldwide division of labor from which they have thus far been excluded.

With 3-D printing and computer-aided design files distributed on digital networks, more people can become their own manufacturers. These same people can be designers and distribute the results to the world. Such a system cuts out every barrier that stands between people and their material aspirations — barriers such as product regulation, patents, and excise taxes.

It’s time that we begin to expect the unexpected. What else is possible?

Entrepreneurs are already experimenting with an Uber model of delivering some form of health care online. In some areas, they will bring a nurse to you to give you a flu shot. Other health services are on the way, causing some to speculate on the return to at-home medical visits paid out of pocket (rather than via insurance).

What does this innovation do for centralist solutions like Obamacare? It changes the entire dynamic of service provision. The medical establishment is already protesting that this consumer-based, one-off service approach runs contrary to primary and preventive care — a critique that fails to consider that there is no reason why P2P technology can’t provide such care.

How much can things change? To what extent will they affect the structure of our political lives? This is where matters get really interesting. A feature of P2P is the gradual elimination of third parties as agents who stand between individuals and their desire to cooperate one to one. We use such third parties because we believe we need them. Credit card companies serve a need. Banks serve a need. Large-scale corporations serve a need.

One theory holds that the State exists to do for us what we can’t do for ourselves. It’s the ultimate third-party provider. We elect people to serve as our representatives, and they bring our voices to the business of government so that we can get the services we want. That’s the idea, anyway.

But once government gets the power to do things, it expands its power in the interest of the ruling elite. The taxicab monopoly was no more necessary than the government postal service, but the growth of P2P technology has increasingly exposed the reality of how unnecessary the State as a third-party mediator really is. The post office is being pushed into obsolescence. It’s hard to see how the municipal taxi monopoly can survive a competitive contest with P2P technology systems.

Policing is an example of a service that people think is absolutely necessary. The old perception is that government needs to provide this service because most people cannot do it for themselves. But what if policing, too, could employ P2P technology?

What if, when there is a threat, whether to you or to others, you could open an app on your phone and call the private police immediately? You can imagine how such a technology could learn to filter out static and discern threat level based on algorithms and immediately supplied video evidence. We already see the first attempts in this direction with the Peacekeeper app.

Rather than a tax-funded system that has become a threat to the innocent as much as the guilty, we would have a system rooted in consumer service. It might be similar to the private security systems used by all businesses today, except it would apply to individuals. It would survive not through taxation but subscription — voluntary and noncoercive.

How much further can we take this? Can courts and laws themselves be ported to the online world, using the blockchain for verifying contracts, managing conflicts, and even issuing securities? The large retailer Overstock.com is experimenting with this idea — not for ideological reasons but simply because such systems work better.

And here we find the most compelling case for optimism for the cause of human liberty. These technologies are emerging from within the private sector, not from government. They work better to serve human needs than the public-sector alternative. Their use and their growth depend not on ideological conversion but on their capacity to serve universal human needs.

The ground really is shifting beneath our feet, despite all odds. It is still an age of leviathan. But based on technology and the incredible creativity of entrepreneurs, that leviathan no longer seems like a permanent feature of the world.

ABOUT JEFFREY A. TUCKER

Jeffrey Tucker is a distinguished fellow at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events.