
Well, Back to Smoking: FDA Bans 99 Percent of E-Cigarettes by Guy Bentley

The Food and Drug Administration (FDA) published long-awaited rules Thursday that could ban 99 percent of e-cigarette products and wreck industry innovation for years to come.

Passed in 2009, the Tobacco Control Act says all e-cigarette products released after February 15, 2007 (the "predicate date"), will have to go through the Pre-Market Tobacco Application (PMTA) process. FDA officials claim they cannot change the predicate date.

The PMTA process is ruinously expensive, costing millions of dollars per product, and by the FDA’s own admission will take more than 1,700 hours for an applicant to complete.

Since almost all vapor products on the market were released after February 2007, hardly any will avoid a PMTA and almost no businesses, with the exception of big tobacco companies, will be able to bear the regulatory burden.

“The agency’s economic analysis of the rule predicts that the cost of such approvals will be so high that approximately 99 percent of products on the market will not even be put through the application process,” says the American Vaping Association (AVA).

The rules usher in a new era of federal regulation, with sales of vapor products to those under the age of 18 banned nationwide. Most states had already passed laws banning e-cigarette sales to minors.

“This final rule is a foundational step that enables the FDA to regulate products young people were using at alarming rates, like e-cigarettes, cigars and hookah tobacco, that had gone largely unregulated,” Mitch Zeller, director of the FDA’s Center for Tobacco Products, said in a press release. The FDA will now set industry standards for manufacturing and labeling. The rules will take effect in 90 days.

But there is still hope for the industry after the House Appropriations Committee passed an amendment April 19 that would alter the predicate date. The amendment is not yet law and must still pass the full House of Representatives.

If the amendment fails, however, and the FDA regulations stand, the industry will have two years to comply with the PMTA process.

“Despite an overabundance of distorted and misleading information propagated by some in the public health community, the science is clear – responsibly manufactured vapor products are not only a safer alternative to traditional combustible products, but also provide smokers with a viable path to reducing their tobacco consumption and quitting altogether,” said Tony Abboud, the Vapor Technology Association’s National Legislative Director.

“Today’s action by the FDA will do nothing to improve our nation’s public health objectives. To the contrary, today’s action will yank responsibly manufactured vapor products from the hands of adult smokers and replace them with the tobacco cigarettes they had been trying to give up.”

The VTA argues the FDA’s rules will kill almost a decade of innovation in the e-cigarette space and put thousands of small and mid-size businesses out of business, to the benefit of major tobacco companies.

“If, in the name of public health, federal regulations inhibit much-needed innovation in the e-cigarette market, public health would actually suffer, as fewer adult smokers would be likely to switch from smoking,” said the National Center for Public Policy Research’s director of Risk Analysis, Jeff Stier.

“One only needs to look at the rapid innovation coming from the vaping industry to see how devastating this rule will be,” Jared Meyer, Fellow at the Manhattan Institute, told The Daily Caller News Foundation in an emailed statement.

“While large tobacco companies will likely be able to absorb these costs, countless small manufacturers will be put out of business – leading to a less dynamic market. Without continued innovation, it will be harder for cigarette smokers to kick their deadly habit by taking up a much less harmful form of nicotine consumption,” Meyer added.

According to Wells Fargo, e-cigarette sales amounted to $3.5 billion in 2015. The case for widespread e-cigarette use was given a boost April 27 after the Royal College of Physicians published a 200-page report supporting the products as a smoking cessation method.

Reprinted with permission from the Daily Caller News Foundation.

Guy Bentley

Guy Bentley is a reporter for the Daily Caller.

The 50-Year Disaster of Government Trains, Buses, and Streetcars by Daniel Bier

Today, Less Than 2 Percent of Trips Use Public Mass Transit

Ronald Reagan once quipped that “government’s view of the economy could be summed up in a few short phrases: If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it.”

There, in a nutshell, you have a short history of mass transit in America. CEI’s Marc Scribner explains,

Following decades of excessive local government fare regulation that led to a terminal decline in the private mass transit industry, government began taking over the responsibilities performed by now-bankrupt private mass transit companies following the Urban Mass Transportation Act of 1964.

Over the span of a decade, the mostly-private mass transit industry was replaced by government transit monopolies.

As a result, for the last several decades, government at all levels has spent trillions on mass transit, subsidizing fares, expanding lines, and building vast new rail systems. Today, transit consumes more than 25 percent of all surface transportation funds (which mostly come from non-transit users through gas taxes).

What was the result of this tidal wave of taxpayer cash?

Despite receiving more than one-fourth of the funding, mass transit still represents less than 2 percent of trips taken nationwide. Even when one looks only at commuting, where trains and buses do best, mass transit’s national mode share is less than 5 percent — down from more than 6 percent in 1980.

That’s right: after receiving a massive and disproportionate share of taxpayer funding, totaling trillions of dollars, transit’s share of commutes declined.

But government transit monopolies keep lobbying for more and more funding. They claim the real problem is that public transit systems haven’t been expanded enough to draw more people into using them. Scribner calls this the Field of Dreams theory: “If you build it, they will come.”

The problem with this theory is that it’s bogus. Research from Steven Polzin shows that the capacity of transit networks, including buses, streetcars, and trains, has nearly tripled since 1970, while absolute ridership has grown by just a fraction of that. Transit trips per capita have been dead flat since the 1970s.

Polzin writes, “Supply has grown far more rapidly than demand for the past several decades. This is a report card on productivity that mom and dad would hardly be proud of.”

Meaning: we built it; they didn’t come.

Scribner concludes,

The trillions spent on mass transit have given governments many more empty buses and trains, but very little in terms of additional ridership. …

Mass transit can serve a very important, albeit narrow, purpose for people in limited settings. There is a reason that 40 percent of all US mass transit trips take place in the New York City metro area.

But it is wholly irresponsible for politicians to continue mass transit’s taxpayer gravy train, which is based on less substance than Kevin Costner’s dramatized auditory hallucinations.

When the next flashy transit project comes to your town, remember to be skeptical. Proponents of light rail, streetcars, and other hugely expensive projects routinely overestimate how many people will use the line and underestimate how much it will cost to build and run. Decades of evidence shows that if you build it, people will still probably drive — and you’ll still be stuck paying for it.

Daniel Bier

Daniel Bier is the editor of FEE.org. He writes on issues relating to science, civil liberties, and economic freedom.

Do European Labor Laws Lead to Terrorism? by Alex Tabarrok

Why are there poor Muslim ghettos in Europe but not in the United States?

In Belgium, high unemployment and crime-ridden Muslim ghettos have fomented radicalism, but as Jeff Jacoby writes:

Muslims in the United States … have had no problem acclimating to mainstream norms. In a detailed 2011 survey, the Pew Research Center found that Muslim Americans are “highly assimilated into American society and … largely content with their lives.”

More than 80 percent of US Muslims expressed satisfaction with life in America, and 63 percent said they felt no conflict “between being a devout Muslim and living in a modern society.”

The rates at which they participate in various everyday American activities — from following local sports teams to watching entertainment TV — are similar to those of the American public generally. Half of all Muslim immigrants display the US flag at home, in the office, or on their car.

Jacoby, however, doesn’t explain why these differences exist. One reason is the greater flexibility of American labor markets compared to those in Europe.

Institutions that make it more difficult to hire and fire workers or adjust wages can increase unemployment and reduce employment, especially among immigrant youth. Firms will be less willing to hire if it is very costly to fire. As Tyler and I put it in Modern Principles, how many people will want to go on a date if every date requires a marriage?

The hiring hurdle is especially burdensome for immigrants given the additional real or perceived uncertainty from hiring immigrants. One of the few ways that immigrants can compete in these situations is by offering to work for lower wages. But if that route is blocked by minimum wages, or requirements that every worker receive significant non-wage benefits, unemployment and non-employment among immigrants will be high — generating disaffection, especially among the young.

Huber, for example (see also Angrist and Kugler), finds:

Countries with more centralized wage bargaining, stricter product market regulation and countries with a higher union density, have worse labour market outcomes for their immigrants relative to natives even after controlling for compositional effects.

The problem of labor market rigidity is especially acute in Belgium, where the differences between native and immigrant unemployment, employment and wages are among the highest in the OECD. Language difficulties and skills are one reason, but labor market rigidity is another, as this OECD report makes clear:

Belgian labour market settings are generally unfavourable to the employment outcomes of low-skilled workers. Reduced employment rates stem from high labour costs, which deter demand for low-productivity workers…

Furthermore, labour market segmentation and rigidity weigh on the wages and progression prospects of outsiders. With immigrants over-represented among low-wage, vulnerable workers, labour market settings likely hurt the foreign-born disproportionately. …

Minimum wages can create a barrier to employment of low-skilled immigrants, especially for youth. As a proportion of the median wage, the Belgian statutory minimum wage is on the high side in international comparison and sectoral agreements generally provide for even higher minima. This helps to prevent in-work poverty … but risks pricing low-skilled workers out of the labour market (Neumark and Wascher, 2006).

Groups with further real or perceived productivity handicaps, such as youth or immigrants, will be among the most affected.

In 2012, the overall unemployment rate in Belgium was 7.6% (15-64 age group), rising to 19.8% for those in the labour force aged under 25, and, among these, reaching 29.3% and 27.9% for immigrants and their native-born offspring, respectively.

Immigration can benefit both immigrants and natives, but achieving those benefits requires the appropriate institutions, especially open and flexible labor markets.

This post first appeared at Marginal Revolution.

Alex Tabarrok

Alex Tabarrok is a professor of economics at George Mason University. He blogs at Marginal Revolution with Tyler Cowen.

Socialism Is Harder than You Think by Scott Sumner

Suppose you wanted to switch to socialism — what would be the ideal place to do so? You’d want a country with extremely high-quality civil servants.

That would be France.

You’d want a country where socialism is not a dirty word, and capitalism is.

That would be France.

You’d want a country with the Socialist party in power, a party committed to enacting the ideas of Thomas Piketty.

That would be France.

So how did things work out in France, when they tried to adopt a Bernie Sanders/Thomas Piketty approach to taxes?

In the eyes of many foreigners, two numbers encapsulate French economic policy over the past decade or so: 75 and 35. The first refers to the top income-tax rate of 75%, promised by François Hollande to seduce the left when he was the Socialist presidential candidate in 2012. The second is the 35-hour maximum working week, devised by a Socialist government in 2000 and later retained by the centre-right.

Each has been a totem of French social preferences. Yet, to the consternation of some of his voters, Mr Hollande applied the 75% tax rate for only two years, and then binned it. Now he has drawn up plans that could, in effect, demolish the 35-hour week, too.

Mr Hollande’s government is reviewing a draft labour law that would remove a series of constraints French firms face, both when trying to adapt working time to shifting business cycles and when deciding whether to hire staff. In particular, it devolves to firms the right to negotiate longer hours and overtime rates with their own trade unions, rather than having to follow rules dictated by national industry-wide deals.

The 35-hour cap would remain in force, but it would become more of a trigger for overtime pay than a rigid constraint on hours worked. These could reach 46 hours a week, for a maximum of 16 weeks. Firms would also have greater freedom to shorten working hours and reduce pay, which can currently be done only in times of “serious economic difficulty”. Emmanuel Macron, the economy minister, has called such measures the “de facto” end of the 35-hour week.

At the same time, the law would lower existing high barriers to laying off workers. These discourage firms from creating permanent jobs, and leave huge numbers of “outsiders”, particularly young people, temping.

For one thing, it would cap awards for unfair dismissal, which are made by labour tribunals. Laid-off French workers bring such cases frequently; they can take years and cost anything from €2,500 to €310,000 ($2,700 to $337,000) by one estimate.

Unfortunately, while France is moving away from these policies, the US is likely to move some distance in their direction. Of course there are differences. Our minimum wage is still lower than in France, and our top income tax rate is closer to 50% in states like California and New York. But all the momentum is with the socialists, who are especially numerous among younger voters.

Socialist ideas are superficially appealing. Paul Krugman (who favors very high income tax rates on the rich) often says that reality has a liberal bias. Actually, reality has a neoliberal bias, and if you don’t take incentive effects into account, you may end up disappointed.

Back in the US, Sanders’s single-payer approach also has problems:

A costing of Mr Sanders’s plans by Kenneth Thorpe of Emory University, using more conservative assumptions, found that the plan was underfunded by nearly $1.1 trillion (or 6% of GDP) per year. If Mr Thorpe is right, higher taxes will be required to make the sums add up. In 2014 Mr Sanders’ own state, Vermont, abandoned a plan for a single-payer system on the basis that the required tax rises would be too great.

Vermont is one of the most liberal states in the union. Now think about the fact that they gave up on the idea, despite it having been previously approved and signed into law. Then think about the concept of rolling out a multi-trillion dollar plan at the federal level, soon after the only experiment at the state level failed to get off the ground.

Is that evidence-based liberalism, or wishful thinking?

This post first appeared at Econlog.

Scott Sumner

Scott B. Sumner is the director of the Program on Monetary Policy at the Mercatus Center and a professor at Bentley University. He blogs at the Money Illusion and Econlog.

Policy Science Kills: The Case of Eugenics by Jeffrey A. Tucker

The climate-change debate has many people wondering whether we should really turn over public policy — which deals with fundamental matters of human freedom — to a state-appointed scientific establishment. Must moral imperatives give way to the judgment of technical experts in the natural sciences? Should we trust their authority? Their power?

There is a real history here to consult. The integration of government policy and scientific establishments has reinforced bad science and yielded ghastly policies.

An entire generation of academics, politicians, and philanthropists used bad science to plot the extermination of undesirables.

There’s no better case study than the use of eugenics: the science, so called, of breeding a better race of human beings. It was popular in the Progressive Era and following, and it heavily informed US government policy. Back then, the scientific consensus was all in for public policy founded on high claims of perfect knowledge based on expert research. There was a cultural atmosphere of panic (“race suicide!”) and a clamor for the experts to put together a plan to deal with it. That plan included segregation, sterilization, and labor-market exclusion of the “unfit.”

Ironically, climatology had something to do with it. Harvard professor Robert DeCourcy Ward (1867–1931) is credited with holding the first chair of climatology in the United States. He was a consummate member of the academic establishment. He was editor of the American Meteorological Journal, president of the Association of American Geographers, and a member of both the American Academy of Arts and Sciences and the Royal Meteorological Society of London.

He also had an avocation. He was a founder of the American Restriction League. It was one of the first organizations to advocate reversing the traditional American policy of free immigration and replacing it with a “scientific” approach rooted in Darwinian evolutionary theory and the policy of eugenics. Centered in Boston, the league eventually expanded to New York, Chicago, and San Francisco. Its science inspired a dramatic change in US policy over labor law, marriage policy, city planning, and, its greatest achievements, the 1921 Emergency Quota Act and the 1924 Immigration Act. These were the first-ever legislated limits on the number of immigrants who could come to the United States.

Nothing Left to Chance

“Darwin and his followers laid the foundation of the science of eugenics,” Ward alleged in his manifesto published in the North American Review in July 1910. “They have shown us the methods and possibilities of the product of new species of plants and animals…. In fact, artificial selection has been applied to almost every living thing with which man has close relations except man himself.”

“Why,” Ward demanded, “should the breeding of man, the most important animal of all, alone be left to chance?”

By “chance,” of course, he meant choice.

“Chance” is how the scientific establishment of the Progressive Era regarded the free society. Freedom was considered to be unplanned, anarchic, chaotic, and potentially deadly for the race. To the Progressives, freedom needed to be replaced by a planned society administered by experts in their fields. It would be another 100 years before climatologists themselves became part of the policy-planning apparatus of the state, so Professor Ward busied himself in racial science and the advocacy of immigration restrictions.

Ward explained that the United States had a “remarkably favorable opportunity for practising eugenic principles.” And there was a desperate need to do so, because “already we have no hundreds of thousands, but millions of Italians and Slavs and Jews whose blood is going into the new American race.” This trend could cause Anglo-Saxon America to “disappear.” Without eugenic policy, the “new American race” will not be a “better, stronger, more intelligent race” but rather a “weak and possibly degenerate mongrel.”

Citing a report from the New York Immigration Commission, Ward was particularly worried about mixing American Anglo-Saxon blood with “long-headed Sicilians and those of the round-headed east European Hebrews.”

Keep Them Out

“We certainly ought to begin at once to segregate, far more than we now do, all our native and foreign-born population which is unfit for parenthood,” Ward wrote. “They must be prevented from breeding.”

But even more effective, Ward wrote, would be strict quotas on immigration. While “our surgeons are doing a wonderful work,” he wrote, they can’t keep up in filtering out people with physical and mental disabilities pouring into the country and diluting the racial stock of Americans, turning us into “degenerate mongrels.”

Such were the policies dictated by eugenic science, which, far from being seen as quackery from the fringe, was in the mainstream of academic opinion. President Woodrow Wilson, America’s first professorial president, embraced eugenic policy. So did Supreme Court Justice Oliver Wendell Holmes Jr., who, in upholding Virginia’s sterilization law, wrote, “Three generations of imbeciles are enough.”

Looking through the literature of the era, I am struck by the near absence of dissenting voices on the topic. Popular books advocating eugenics and white supremacy, such as The Passing of the Great Race by Madison Grant, became immediate bestsellers. The opinions in these books — which are not for the faint of heart — were expressed long before the Nazis discredited such policies. They reflect the thinking of an entire generation, and are much more frank than one would expect to read now.

It’s crucial to understand that all these opinions were not just about pushing racism as an aesthetic or personal preference. Eugenics was about politics: using the state to plan the population. It should not be surprising, then, that the entire anti-immigration movement was steeped in eugenics ideology. Indeed, the more I look into this history, the less I am able to separate the anti-immigrant movement of the Progressive Era from white supremacy in its rawest form.

Shortly after Ward’s article appeared, the climatologist called on his friends to influence legislation. Restriction League president Prescott Hall and Charles Davenport of the Eugenics Record Office began the effort to pass a new law with specific eugenic intent. It sought to limit the immigration of southern Italians and Jews in particular. And immigration from Eastern Europe, Italy, and Asia did indeed plummet.

The Politics of Eugenics

Immigration wasn’t the only policy affected by eugenic ideology. Edwin Black’s War Against the Weak: Eugenics and America’s Campaign to Create a Master Race (2003, 2012) documents how eugenics was central to Progressive Era politics. An entire generation of academics, politicians, and philanthropists used bad science to plot the extermination of undesirables. Laws requiring sterilization claimed 60,000 victims. Given the attitudes of the time, it’s surprising that the carnage in the United States was so low. Europe, however, was not as fortunate.

Freedom was considered to be unplanned, anarchic, chaotic, and potentially deadly for the race. 

Eugenics became part of the standard curriculum in biology, with William Castle’s 1916 Genetics and Eugenics commonly used for over 15 years, through four editions.

Literature and the arts were not immune. John Carey’s The Intellectuals and the Masses: Pride and Prejudice Among the Literary Intelligentsia, 1880–1939 (2005) shows how the eugenics mania affected the entire modernist literary movement of the United Kingdom, with such famed minds as T.S. Eliot and D.H. Lawrence getting wrapped up in it.

Economics Gets In on the Act

Remarkably, even economists fell under the sway of eugenic pseudoscience. Thomas Leonard’s explosively brilliant Illiberal Reformers: Race, Eugenics, and American Economics in the Progressive Era (2016) documents in excruciating detail how eugenic ideology corrupted the entire economics profession in the first two decades of the 20th century. Across the board, in the books and articles of the profession, you find all the usual concerns about race suicide, the poisoning of the national bloodstream by inferiors, and the desperate need for state planning to breed people the way ranchers breed animals. Here we find the template for the first-ever large-scale implementation of scientific social and economic policy.

Students of the history of economic thought will recognize the names of these advocates: Richard T. Ely, John R. Commons, Irving Fisher, Henry Rogers Seager, Arthur N. Holcombe, Simon Patten, John Bates Clark, Edwin R.A. Seligman, and Frank Taussig. They were the leading members of the professional associations, the editors of journals, and the high-prestige faculty members of the top universities. It was a given among these men that classical political economy had to be rejected. There was a strong element of self-interest at work. As Leonard puts it, “laissez-faire was inimical to economic expertise and thus an impediment to the vocational imperatives of American economics.”

Irving Fisher, whom Joseph Schumpeter described as “the greatest economist the United States has ever produced” (an assessment later repeated by Milton Friedman), urged Americans to “make of eugenics a religion.”

Speaking at the Race Betterment Conference in 1915, Fisher said eugenics was “the foremost plan of human redemption.” The American Economic Association (which is still today the most prestigious trade association of economists) published openly racist tracts such as the chilling Race Traits and Tendencies of the American Negro by Frederick Hoffman. It was a blueprint for the segregation, exclusion, dehumanization, and eventual extermination of the black race.

Hoffman’s book called American blacks “lazy, thriftless, and unreliable,” and well on their way to a condition of “total depravity and utter worthlessness.” Hoffman contrasted them with the “Aryan race,” which is “possessed of all the essential characteristics that make for success in the struggle for the higher life.”

Even as Jim Crow restrictions were tightening against blacks, and the full weight of state power was being deployed to wreck their economic prospects, the American Economic Association’s tract said that the white race “will not hesitate to make war upon those races who prove themselves useless factors in the progress of mankind.”

Richard T. Ely, a founder of the American Economic Association, advocated segregation of nonwhites (he seemed to have a special loathing of the Chinese) and state measures to prohibit their propagation. He took issue with the very “existence of these feeble persons.” He also supported state-mandated sterilization, segregation, and labor-market exclusion.

That such views were not considered shocking tells us so much about the intellectual climate of the time.

If your main concern is who is bearing whose children, and how many, it makes sense to focus on labor and income. Only the fit should be admitted to the workplace, the eugenicists argued. The unfit should be excluded so as to discourage their immigration and, once here, their propagation. This was the origin of the minimum wage, a policy designed to erect a high wall against the “unemployables.”

Women, Too

Another implication follows from eugenic policy: government must control women.

It must control their comings and goings. It must control their work hours — or whether they work at all. As Leonard documents, here we find the origin of the maximum-hour workweek and many other interventions against the free market. Women had been pouring into the workforce during the last quarter of the 19th century, gaining the economic power to make their own choices. Minimum wages, maximum hours, safety regulations, and so on passed in state after state during the first two decades of the 20th century and were carefully targeted to exclude women from the workforce. The purpose was to control contact, manage breeding, and reserve the use of women’s bodies for the production of the master race.

Leonard explains:

American labor reformers found eugenic dangers nearly everywhere women worked, from urban piers to home kitchens, from the tenement block to the respectable lodging house, and from factory floors to leafy college campuses. The privileged alumna, the middle-class boarder, and the factory girl were all accused of threatening Americans’ racial health.

Paternalists pointed to women’s health. Social purity moralists worried about women’s sexual virtue. Family-wage proponents wanted to protect men from the economic competition of women. Maternalists warned that employment was incompatible with motherhood. Eugenicists feared for the health of the race.

“Motley and contradictory as they were,” Leonard adds, “all these progressive justifications for regulating the employment of women shared two things in common. They were directed at women only. And they were designed to remove at least some women from employment.”

The Lesson We Haven’t Learned

Today we find eugenic aspirations to be appalling. We rightly value the freedom of association. We understand that permitting people free choice over reproductive decisions does not threaten racial suicide but rather points to the strength of a social and economic system. We don’t want scientists using the state to cobble together a master race at the expense of freedom. For the most part, we trust the “invisible hand” to govern demographic trajectories, and we recoil at those who don’t.

But back then, eugenic ideology was conventional scientific wisdom, and hardly ever questioned except by a handful of old-fashioned advocates of laissez-faire. The eugenicists’ books sold in the millions, and their concerns became primary in the public mind. Dissenting scientists — and there were some — were excluded by the profession and dismissed as cranks attached to a bygone era.

Eugenic views had a monstrous influence over government policy, and they ended free association in labor, marriage, and migration. Indeed, the more you look at this history, the more it becomes clear that white supremacy, misogyny, and eugenic pseudoscience were the intellectual foundations of modern statecraft.

Today we find eugenic aspirations to be appalling, but back then, eugenic ideology was conventional scientific wisdom.

Why is there so little public knowledge of this period and the motivations behind its progress? Why has it taken so long for scholars to blow the lid off this history of racism, misogyny, and the state?

The partisans of the state regulation of society have no reason to talk about it, and today’s successors of the Progressive Movement and its eugenic views want to distance themselves from the past as much as possible. The result has been a conspiracy of silence.

There are, however, lessons to be learned. When you hear of some impending crisis that can only be solved by scientists working with public officials to force people into a new pattern that is contrary to their free will, there is reason to raise an eyebrow. Science is a process of discovery, not an end state, and its consensus of the moment should not be enshrined in the law and imposed at gunpoint.

We’ve been there and done that, and the world is rightly repulsed by the results.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.

Americans’ Incomes Are Unequal, But Mobile by Chelsea German

Americans often move between different income brackets over the course of their lives. As covered in an earlier blog post, over 50 percent of Americans find themselves among the top 10 percent of income-earners for at least one year during their working lives, and over 11 percent of Americans will be counted among the top 1 percent of income-earners for at least one year.

Fortunately, a great deal of what explains this income mobility are choices that are largely within an individual’s control. While people tend to earn more in their “prime earning years” than in their youth or old age, other key factors that explain income differences are education level, marital status, and number of earners per household. As Mark Perry recently wrote:

The good news is that the key demographic factors that explain differences in household income are not fixed over our lifetimes and are largely under our control (e.g. staying in school and graduating, getting and staying married, etc.), which means that individuals and households are not destined to remain in a single income quintile forever.

According to the economist Thomas Sowell, whom Perry cites, “Most working Americans, who were initially in the bottom 20% of income-earners, rise out of that bottom 20%. More of them end up in the top 20% than remain in the bottom 20%.”

While people move between income groups over their lifetime, many worry that income inequality between different income groups is increasing. The growing income inequality is real, but its causes are more complex than the demagogues make them out to be.

Consider, for example, the effect of “power couples,” or people with high levels of education marrying one another and forming dual-earner households. In a free society, people can marry whoever they want, even if it does contribute to widening income disparities.

Or consider the effects of regressive government regulations on exacerbating income inequality. These include barriers to entry that protect incumbent businesses and stifle competition. To name one extreme example, Louisiana requires a government-issued license to become a florist.

Lifting more of these regressive regulations would aid income mobility and help to reduce income inequality, while also furthering economic growth.

This post first appeared at HumanProgress.org.

Chelsea German

Chelsea German works at the Cato Institute as a Researcher and Managing Editor of HumanProgress.org.

Low-Skilled Workers Flee the Minimum Wage: How State Lawmakers Exile the Needy by Corey Iacono

What happens when, in a country where workers are free to move, a region raises its minimum wage? Do those with the fewest skills seek out the regions with the highest wage floors?

New minimum wage research by economist Joan Monras of the Paris Institute of Political Studies (Sciences Po) attempts to answer that question. Monras theoretically shows that there should be a close relationship between the employment effects of raising the minimum wage and the migration of low-skilled workers.

When the demand for local low-skilled labor is relatively unresponsive (or inelastic) to wage changes, raising the minimum wage should lead to an influx of low-skilled workers from other states in search of better-paying jobs. On the other hand, if the demand for low-skilled labor is relatively responsive (or elastic), raising the minimum wage will lead low-skilled workers to flee to states where they will more easily find employment.

To test the model empirically, Monras examined data from all the changes in effective state minimum wages over the period 1985 to 2012. Looking at time frames of three years before and after each minimum wage increase, Monras found that

  1. As depicted in the graph below on the left, those who kept their jobs earned more under the minimum wage. No surprise there.
  2. As depicted in the graph below on the right, workers with the fewest skills were having an easier time finding full-time employment prior to the minimum wage increase. But this trend completely reversed as soon as the minimum wage was increased.
  3. A control group of high-skilled workers didn’t experience either of these effects. Those affected by the changing laws were the least skilled and the most vulnerable.

These results show that the timing of minimum wage increases is not random.

Instead, policy makers tend to raise minimum wages when low-skilled workers’ real wages are declining and employment is rising. Many studies, misled by the assumption that the timing of minimum wage increases is not influenced by local labor demand, have interpreted the lack of falling low-skilled employment following a minimum wage increase as evidence that minimum wage increases have no effect on employment.

When Monras applied this same false assumption to his model, he got the same result. However, to observe the true effect of minimum wage increases on employment, he assumed a counterfactual scenario where, had the minimum wages not been raised, the trend in low-skilled employment growth would have continued as it was.

By making this comparison, Monras was able to estimate that wages increased considerably following a minimum wage hike, but employment also fell considerably. In fact, employment fell more than wages rose. For every 1 percent increase in wages, the share of a state’s population of low-skilled workers in full-time employment fell by 1.2 percent. (The same empirical approach showed that minimum wage increases had no effect on the wages or employment of a control group of high-skilled workers.)
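The headline numbers above imply a simple elasticity. As a minimal sketch (the -1.2 figure comes from the text; the 5% wage increase is a hypothetical input chosen for illustration):

```python
# Back-of-the-envelope use of the relationship reported in the text: a 1% rise
# in low-skilled wages is associated with a 1.2% fall in the share of a state's
# low-skilled population in full-time employment.

def employment_change(wage_increase_pct, elasticity=-1.2):
    """Implied % change in the low-skilled full-time employment share."""
    return elasticity * wage_increase_pct

# Hypothetical example: a minimum wage hike that raises affected wages by 5%
print(employment_change(5.0))  # -6.0
```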

Monras’s model predicts that if labor demand is sensitive to wage changes, low-skilled workers should leave states that increase their minimum wages — and that’s exactly what his empirical evidence shows.

According to Monras,

A 1 percent reduction in the share of employed low-skilled workers [following a minimum wage increase] reduces the share of low-skilled population by between .5 and .8 percent. It is worth emphasizing that this is a surprising and remarkable result: workers for whom the [minimum wage] policy was designed leave the states where the policy is implemented.

These new and important findings reinforce the view that minimum wage increases come at a cost to the employment rates of low-skilled workers.

They also pose a difficult question for minimum wage proponents: If minimum wage increases benefit low-skilled workers, why do these workers leave the states that raise their minimum wage?

Corey Iacono

Corey Iacono is a student at the University of Rhode Island majoring in pharmaceutical science and minoring in economics.

Tech Sector Bears Brunt of Capital Taxes, Random Regulation by Dan Gelernter

According to our president’s final State of the Union, we’ve recovered from the economic crisis and now enjoy the strongest, most durable economy in the world. Obama does acknowledge that startups and small businesses may need some help, so he wants to reignite our “spirit of innovation” — which he plans to do by putting Vice President Biden in charge of curing cancer.

But the problem facing startups is not a lack of innovation. We are being killed by the economy, which, for those of us who have to live in it, is not good at all. Young entrepreneurs may have spent last year working hard, innovating and building, only to find their companies are worth less now than when they started.

The market is adjusting downwards. Valuations are sinking. The investors I’ve spoken to feel the Fed’s free-money policy has created a dangerous over-valuation of companies and stocks and, now that the rates are coming back up, the air is being let out. 2015, they say, was a tough year because we knew this was coming. 2016 is going to be even tougher.

There is something else weighing on the minds of entrepreneurs and investors alike — regulatory uncertainty. No startup can deal with compliance by itself — not even software companies with no physical products to sell. Startups have to hire lawyers and compliance experts to help them, and this is money we’re not spending on product development or marketing or making our prices more competitive.

The way Obamacare is being implemented, for example, makes our hair white. The rules seem to change with bureaucratic whim; various parts of the law are suspended by executive order. How will we comply next year, and what will it cost? Nobody knows.

In the meantime, the Democratic candidates for President are proposing large hikes to the capital gains tax, which increases effective risk for investors and depresses valuations. Will these hikes ever take place? We don’t know, and that uncertainty carries an additional price.

We’re already seeing more investors decide to weather the storm on the sidelines, keeping an eye on their existing holdings and declining to invest in companies they would have snapped up a year ago. A tech startup with a working product will find it harder to raise money today than it would have two years ago with nothing but a concept. Not only are we faced with a weak market now; the trend is even more disturbing.

The problem is easier to diagnose than to repair. As an entrepreneur, I’d like to see less regulation and lower taxes. And not just lower taxes on the companies themselves, but on the people who can afford to invest in them. This may come as a surprise, but it’s the hated “one percent” that invests in startups and helps entrepreneurs’ dreams come true. When taxes cut deeper into the pockets of the wealthy, it most negatively affects us — the entrepreneurs and the people we would have hired — not the wealthy.

Regulation remains erratic, and the policies of the next administration cannot be foreseen. 2016 is going to be a hard year for the startup. Investments will continue to decline until investors see a stable market. And they’re not looking at one right now. Companies will die as a result, and not for lack of innovative ideas.

Dan Gelernter

Dan Gelernter is CEO of the technology startup Dittach.

The House That Uncle Sam Built by Peter J. Boettke & Steven Horwitz

The Great Recession (or the Great Hangover) that began in 2008 did not have to happen. Its causes and consequences are not mysterious. Indeed, this particular and very painful episode affirms what the best nonpartisan economists have tried to tell our politicians and policy-makers for decades, namely, that the more they try to inflate and direct the economy, the more damage the rest of us will suffer sooner or later. Hindsight is always 20-20, but in this instance, good old-fashioned common sense would have provided all the foresight needed to avoid the mess we’re in.

In this essay, originally published December 2009, we trace the path of the recession from its origins in the housing market bubble to the policies offered to cure the aftermath.



Introduction

The theme of “The House that Uncle Sam Built: The Untold Story of the Great Recession of 2008” is that government policy, not a failure of free markets, caused the economic trauma we have been experiencing. We do not live in a free market. We live in a mixed economy. The mixture varies by industry: technology is primarily free, while financial services are primarily government-controlled. It is not surprising that the most government-regulated and controlled segment of the economy, financial services, experienced the biggest problems. These problems were created by actions of the Federal Reserve combined with government housing policy (especially the government-sponsored enterprises – Freddie Mac and Fannie Mae). Misguided government interference in the market is the real culprit in laying the foundation for the Great Recession.

This paper provides a “common sense” and understandable outline of fundamental causes and cures. The analysis is based on long-proven economic laws. Despite the wishes and hopes of politicians, economic laws are just as immutable as the laws of physics. If you jump off a ten-story building, hitting the ground will not be pleasant. If the Federal Reserve holds interest rates below the natural market rate by rapidly expanding the money supply (“printing” money), as Alan Greenspan did, individuals and businesses will make bad investment decisions and there will be negative consequences for our long-term economic well-being. There are no free lunches.

When a doctor misdiagnoses a disease, his treatment will likely make the patient sicker. If we misdiagnose the causes of the Great Recession, our treatment will reduce our long-term standard of living. While the U.S. economic system is highly resilient, and we will likely have some form of economic recovery, almost every significant government policy action taken in response to the Great Recession will reduce the quality of life in the long term. Understanding that failed government policies, not market failure, caused our economic challenges is critical to defining the appropriate cures. Since government created the problem, i.e., caused the disaster, it is irrational to believe that more government is the cure. We owe it to ourselves and to our children and grandchildren to take these issues very seriously.

John Allison, Chairman, BB&T

The House That Uncle Sam Built

The man who parties like there is no tomorrow puts his body through an “up” and a “down” course that looks a lot like the business cycle. At the party, the man freely imbibes. He has a great time before stumbling home at 2:00 a.m., where he crashes on the sofa. A few hours later, he awakens in the grip of the dreaded hangover. He then has a choice to make: get a short-term lift from another drink or sober up. If he chooses the latter and endures a few hours of discomfort, he can recover. In any event, no one would say the hangover is when the harm is done; the harm was done the night before and the hangover is the evidence.


There is no better way to understand a crisis that began in the housing sector than to begin by thinking about a house.

A house must be built on a firm, sustainable foundation. If it’s slapped together with good intentions but lousy materials and workmanship, it will collapse prematurely. If too much lumber and too many bricks are piled on top of a weak support structure, or if housing material is misallocated throughout the house, then an apparently solid structure can crumble like sand once its weaknesses are exposed. Americans built and bought a lot of houses in the past decade not, it turns out, for sound reasons or with solid financing. Why this occurred must be part of any good explanation of the Great Recession.

But isn’t home ownership a great thing, the very essence of the vaunted “American Dream”? In the wealthiest country in the world, shouldn’t everyone be able to own their own home? What could be wrong with any policy that aims to make housing more affordable? Well, we may wish it were not so, but good intentions cannot insulate us from the consequences of bad policies.

Politicians became so enthralled with home ownership and affordable housing – and the points they could score by claiming to be their champions – that they pushed and shoved the economy down an artificial path that invited an inevitable (and painful) correction. Congress created massive, government-sponsored enterprises and then encouraged them to degrade lending standards. Congress bent tax law to favor real estate over other investments. Through its reckless easy money policies, another creation of Congress, the Federal Reserve, flooded the economy with liquidity and drove interest rates down. Each of these policies encouraged too many of the economy’s resources to be drawn into the housing sector. For a substantial part of this decade, our policy-makers in Washington were laying a very poor foundation for economic growth.

Was Free Enterprise the Villain?

Call it free enterprise, capitalism or laissez faire – blaming supposedly unfettered markets for every economic shock has been the monotonous refrain of conventional wisdom for a hundred years. Among those making such claims are politicians who posture as our rescuers, bureaucrats who are needed to implement the rescue plans and special interests who get rescued. Then there are our fellow academics – the ones who add a veneer of respectability – trumpeting the “stimulus” the rest of us get from being rescued.

Rarely does it occur to these folks that government intervention might be the cause of the problem. Yet, we have the Federal Reserve System’s track record, thousands of pages of financial regulations, and thousands more pages of government housing policy that demonstrate the utter absence of “laissez faire” in areas of the economy central to the current recession.

Understanding recessions requires knowing why lots of people make the same kinds of mistakes at the same time. In the last few years, those mistakes were centered in the housing market, as many people overestimated the value of their houses or imagined that their value would continue to rise. Why did everyone believe that at the same time? Did some mysterious hysteria descend upon us out of nowhere? Did people suddenly become irrational? The truth is this: People were reacting to signals produced in the economy. Those signals were erroneous. But it was the signals and not the people themselves that were irrational.

Imagine we see an enormous rise in the number of traffic accidents in a major city. Cars keep colliding at intersections as drivers all seem to make the same sorts of mistakes at once. Is the most likely explanation that drivers have irrationally stopped paying attention to the road, or would we suspect that something might be wrong with the traffic lights? Even with completely rational drivers, malfunctioning traffic signals will lead to lots of accidents and appear to be massive irrationality.

Market prices are much like traffic signals. Interest rates are a key traffic signal. They reconcile some people’s desire to save – delay consumption until a future date – with others’ desire to invest in ideas, materials or equipment that will make them and their businesses more productive. In a market economy, interest rates change as tastes and conditions change. For instance, if people become more interested in future consumption relative to current consumption, they will increase the amount they save. This, in turn, will lower interest rates, allowing other people to borrow more money to invest in their businesses. Greater investment means more sophisticated production processes, which means more goods will be available in the future. In a normally functioning market economy, the process ensures that savings equal investment, and both are consistent with other conditions and with the public’s underlying preferences.

As was made all too obvious in 2008, ours is not a normally functioning market economy. Government has inserted itself into almost every transaction, manipulating and distorting price signals along the way. Few interventions are as momentous as those associated with monetary policy implemented by the Federal Reserve. Money’s essence is that it is a generally accepted medium of exchange, which means that it is half of every act of buying and selling in the economy. Like blood circulating in the body, it touches everything. When the Fed tinkers with the money supply, it affects not just one or two specific markets, like housing policy does, but every single market in the entire economy. The Fed’s powers give it an enormous scope for creating economic chaos.

When central banks like the Federal Reserve inflate, they provide banks with more money to lend, even though the public has not provided any more savings. Banks respond by lowering interest rates to draw in new borrowers. The borrowers see the lower interest rate and believe that it signals that consumers are more interested in delayed consumption relative to immediate consumption. Borrowers then begin to invest in those longer-term projects, which are now relatively more desirable given the lower interest rate. The problem, however, is that the demand for those longer-term projects is not really there. The public is not more interested in future consumption, even though the interest rate signals suggest otherwise. Like our malfunctioning traffic signals, an inflation-distorted interest rate is going to cause lots of “accidents.” Those accidents are the mistaken investments in longer-term production processes.

“I want to roll the dice a little bit more in this situation toward subsidized housing.” – Barney Frank, 2003

Eventually those producers engaged in the longer processes find the cost of acquiring their raw materials to be too high, particularly as it becomes clear that the public’s willingness to defer consumption until the future is not what the interest rate suggested would be forthcoming. These longer-term processes are then abandoned, resulting in falling asset prices (both capital goods and financial assets, such as the stock prices of the relevant companies) and unemployed labor in sectors associated with the capital goods industries.

So begins the bust phase of a monetary policy-induced cycle; as stock prices fall, asset prices “deflate,” overall economic activity slows and unemployment rises. The bust is the economy going through a refitting and reshuffling of capital and labor as it eliminates mistakes made during the boom. The important points here are that the artificial boom is when the mistakes were made, and it is during the bust that those mistakes are corrected.

From 2001 to about 2006, the Federal Reserve pursued the most expansionary monetary policy since at least the 1970s, pushing interest rates far below their natural rate. In January of 2001 the federal funds rate, the major interest rate that the Fed targets, stood at 6.5%. Just 23 months later, after 12 successive cuts, the rate stood at a mere 1.25% – more than 80% below its previous level. It stayed below 2% for two years until the Fed finally began raising rates in June of 2004. The rate was so low during this period that the real federal funds rate – the nominal rate minus the rate of inflation – was negative for two and a half years. This meant that, in effect, banks were being paid to borrow money! Rapidly climbing after mid-2004, the rate was back up to the 5% mark by May of 2006, just about the time that housing prices started their collapse. In order to maintain that low federal funds rate for that five-year period, the Fed had to increase the money supply significantly; one common measure of the money supply grew by 32.5%.

A lot of economically irrational investments were made during this time, but it was not because of “irrational exuberance brought on by a laissez-faire economy,” as some suggested. It is unlikely that lots of very similar bad investments are the result of mass irrationality, just as large traffic accidents are more likely the result of malfunctioning traffic signals than of lots of people forgetting how to drive overnight. They resulted from malfunctioning market price signals due to the Fed’s manipulation of money and credit. Poor monetary policy by an agency of government is hardly “laissez faire.”
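The rate arithmetic in this paragraph can be checked directly. A small sketch (the 2.5% inflation figure is an assumption for illustration only, not a number from the text):

```python
# Federal funds target: 6.5% in January 2001, down to 1.25% after 12 cuts.
start, end = 6.5, 1.25
decline_pct = (start - end) / start * 100
print(round(decline_pct, 1))  # 80.8 -> "more than 80% below its previous level"

# Real rate = nominal rate minus inflation; it is negative whenever inflation
# exceeds the nominal rate. The 2.5% inflation figure here is hypothetical.
nominal, inflation = 1.25, 2.5
real_rate = nominal - inflation
print(real_rate)  # -1.25 -> banks were, in effect, paid to borrow
```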

What About Housing?

With such an expansionary monetary policy, the housing market was sent contradictory and incorrect signals. On one hand, housing and housing-related industries were given a giant green light to expand. It is as if the Fed supplied them with an abundance of lumber, and encouraged them to build their economic house as big as they pleased.

This would have made sense if the increased supply of lumber (capital) had been supported by the public’s desire to increase future consumption relative to immediate consumption – in other words, if the public had truly wanted to save for the bigger house. But the public did not. Interest rates were not low because the public was in the mood to save; they were low because the Fed had made them so by fiat. Worse, Fed policy gave the would-be suppliers of capital – those who might have been tempted to save – a giant red light. With rates so low, they had no incentive to put their money in the bank for others to borrow.

So the economic house was slapped together with what appeared to be an unlimited supply of lumber. It was built higher and higher, drawing resources from the rest of the economy. But it had no foundation. Because the capital did not reflect underlying consumer preferences, there was no support for such a large house. The weaknesses in the foundation were eventually exposed and the 70-story skyscraper, built on a foundation made for a single-family home, began to teeter. It eventually fell in the autumn of 2008.

But why did the Fed’s credit all flow into housing? It is true that easy credit financed a consumer-borrowing binge, a mergers-and-acquisitions binge and an auto binge. But the bulk of the credit went to housing. Why? The answer lies in government’s efforts to increase the affordability of housing.

Government intervention in the housing market dates back to at least the Great Depression. The more recent government initiatives relevant to the current recession began in the Clinton administration. Since then, the federal government has adopted a variety of policies intended to make housing more affordable for lower and middle income groups and various minorities. Among the government actions, those dealing with government-sponsored enterprises active in mortgage markets were central. Fannie Mae (the Federal National Mortgage Association) and Freddie Mac (Federal Home Loan Mortgage Corporation) are the key players here. Neither Fannie nor Freddie is a “free-market” firm. They were chartered by the federal government, and although nominally privately owned until the onset of the bust in 2008, they were granted a number of government privileges in addition to carrying an implicit promise of government support should they ever get into trouble.

Fannie and Freddie did not actually originate most of the bad loans that made up the housing crisis. Loans were made by banks and mortgage companies that knew they could sell those loans in the secondary mortgage market where Fannie and Freddie would buy and repackage them to sell to other investors. Fannie and Freddie also invented a number of the low down-payment and other creative, high-risk types of loans that came into use during the housing boom. The loan originators were willing to offer these kinds of loans because they knew that Fannie and Freddie stood ready to buy them up. With the implicit promise of government support behind them, the risk was being passed on from the originators to the taxpayers. If homeowners defaulted, the buyers of the mortgages would be harmed, not the originators. The presence of Fannie and Freddie in the mortgage market dramatically distorted the incentives for private actors such as the banks.

The Fed’s low interest rates, combined with Fannie and Freddie’s government-sponsored purchases of mortgages, made it highly and artificially profitable to lend to anyone and everyone. The banks and mortgage companies didn’t need to be any greedier than they already were. When banks saw that Fannie and Freddie were willing to buy virtually any loan made to under-qualified borrowers, they made a lot more of them. Greed is no more to blame for these bad mortgages than gravity is to blame for plane crashes. Gravity is always present, just like greed. Only the Federal Reserve’s easy money policy and Congress’ housing policy can explain why the bubble happened when it did, where it did.

Of further significance is the fact that Fannie and Freddie were under great political pressure to keep housing increasingly affordable (while at the same time promoting instruments that depended on the constantly rising price of housing) and to extend opportunities to historically “under-served” groups. Many of the new mortgages with low or even zero-down payments were designed in response to this pressure. Not only were lots of funds available to lend, and not only was government implicitly subsidizing the purchase of mortgages, but it was also encouraging lenders to find more borrowers who previously were thought unable to afford a mortgage.

Partnerships among Fannie and Freddie, mortgage companies, community action groups and legislators combined to make mortgages available to many people who should never have had them, based on their income and assets. Throw in the effects of the Community Reinvestment Act, which required lenders to serve under-served groups, and zoning and land-use laws that pushed housing into limited space in the suburbs and exurbs (driving up prices in the process) and you have the ingredients of a credit-fueled and regulatory-directed housing boom and bust.

All told, huge amounts of wealth and capital poured into producing houses as a result of these political machinations. The Case-Shiller Index clearly shows unprecedented increases in home prices prior to the bust in 2008. From 1946 to 1996, there had been no significant growth in the price of residential real estate. In contrast, the decade that followed saw skyrocketing prices.

It’s worth noting that even tax policy has been biased toward fostering investments in housing. Real estate investments are taxed at a much lower rate than other investments. Changes in the 1990s made it possible for families to pocket any capital gains (income from price appreciation) on their primary residences up to $500,000 every two years. That translates into an effective rate of 0% versus the ordinary income tax rates that apply to capital gains on other forms of investment. The differential tax treatment of capital gains made housing a relatively better investment than the alternatives. Although tax cuts are desirable for promoting economic growth, when politicians tinker with the tax code to favor the sorts of investments they think people should make, we should not be surprised if market distortions result.
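The asymmetry can be made concrete with a toy calculation. The $500,000 exclusion is from the text; the 25% rate applied to other gains is a hypothetical illustrative rate, not a claim about actual tax law:

```python
# Toy comparison of the tax treatment described above. The $500,000 home-sale
# exclusion comes from the text; the 25% rate on other capital gains is an
# assumed rate chosen purely for illustration.

EXCLUSION = 500_000
ASSUMED_GAINS_RATE = 0.25

def tax_due(gain, is_primary_residence):
    """Tax on a capital gain under this simplified two-asset model."""
    taxable = max(0.0, gain - EXCLUSION) if is_primary_residence else gain
    return taxable * ASSUMED_GAINS_RATE

print(tax_due(300_000, True))   # 0.0 -> the home-sale gain is untaxed
print(tax_due(300_000, False))  # 75000.0 -> the same gain elsewhere is taxed
```

With identical pre-tax returns, the after-tax return on housing dominates, which is exactly the distortion the paragraph describes.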

Former Fed chair Alan Greenspan had made it clear that the Fed would not stand idly by whenever a crisis threatened to cause a major devaluation of financial assets. Instead, it would respond by providing liquidity to stem the fall. Greenspan declared there was little the Fed could do to prevent asset bubbles but that it could always cushion the fall when those bubbles burst. By 1998, the idea that the Fed would always bail out investors after a burst bubble had become known as the “Greenspan Put.” (A “put” is a financial arrangement where a buyer acquires the right to re-sell the asset at a pre-set price.) Having seen the Fed bail out investors this way in a series of events starting as early as the 1987 stock market crash and extending through 9/11, players in the housing market had every reason to expect that if the value of houses and other instruments they were creating should fall, the Fed would bail them out, too. The Greenspan Put became yet another government “green light,” signaling investors to take risks they might not otherwise take.

As housing prices began to rise, and in some areas rise enormously, investors saw opportunities to create new financial instruments based on those rising housing prices. These instruments constituted the next stage of the boom in this boom-bust cycle, and their eventual failure became the major focus of the bust.

Fancy Financial Instruments – Cause or Symptom?

Banks and other players in the financial markets capitalized on the housing boom to create a variety of new instruments. These new instruments would enrich many but eventually lose their value, bringing down several major companies with them. They were all premised on the belief that housing prices would continue to rise, which would enable the people who had taken out the new mortgages to keep making their payments.

Mortgages with low or even nonexistent down payments appeared. The ownership stake the borrower had in the house was largely the equity that came from the house increasing in value. With little to no equity at the start, the amount borrowed and therefore the monthly payments were fairly high, meaning that should the house fall in value, the owner could end up owing more on the house than it was worth.

“If it ain’t broke, why do you want to fix it? Have the GSEs ever missed their housing goals?” – Maxine Waters, 2003

The large flow of mortgage payments resulting from the inflation-generated housing bubble was then converted into a variety of new investment vehicles. In the simplest terms, financial institutions such as Fannie and Freddie began to buy up these mortgages from the originating banks or mortgage companies, package them together and sell the flow of payments from that package as a bond-like instrument to other investors. At the time of their nationalization in the fall of 2008, Fannie and Freddie owned or controlled half of the entire mortgage market. Investors could buy so-called “mortgage-backed securities” and earn income ultimately derived from the mortgage payments of the homeowners. The sellers of the securities, of course, took a cut for being the intermediary. They also divided up the securities into “tranches” or levels of risk. The lowest risk tranches paid off first, as they were representative of the less risky of the mortgages backing the security. The high risk ones paid off with the leftover funds, as they reflected the riskier mortgages.

Buyers snapped up these instruments for a variety of reasons. First, as housing prices continued to rise, these securities looked like a steady source of ever-increasing income. The risk was perceived to be low, given the boom in the housing market. Of course that boom was an illusion that eventually revealed itself.

Second, most of these mortgage-backed securities had been rated AAA, the highest rating, by the three ratings agencies: Moody’s, Standard & Poor’s, and Fitch. This led investors to believe these securities were very safe. It also led many to charge that markets were irrational. How could these securities, which were soon to be revealed as terribly problematic, have been rated so highly? The answer is that those three ratings agencies are a government-created cartel not subject to meaningful competition.

In 1975, the Securities and Exchange Commission decided only the ratings of three “Nationally Recognized Statistical Rating Organizations” would satisfy the ratings requirements of a number of government regulations. Their activities since then have been geared toward satisfying the demands of regulators rather than true competition. If they made an error in their ratings, there was no possibility of a new entrant coming in with a more accurate technique. The result was that many instruments were rated AAA that never should have been, not because markets somehow failed due to greed or irrationality, but because government had cut short the learning process of true market competition.

Third, changes in the international regulations covering the capital ratios of commercial banks made mortgage-backed securities look artificially attractive as investment vehicles for many banks. Specifically, the Basel accord of 1988 stipulated that if banks held securities issued by government-sponsored entities, they could hold less capital than if they held other securities, including the very mortgages they might originate. Banks could originate a mortgage and then sell it to Fannie Mae. Fannie would then package it with other mortgages into a mortgage-backed security. If the very same bank bought that security (which relied on income from the mortgage it originated), it would be required to hold only 40 percent of the capital it would have had to hold if it had just kept the original mortgage.
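The capital arithmetic in the preceding paragraph can be sketched as follows. The figures are assumptions consistent with the Basel treatment described above: an 8 percent minimum capital ratio, a 50 percent risk weight on a whole residential mortgage, and a 20 percent risk weight on a GSE-issued security:

```python
# Minimal sketch of the Basel capital arithmetic described above.
# Assumed figures: 8% minimum capital ratio, 50% risk weight for a whole
# residential mortgage, 20% risk weight for a GSE-issued security.

CAPITAL_RATIO = 0.08

def required_capital(exposure: float, risk_weight: float) -> float:
    """Capital the bank must hold against an exposure of this size."""
    return exposure * risk_weight * CAPITAL_RATIO

mortgage = 1_000_000.0  # a $1M mortgage

capital_if_held = required_capital(mortgage, 0.50)         # about $40,000
capital_if_securitized = required_capital(mortgage, 0.20)  # about $16,000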

These rules provided a powerful incentive for banks to originate mortgages they knew Fannie or Freddie would buy and securitize. The mortgages would then be available to buy back as part of a fancier instrument. The regulatory structure’s attempt at traffic signals was a flop. Markets themselves would not have produced such persistently bad signals or such a horrendous outcome. Once these securities became popular investment vehicles for banks and other institutions (thanks mostly to the regulatory interventions that created and sustained them), still other instruments were built on top of them. This is where “credit default swaps” and other even more complex innovations come into the story. Credit default swaps were a form of insurance against the mortgage-backed securities failing to pay out. Such arrangements would normally be a perfectly legitimate form of risk reduction for investors, but given the house of cards that the underlying securities rested on, they likely accentuated the false “traffic signals” the system was creating.

“I set an ambitious goal. It’s one that I believe we can achieve. It’s a clear goal, that by the end of this decade we’ll increase the number of minority homeowners by at least 5.5 million families. Some may think that’s a stretch. I don’t think it is. I think it is realistic. I know we’re going to have to work together to achieve it. But when we do, our communities will be stronger and so will our economy. Achieving the goal is going to require some good policies out of Washington. And it’s going to require a strong commitment from those of you involved in the housing industry.” – President George W. Bush, 2002

By 2006, the Federal Reserve saw the housing bubble it had been so instrumental in creating and moved to prick it by reversing monetary policy. Money and credit were constricted and interest rates were dramatically raised. It would be only a matter of time before the bubble burst.

Deregulation, a False Culprit

It is patently incorrect to say that “deregulation” produced the current crisis [See Appendix A]. While it is true that new instruments such as credit default swaps were not subject to a great deal of regulation, this was mostly because they were new. Moreover, their very existence was an unintended consequence of all the other regulations and interventions in the housing and financial markets that had taken place in prior decades. The most notable “deregulation” of financial markets in the 10 years prior to the crash of 2008 was the Gramm-Leach-Bliley Act of 1999, passed during the Clinton administration, which allowed commercial banks, investment banks and securities firms to merge in whatever manner they wished, eliminating New Deal-era regulations that prevented such activity. The effects of this Act on the housing bubble itself were minimal. Yet its passage turned out to be helpful, not harmful, during the 2008 crisis because failing investment banks were able to merge with commercial banks and avoid bankruptcy.

The housing bubble ultimately had to come to an end, and with it came the collapse of the instruments built on top of it. Inflation-financed booms end when the industries being artificially stimulated by the inflation find it increasingly difficult to buy the inputs they need at prices that are profitable and also find it increasingly difficult to find buyers for their outputs. In late 2006, housing prices topped out and began to fall as glutted markets and higher input prices due to the previous years’ race to build began to take their toll.

Falling housing prices had two major consequences for the economy. First, many homeowners found themselves in trouble with their mortgages. The low- or no-equity mortgages that had enabled so many to buy homes on the premise that prices would keep rising now came back to bite them. The falling value of their homes meant they owed more than the homes were worth. This problem was compounded in some cases by adjustable rate mortgages with low “teaser” rates for the first few years that then jumped back to market rates. Many of these mortgages were on houses that people hoped to “flip” for an investment profit, rather than on primary residences. Borrowers could afford the lower teaser payments because they believed they could recoup those costs on the gain in value. But with the collapse of housing prices underway, these homes could not be sold for a profit and when the rates adjusted, many owners could no longer afford the payments. Foreclosures soared.

Second, with housing prices falling and foreclosures rising, the stream of payments coming into those mortgage-backed securities began to dry up. Investors began to re-evaluate the quality of those securities. As it became clear that many of those securities were built upon mortgages with a rising rate of default and homes with falling values, the market value of those securities began to fall. The investment banks that held large quantities of securities were forced to take significant paper losses. The losses on the securities meant huge losses for those that sold credit default swaps, especially AIG. With major investment banks writing down so many assets and so much uncertainty about the future of these firms and their industry, the flow of credit in these specific markets did indeed dry up. But these markets are only a small share of the whole commercial banking and finance sector. It remains a matter of much debate just how dire the crisis was come September. Even if it was real, however, the proper course of action was to allow those firms to fail and use standard bankruptcy procedures to restructure their balance sheets.

“I think this is a case where Fannie and Freddie are fundamentally sound, that they are not in danger of going under.” – Barney Frank, 2008

The Recession is the Recovery

The onset of the recession and its visible manifestations in rising unemployment and failing firms led many to call for a “recovery plan.” But it was a misguided attempt to “plan” the monetary system and the housing market that got us into trouble initially. Furthermore, recession is the process by which markets recover. When one builds a 70-story skyscraper on a foundation made for a small cottage, the building should come down. There is no use in erecting an elaborate system of struts and supports to keep the unsafe structure aloft. Unfortunately, once the weaknesses in the U.S. economic structure were exposed, that is exactly what the Federal government set about doing.

One of the major problems with the government’s response to the crisis has been the failure to understand that the bust phase is actually the correction of previous errors. When firms fail and workers are laid off, when banks reconsider the standards by which they make loans, when firms start (accurately) recording bad investments as losses, the economy is actually correcting for previous mistakes. It may be tempting to try to keep workers in the boom industries or to maintain investment positions, but the economy needs to shift its focus. Corrections must be permitted to take their course. Otherwise, we set ourselves up for more painful downturns down the road. (Remember, the 2008 crisis came about because the Federal Reserve did not want the economy to go through the painful process of reordering itself following the collapse of the dot-com bubble.) Capital and labor must be reallocated, expectations must adjust, and the economic system must accommodate the existing preferences of consumers and the real resource constraints that producers face. These adjustments are not pleasant; they are in fact often extremely painful to the individuals who must make them, but they are also essential to getting the system back on track.

When government takes steps to prevent the adjustment, it only prolongs and retards the correction process. Government policies of easy credit produce the boom. Government policies designed to prevent the bust have the potential to transform a market correction into a full-blown economic crisis.

No one wants to see the family business fail, or neighbors lose their jobs, or charitable groups stretched beyond capacity. But in a market economy, bankruptcy and liquidation are two of the primary mechanisms by which resources are reallocated to correct for previous errors in decision-making. As Lionel Robbins wrote in The Great Depression, “If bankruptcy and liquidation can be avoided by sound financing nobody would be against such measures. All that is contended is that when the extent of mal-investment and over-indebtedness has passed a certain limit, measures which postpone liquidation only tend to make matters worse.”

Seeing the recession as a recovery process also implies that what looks like bad news is often necessary medicine. For example, news of slackening home sales, or falling new housing starts, or losses of jobs in the financial sector are reported as bad news. In fact, this is a necessary part of recovery, as these data are evidence of the market correcting the mistakes of the boom. We built too many houses and we had too many resources devoted to financial instruments that resulted from that housing boom. Getting the economy right again requires that resources move away from those industries and into new areas. Politicians often claim they know where resources should be allocated, but the Great Recession of 2008 is only the latest proof they really don’t.

The Bush administration made matters worse by bailing out Bear Stearns in the spring of 2008. This sent a clear signal to financial firms that they might not have to pay the price for their mistakes. Then after that zig, the administration zagged when it let Lehman Brothers fail. There are those who argue that allowing Lehman to fail precipitated the crisis. We would argue that the Lehman failure was a symptom of the real problems that we have already outlined. Having set up the expectation that failing firms would get bailed out, the federal government’s refusal to bail out Lehman confused and surprised investors, leading many to withdraw from the market. Their reaction was not the necessary consequence of letting large firms fail; rather, it was the result of confusing and conflicting government policies. The tremendous uncertainty created by the administration’s arbitrary and unpredictable shifts – most notably Bernanke and Paulson’s unconvincing September 23, 2008, testimony on the details of the Troubled Asset Relief Program – was the proximate cause of the investor withdrawals that prompted the massive bailouts that came in the fall, including those of Fannie Mae and Freddie Mac.

The Bush bailout program was problematic in at least two ways. First, the rationale for such aggressive government action, including the Fed’s injection of billions of dollars in new reserves, was that credit markets had frozen up and no lending was taking place. Several observers at the time called this claim into question, pointing out that aggregate new lending numbers, while growing much more slowly than in the months prior, had not dropped to zero.

Markets in which the major investment banks operated had indeed slowed to a crawl, both because many of their housing-related holdings were being revealed as mal-investments and because the inconsistent political reactions were creating much uncertainty. The regular commercial banking sector, however, was by and large continuing to lend at prior levels.

More important is this fact: the various bailout programs prolonged the persistence of the very errors that were in the process of being corrected! Bailing out firms that are suffering major losses because of errant investments simply prolongs the mal-investments and prevents the necessary reallocation of resources.

The Obama administration’s nearly $800 billion stimulus package in February of 2009 was also predicated on false premises about the nature of recession and recovery. In fact, these were the same false premises which informed the much-maligned Bush Administration approach to the crisis. The official justification for the stimulus was that only a “jolt” of government spending could revive the economy.

The fallacy of job creation by government was first exposed by the French economist Bastiat in the 19th century with his story of the broken window. Imagine a young boy throws a rock through a window, breaking it. The townspeople gather and bemoan the loss to the store owner. But eventually one notes that it means more business for the glazier. And another observes that the glazier will then have money to spend on new shoes. And then the shoe seller will have money to spend on a new suit. Soon, the crowd convinces itself that the broken window is actually quite a good thing.

The fallacy, of course, is that if the window had never been broken, the store owner would still have a functioning window and could spend the money on something else, such as new stock for his store. All the breaking of the window does is force the store owner to spend money he wouldn’t have had to spend if the window had been left intact. There is no net gain in wealth here. If there were, why wouldn’t we recommend urban riots as an economic recovery program?

When government attempts to “create” a job, it is not unlike a vandal who “creates” work for a glazier. There are only three ways for a government to acquire resources: it can tax, it can borrow or it can print money (inflate). No matter what method is used to acquire the resources, the money that government spends on any stimulus must come out of the private sector. If it is through taxes, it is obvious that the private sector has less to spend, leading to losses that at least cancel out any jobs created by government. If it is through borrowing, that lowers the savings available to the private sector (and raises interest rates in the process), reducing the amount the sector can borrow and the jobs it can create. If it is through printing money, it reduces the purchasing power of private sector incomes and savings. When we add to this the general inefficiency of the heavily politicized public sector, it is quite probable that government spending programs will cost more jobs in the private sector than they create.

“This [Government Sponsored Housing] is one of the great success stories of all time…” – Chris Dodd, 2004

The Japanese experience during the 1990s is telling. Following the collapse of their own real estate bubble, Japan’s government launched an aggressive effort to prop up the economy. Between 1992 and 1995, Japan passed six separate spending programs totaling 65.5 trillion yen. But they kept increasing the ante. In April of 1998, they passed a 16.7 trillion yen stimulus package. In November of that year, it was an additional 23.9 trillion. Then there was an 18 trillion yen package in 1999 and an 11 trillion yen package in 2000. In all, the Japanese government passed 10 (!) different fiscal “stimulus” packages, totaling more than 100 trillion yen. Despite all of these efforts, the Japanese economy still languishes. Today, Japan’s debt-to-GDP ratio is one of the highest in the industrialized world, with nothing to show for it. This is not a model we should want to imitate.

It is also the same mistake the United States made in the Great Depression, when both the Hoover and Roosevelt administrations attempted to fight the deepening recession by making extensive use of the federal government and only made matters worse. In addition to the errors of the Federal Reserve System, which exacerbated the downturn its inflationary policies of the 1920s had created, Hoover himself tried to prevent a necessary fall in wages by convincing major industrialists not to cut them, as well as by proposing significant increases in public works and, eventually, a tax increase. All of these worsened the depression.

Roosevelt’s New Deal continued this set of policy errors. Despite claims during the current recession that the New Deal saved us from economic disaster, recent scholarship has solidly affirmed that the New Deal didn’t save the economy. Policies such as the Agricultural Adjustment Act and the National Industrial Recovery Act only interfered with the market’s attempts to adjust and recover, prolonging the crisis. Later policies scared off private investors as they were uncertain about how much and in what ways government would step in next. The result was that six years into the New Deal, unemployment rates were still above 17% and GDP per capita was still well below its long-run trend.

In more recent years, President Nixon’s attempt to fight the stagflation of the early 1970s with wage and price controls was abandoned quickly when the controls did nothing to reduce inflation or unemployment. Most telling for our case is the fact that the Fed’s expansionary policies earlier this decade were intended to “soften the blow” of the dot-com bust in 2001. Of course those policies gave us the inflationary boom that produced the crisis that began in 2008. If the current recession lingers or becomes a second Great Depression, it will not be because of problems inherent in markets, but because the political response to a politically generated boom and bust has prevented the error-correction process from doing its job. The belief that large-scale government intervention is the key to getting us out of a recession is a myth disproven by both history and recent events.

The Future That Awaits Our Children

Commentators have had a field day adding up the trillions of dollars that have been committed in the Bush bailout, the Obama stimulus, and the administration’s proposed budget for 2010. The explosion of spending and debt, whatever the final tab, is unprecedented by any measure. It will “crowd out” a significant portion of private investment, reducing growth rates and wages in the future. We are, in effect, reducing the income of our children tomorrow to pay for the bills of today and yesterday. Large government debt is also a temptation for inflation. In order for governments to borrow, someone must be willing to buy their bonds. Should confidence in a government fall enough (China, notably, has expressed some reluctance to continue buying our debt), it is possible that buyers will be hard to come by. That puts pressure on the government’s monetary authorities to “lubricate” the system by creating new money and credit from thin air.

So, even if the economy gets a lift in the near term from either its own corrective mechanisms or from the government’s reinflation of money and credit, we have not recovered from the hangover. More of what caused the Great Recession of 2008 – easy money, regulatory interventions to direct capital in unsustainable directions, politicians and policy-makers rigging financial markets – is not likely to produce anything but the same outcome: asset price inflation and an eventual “adjustment” we call a recession or depression. Along the way, we will accumulate monumental debts which accentuate the future downturn and saddle us with new burdens.

Unless we can begin to undo the mistakes of the last decade or more, the future that awaits our children will be one that is poorer and less free than it should have been. With politicians mortgaging future generations to the tune of trillions, running and subsidizing auto and insurance companies, spending blindly and printing money hand over fist – all while blaming free enterprise for their own errors – we have a great deal to learn.

As Albert Einstein famously said, doing the same thing over and over again and expecting different results is the definition of insanity. The best we can hope for is that we learn the right lessons from this crisis. We cannot afford to repeat the wrong ones.

“The basic point is that the recession of 2001 wasn’t a typical postwar slump…. To fight this recession the Fed needs more than a snapback… Alan Greenspan needs to create a housing bubble to replace the Nasdaq bubble.” – Paul Krugman, 2002

Appendix A: The Myth of Deregulation

Appendix B: Government Interventions During Crisis Create Uncertainty

Appendix C: Suggested Readings

Cole, Harold and Lee E. Ohanian. 2004. New Deal Policies and the Persistence of the Great Depression: A General Equilibrium Analysis, Journal of Political Economy 112: 779-816.

Friedman, Jeffrey. 2009. A Crisis of Politics, Not Economics: Complexity, Ignorance, and Policy Failure, Critical Review 21: 127-183.

Higgs, Robert. 2008. Credit Is Flowing, Sky Is Not Falling, Don’t Panic, The Beacon, available at http://www.independent.org/blog/?p=201.

Marenzi, Octavio. 2008. Flawed Assumptions about the Credit Crisis: A Critical Examination of US Policymakers, Celent Research, available at http://www.celent.com/124_347.htm.

Prescott, Edward and Timothy J. Kehoe (Editors). 2007. Great Depressions of the Twentieth Century, Minneapolis: Federal Reserve Bank of Minneapolis.

Taylor, John. 2009. Getting Off Track: How Government Actions and Interventions Caused, Prolonged, and Worsened the Financial Crisis, Stanford, CA: Hoover Institution Press.

Woods, Thomas. 2009. Meltdown: A Free-Market Look at Why the Stock Market Collapsed, the Economy Tanked, and Government Bailouts Will Make Things Worse, Washington, DC: Regnery.

Biographies

Lawrence W. Reed is president of the Foundation for Economic Education – www.fee.org – and president emeritus of the Mackinac Center for Public Policy.

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University in Canton, NY. He has been a visiting scholar at Bowling Green State University and the Mercatus Center at George Mason University.

Peter J. Boettke is the Deputy Director of the James M. Buchanan Center for Political Economy, a Senior Research Fellow at the Mercatus Center, and a professor in the economics department at George Mason University.

John Allison served as the Chief Executive Officer of BB&T Corp. until December 2008. Mr. Allison has been the Chairman of BB&T Corp. since July 1989. He is a member of the American Bankers Association and the Financial Services Roundtable.


Peter J. Boettke

Peter Boettke is a Professor of Economics and Philosophy at George Mason University and director of the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center. He is a member of the FEE Faculty Network.


Government Caused the ‘Great Stagnation’ by Peter J. Boettke

Tyler Cowen caused quite a stir with his e-book, The Great Stagnation. In properly assessing his work it is important to state explicitly what his argument actually is. Median real income has stagnated since 1980, and the reason is that the rate of technological advance has slowed. Moreover, the technological advances that have taken place with such rapidity in recent history have improved well-being, but not in ways that are easily measured in real income statistics.

Critics of Cowen more often than not miss the mark when they focus on the wild improvements in our real income due to quality improvements (e.g., cars that routinely go over 100,000 miles) and lower real prices (e.g., the shrinking amount of work time required to acquire even the inferior versions of yesterday’s commodities).

Cowen does not deny this. Nor does Cowen deny that millions of people were made better off with the collapse of communism, the relative freeing of the economies in China and India, and the integration into the global economy of the peoples of Africa and Latin America. Readers of The Great Stagnation should be continually reminded that they are reading the author of In Praise of Commercial Culture and Creative Destruction. Cowen is a cultural optimist, a champion of the free trade in ideas, goods, services and all artifacts of mankind. But he is also an economic realist in the age of economic illusion.

What do I mean by the economics of illusion? Government policies since WWII have created an illusion that irresponsible fiscal policy, the manipulation of money and credit, and expansion of the regulation of the economy are consistent with rising standards of living. This was made possible because of the “low-hanging” technological fruit that Cowen identifies as being plucked in the 19th and early 20th centuries in the US, and in spite of the policies government pursued.

An accumulated economic surplus was created by the age of innovation, which the age of economic illusion spent down. We are now coming to the end of that accumulated surplus, and thus the full weight of government inefficiencies is starting to be felt throughout the economy. Our politicians promised too much, our government spends too much in an apparent chase after the promises made, and our population has become too accustomed to both government guarantees and government largess.

Adam Smith long ago argued that the power of self-interest expressed in the market was so strong that it could overcome hundreds of impertinent restrictions that government puts in the way. But there is some tipping point at which that ability to overcome will be thwarted, and the power of the market will be overcome by the tyranny of politics. Milton Friedman used that language to talk about the 1970s; we would do well to resurrect that language to talk about today.

Cowen’s work is a subversive tract in radical libertarianism because he identifies that government growth (measured in terms of both scale and scope) was possible only because of the rate of technological improvements made in the late 19th and early 20th century.

We realized the gains from trade (Smithian growth), we realized the gains from innovation (Schumpeterian growth), and we fought off (in the West, at least) totalitarian government (Stupidity). As long as Smithian growth and Schumpeterian growth outpace Stupidity, tomorrow’s trough will still be higher than today’s peak. It will appear that we can afford more Stupidity than we actually can because the power of self-interest expressed through the market offsets its negative consequences.

But if and when Stupidity is allowed to outpace the Smithian gains from trade and the Schumpeterian gains from innovation, then we will first stagnate and then enter a period of economic backwardness — unless we curtail Stupidity, explore new trading opportunities, or discover new and better technologies.

In Cowen’s narrative, the rate of discovery had slowed, all the new trading opportunities had been exploited, and yet government continued to grow in both scale and scope. And when he examines three sectors in the US economy — government services, education, and health care — he finds little improvement since 1980 in the production and distribution of their services. In fact, there is evidence that performance has gotten worse over time, especially as government’s role in health care and education has expanded.

The Great Stagnation is a condemnation of government growth over the 20th century. It was made possible only by the amazing technological progress of the late 19th and early 20th century. But as the rate of technological innovation slowed, the costs of government growth became more evident. The problem, however, is that so many have gotten used to the economics of illusion that they cannot stand the reality staring them in the face.

This is where we stand in our current debt ceiling debate. Government is too big, too bloated. Washington faces a spending problem, not a revenue problem. But too many within the economy depend on the government transfers to live and to work. Yet the economy is not growing at a rate that can afford the illusion. Where are we to go from here?

Cowen’s work makes us think seriously about that question. How can the economic realist confront the economics of illusion? Cowen has presented the basic dilemma in a way that makes the central message of economic realism visible not only to libertarians (if they would just look, or listen carefully to his podcast at EconTalk) but to anyone willing to read and think critically about our current political and economic situation.

The Great Stagnation signals the end of the economics of illusion and — let’s hope — paves the way for a new age of economic realism.

This post first appeared at Coordination Problem.

Peter J. Boettke

Peter Boettke is a Professor of Economics and Philosophy at George Mason University and director of the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center. He is a member of the FEE Faculty Network.

RELATED ARTICLE: 5 Reasons Why America Is Headed to a Budget Crisis

What Killed Economic Growth? by Jeffrey A. Tucker

Debating why the economy is so sluggish is an American pastime. It fills the op-eds, burns up the blogosphere, consumes the TV pundits, and dominates the political debates.

It’s a hugely important question because many people are seriously frustrated about the problem. The recent popularity of political cranks and crazies from the left and right — backed by crowds embracing nativist and redistributionist nostrums — testifies to that.

Sometimes it’s good to look at the big picture. The Economic Freedom of the World report does this with incredible expertise. If you believe in gathering data, looking just at what the evidence shows, and drawing conclusions, you will appreciate this report. It sticks to just what we know and what we can measure. The editors have been producing the report since 1996, so the persistence of the cause-and-effect patterns it documents is undeniable.

The report seeks measures of five key indicators of economic freedom: security of property rights, soundness of money, size of government, freedom to trade globally, and the extent of regulation. All their measures are transparent and heavily scrutinized by experts on an ongoing basis. If you question how a certain measure was arrived at, you are free to do so. It’s all there, even the fantastically detailed data sets, free for the download.

The report examines 157 countries with data available for 100 countries back to 1980. A total of 42 distinct variables are used in the index.

The big takeaway from this report: freer economies vastly outperform unfree economies by every measure of well-being.

The countries in the top quarter of the freest economies have average incomes more than 7 times higher than those in the bottom quarter (the least free). This is even true for the poor: the average income of the poor in free economies is 6 times that of the poor in unfree economies. The average income of the lowest income group in free economies is still 50 percent greater than the overall average income in the least free economies.

Life expectancy is 80.1 years in the top quarter versus 63.1 years in the bottom quarter.

The report further shows that civil liberties are more protected in freer economies than less free economies.

It’s a beautiful thing how this report puts to rest a century of ideological debates. Indeed, these results are not generated by political ideology. They are generated by facts on the ground, the real conditions of law, regulation, institutions, legislation, and policy.

The implications are screamingly obvious. If you want a country to grow richer, you have to embrace freedom in economic life. If you want to drive a country into poverty, there is a way: grow the government, destroy the money, shut down trade, and heavily regulate all production and consumption.

One leaves this report with the question: Why are we still debating this?

What about the United States?

Everyone knows that the US has a problem. Despite living through the greatest explosion of technology and communication in the history of the world, a transformation that should have set off a wonderful economic boom similar to what we saw in the 19th century, we’ve seen pathetic results in growth and household income.

A quick, casual look shows what I mean. Here’s the percent change in GDP from the end of World War II to the present.

And here is real median household income from 1984 to 2013:

From those two pictures alone, you can discern the source of voter frustration, and also the general atmosphere of angst.

People want to know why, and whom to blame. The Economic Freedom Index gives you a strong hint.

From 1970 to 2000, the United States was generally listed as the third freest economy in the world, behind only Singapore and Hong Kong. Starting in 2000, the US began to slip. Between 2000 and today, its summary rating in the index fell by 0.9 points. This doesn’t sound like much, but “a one-point decline in the EFW rating is associated with a reduction in the long-term growth of GDP of between 1.0 and 1.5 percentage points annually,” says the report, and this adds up, year after year.
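The compounding at work here is easy to sketch in a few lines of arithmetic. The numbers below (a 3 percent trend growth rate, a 1-point drag, a 15-year window) are illustrative assumptions of mine, not figures from the report:

```python
# Hypothetical illustration of how a 1.0-1.5 percentage-point drag on
# annual growth compounds. All inputs are assumed, not taken from the report.

def gdp_after(years, growth_rate, start=100.0):
    """Index of GDP after `years` of constant annual growth."""
    return start * (1 + growth_rate) ** years

years = 15                            # roughly the span since 2000
baseline = gdp_after(years, 0.03)     # assumed 3% trend growth
dragged = gdp_after(years, 0.02)      # same economy growing 1 point slower

shortfall = (baseline - dragged) / baseline * 100
print(f"After {years} years the slower economy is {shortfall:.1f}% smaller.")
# After 15 years the slower economy is 13.6% smaller.
```

A one-point drag that looks trivial in any single year leaves the economy roughly an eighth smaller after a decade and a half, which is what the report means by "this adds up, year after year."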

Relative to other countries, listed most free to least free, the US has slipped from the number 3 spot all the way to number 16. Countries that are ahead of the US include Australia, Chile, Ireland, Canada, Jordan, Taiwan, New Zealand, Hong Kong, and Singapore.

And here is a fact that I found incredible: The former Soviet state of Georgia ranks at number 12. And can you guess which country is just behind the US at number 17? The formerly Communist nightmare of Romania. That Romania is only slightly less free than the United States is great progress for Romanians, but should be an embarrassment for Americans.

The fall in economic freedom in this country has been precipitous. The authors of the report further note that this decline is highly unusual. Almost all countries in the world are getting freer, which accounts for the thrilling fall in global poverty.

But the US is going the opposite direction, fast: “Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States.”

What in particular accounts for the largest portion of this slide? It’s about the security of property. The drug war, the bailouts, the rise of forced transfers to political elites, eminent domain, and asset forfeiture all contribute. There are other problems with regulation and taxation, but it is the lack of security in what we own that has been decisive. This is what kills investment, confidence in the future, and the ability to accumulate capital that is so essential to prosperity.

What strikes me when looking at all this data, and the crystal-clear connections in it, is the strange silence on the part of the opinion class. People are flailing around for answers. Where’s the growth? Who is stealing the future? Maybe it’s the immigrants, foreign nations, and the rise of inequality. Maybe technology is taking jobs. Maybe people are just lazy and incompetent.

Or maybe we should look at the data. It’s all about freedom.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World.  Follow on Twitter and Like on Facebook.

Progressivism Is Illiberal: Modern Liberalism Is at Odds with Peaceful Interaction by Sandy Ikeda

A New York magazine article headline declares, “De Blasio’s Proposal to Destroy Pedestrian Times Square Is the Opposite of Progressive.”

That’s Bill de Blasio, the current mayor of New York City, who was elected in 2013 after running unabashedly as the progressive, socially democratic candidate. I find it interesting that people are surprised by the mayor’s illiberal stands on many (though not all) of the major issues he has faced in his short time in office.

One of the latest is his proposal to return cars to Times Square Plaza, in the heart of Midtown Manhattan, by razing the outdoor space created by the administration of his Republican predecessor, Michael Bloomberg. You see, Mayor Bill says he doesn’t like the goings-on there, which lately include women soliciting topless on the street and people dressed as Elmo hustling tourists. His solution? We can’t control all the hucksterism, so let’s shut the whole thing down!

Justin Davidson, the author of that New York magazine article, says it well:

If de Blasio really believes that the best way to deal with street performers in Times Square is to tear up the pedestrian plaza, may I suggest he try reducing homelessness by eradicating doorways and subway grates?

My point goes beyond Times Square Plaza, of course, although that controversy is instructive, as are others (such as his recent attempt to rein in Uber).

The approach the mayor takes in this and similar matters is characteristic of any political ideology that views unrestrained political power as a legitimate tool of social change. That includes neoconservatism and other modern political ideologies, including progressivism.

While it’s a caricature to say that what progressives would not forbid, they would make mandatory, they show a pattern of using force to ban what they don’t like and of mandating what they do. If you think that sounds illiberal, you’re right. Progressivism isn’t liberalism, especially of the classical variety. But even the watered-down liberalism of campus radicals of the 1960s paid more heed to the principle of tolerance than progressives today do.

Progressivism versus Liberalism

Progressivism today goes beyond the liberal position that, for example, same-sex marriage should have the same legal status as heterosexual marriage, to the belief that the state should threaten physical violence against anyone who refuses to associate or do business with same-sex couples.

Progressives have a low tolerance for opposing points of view. Unfortunately, so do some libertarians, but for the most part libertarians do not endorse using political power to eradicate what they believe are disagreeable public activities. Libertarians are much closer to genuine liberals than progressives are.

To a genuine liberal, tolerance means more than endorsing a wide range of beliefs and practices. It means allowing nonviolent people to say and do things that we strongly disagree with, disapprove of, or find highly offensive. It means not assuming our own moral superiority over the wickedness or stupidity of our ideological opponents. English writer Evelyn Beatrice Hall captured that liberal spirit when she (and not Voltaire) wrote, “I disapprove of what you say, but I will defend to the death your right to say it.”

The plaza and the streets it encompasses were, of course, the creation of government, so we’re not talking about the municipality bulldozing private property. But it’s not the government-created structure the mayor is objecting to; it’s the purely voluntary — “unregulated” — activities going on in it that he doesn’t like and wants to wipe out with heavy hands and hammy fists.

Closing the Gap Economy

The activity in Times Square Plaza is related to what I called in a recent column the “gap economy,” which refers to the unregulated, money-making activities that arise in the free spaces left open by government regulation and that compete with businesses that have adapted themselves to the mixed economy. Progressives like Mayor de Blasio seem to fear what they cannot regulate and control. They don’t understand that the free market is regulated, too: its regulatory principle is not coercion but persuasion, competition, and reputation.

Progressives profoundly mistrust the spontaneous, especially when it’s the result of people acting out of self-interest. But that’s the hallmark and the essence of urban life. New York Times architecture critic Michael Kimmelman sees it this way:

Time and again, Mr. de Blasio leaves an impression that he understands very little about the dynamics of urbanism and the physical fabric of the city — its parks and plazas, its open spaces, libraries, transit network and streetscape, which all contribute to issues he cares most about, like equity and social mobility.

He doesn’t understand because he probably thinks in terms of specific, static objectives (such as his so-called “Vision Zero,” which I write about in “Um, Scarcity?”) rather than what Kimmelman rightly refers to as “the dynamics of urbanism.” As the urbanist (and libertarian-friendly) Jane Jacobs explained, those dynamics are messy and inherently unpredictable.

It doesn’t seem to matter to the mayor that ordinary people have demonstrated their preference for Times Square Plaza by showing up in record numbers, just as it doesn’t matter that ordinary New Yorkers have gained from gap-economy activities such as Uber or Airbnb. What concerns progressives like the mayor is that it’s not happening the way they want it to happen. (In the case of Uber, thank goodness, the truly liberal elements of New York soundly defeated the progressive forces.)

Davidson writes,

I understand that the mayor doesn’t care for the carnival atmosphere at Times Square — neither do I. But eradicating a pedestrian plaza because you don’t like who’s walking there is like blasting away a beach because you object to bikinis or paving a park because you hate squirrels. It represents such a profound misunderstanding of public space that it makes me question the mayor’s perception of what counts as progressive.

It’s not the mayor Davidson should be questioning so much as the principles that motivate him. De Blasio just happens to illustrate progressivism in a particularly glaring way.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.

RELATED ARTICLE: Lessons Learned From Kim Davis About Religious Liberty and Government Accommodation

Obama Administration Declares War on Franchisors and Subcontractors by Walter Olson

In a series of unilateral moves, the Obama administration has been introducing an entirely new regime of labor law without benefit of legislation, upending decades’ worth of precedent so as to herd as many workers into unions as possible.

The newest, yesterday, from the National Labor Relations Board, is also probably the most drastic yet: in a case against waste hauler Browning-Ferris Industries, the Board declared that from now on, franchisors and companies that employ subcontractors and temporary staffing agencies will often be treated as if they were really direct employers of those other firms’ workforces: they will be held liable for alleged labor law violations at the other workplaces, and will be under legal compulsion to bargain with unions deemed to represent their staff.

The new test, one of “industrial realities,” will ask whether the remote company has the power, even the potential power, to significantly influence working conditions or wages at the subcontractor or franchisee; the previous test asked whether the remote company exercised a “direct and immediate impact” on the workers’ terms and conditions — say, if that second company is involved in hiring and determining pay levels.

This is a really big deal; as our friend Iain Murray puts it at CEI, it has the potential to “set back the clock 40 years, to an era of corporate giants when few people had the option of being their own bosses while pursuing innovative employment arrangements.”

  • A tech start-up currently contracts out for janitorial, cafeteria, and landscaping services. It will now be at legal risk should its hired contractors be later found to have violated labor law in some way, as by improperly resisting unionization. If it wants to avoid this danger of vicarious liability, it may have to fire the outside firms and directly hire workers of its own.
  • A national fast-food chain currently employs only headquarters staff, with franchisees employing all the staff at local restaurants. Union organizers can now insist that it bargain centrally with local organizers, at risk for alleged infractions by the franchisees. To escape, it can either try to replace its franchise model with company-owned outlets — so that it can directly control compliance — or at least try to exert more control over franchisees, twisting their arms to recognize unions or requiring that an agent of the franchisor be on site at all times to monitor labor law compliance.

Writes management-side labor lawyer Jon Hyman:

If staffing agencies and franchisors are now equal under the National Labor Relations Act with their customers and franchisees, then we will see the end of staffing agencies and franchises as viable business models.

Moreover, do not think for a second that this expansion of joint-employer liability will stop at the NLRB. The Department of Labor recently announced that it is exploring a similar expansion of liability for OSHA violations. And the EEOC is similarly exploring the issue for discrimination liability.

And Beth Milito, senior legal counsel at the National Federation of Independent Business, quoted at The Hill: “It will make it much harder for self-employed subcontractors to get jobs.”

What will happen to the thriving white-van culture of small skilled contractors that now provides upward mobility to so many tradespeople? Trade it in for a company van, start punching someone’s clock, and just forget about building a business of your own.

What do advocates of these changes intend to accomplish by destroying the economics of business relationships under which millions of Americans are presently employed? For many, the aim is to force much more of the economy into the mold of large-payroll, unionized employers, a system for which the 1950s are often (wrongly) idealized.

One wonders whether many of the smart New Economy people who bought into the Obama administration’s promises really knew what they were buying.

This post first appeared at Cato.org.

Walter Olson

Walter Olson is a senior fellow at the Cato Institute’s Center for Constitutional Studies.

Obama’s Econ Advisers: Occupational Licensing Is a Disaster by Mikayla Novak

Libertarians received a rare pleasant surprise when President Obama’s Council of Economic Advisers issued a report highly critical of occupational licensing.

The report cited numerous problems arising from this increasingly burdensome regulatory practice, which requires ordinary Americans to obtain expensive licenses and permits to perform ordinary jobs.

It is a belated recognition by the administration that government has long been acting against the best interests of workers and consumers.

And it might give us something of a warm inner glow to consider, as the Wall Street Journal recently did, that reforming occupational licensing could catalyze important economic reforms that transcend traditional political and ideological divides.

And reform is vital: each and every day, occupational licensing destroys the ability of individuals to freely and peacefully pursue their own livelihoods.

Licensing hurts workers

Occupational licensing locks countless people out of dignified and meaningful job opportunities.

The CEA report indicates that more than a quarter of all workers in the United States need a government license or permit to legally work. Two-thirds of the increase in licensing since the 1960s is attributable to an increase in the number of professions being licensed, not to growth within traditionally licensed professions like law or medicine.

The data show that licensed workers earn on average 28 percent more than unlicensed workers. Only some of this observed premium is accounted for by the differences in education, training and experience between the two groups. The rest comes from reducing supply, locking competitors out of the market and extracting higher prices from consumers.

What makes professional licensing so invidious is that it serves as a barrier to entry in the labor market, simply because it takes so much time and money to obtain a license to work.

For young people, immigrants, and low-income individuals, it can be extremely difficult to stump up the cash and find the time — sometimes hundreds or even thousands of hours — to get licensed. The fees to maintain a license can also be exorbitant.

Compounding the problem is that licensing requirements are spreading into more industries, such as construction, food catering, and hairdressing — occupations where it used to be easy to start a career.

Today, there is arguably no more lethal poison for labor market freedom and upward mobility than occupational licensing.

Licensing hurts consumers

Defenders of occupational licensing say that workers need to be licensed because without it consumers would be harmed by poor service.

In the absence of licensing, children will be taught improperly at school, patients won’t get adequate health care in hospital, home owners will not get their leaky sinks fixed, and somebody could fall victim to an improper haircut.

But, in the name of promoting quality, licensing regulations perversely raise costs and reduce choices for consumers.

The CEA concludes that, by imposing entry barriers against potential competitors who could undercut the prices of incumbent suppliers, licensing raises prices for consumers by between 3 and 16 percent.

Moreover, the effect of licensing on product quality is unclear. The report notes that the empirical literature doesn’t demonstrate an increase in quality from licensure.

By restricting supply, licensing dulls the incentive for incumbents to provide the best quality products because the threat of new entrants competing with better offerings is diminished.

Perversely, the inflated prices offered by licensed providers may force some consumers to seek unlicensed providers, or to use less effective substitutes, or to do jobs themselves — in some cases increasing the risk of accidents.

In a blow to the notion of efficient government bureaucracy, the CEA indicates that government licensing boards routinely fail in monitoring licensed providers, contributing to the lack of improvement in quality.

Ending the war on livelihood freedom

To restore a climate friendly to economic liberty, people must feel they have a direct, personal stake in what Deirdre McCloskey calls “market-tested betterment” — that is to say, in capitalism.

There is no better way to achieve this than to allow individuals to build their own livelihoods, finding decent jobs serving customers with the goods and services they want, at prices they mutually agree on.

The argument for economic liberty is also grounded in the moral imperative of respecting the freedom of other people to lead their own lives as they see fit, including their right to choose their own livelihood.

Proponents of occupational licensing can always serve up a parade of hypothetical horribles about things that could go wrong if people didn’t need the state’s permission to work, but nothing has been more harmful to workers and consumers than occupational licensing.

Mikayla Novak

Mikayla Novak is a senior researcher for the Institute of Public Affairs, an Australian free market think tank, and holds a doctorate in economics. She specializes in public finance, economic history, and the history of classical liberal thought.

Video Game Developers Face the Final Boss: The FDA by Aaron Tao

As I drove to work the other day, I heard a very interesting segment on NPR that featured a startup designing video games to improve cognitive skills and relieve symptoms associated with a myriad of mental health conditions.

One game, Project Evo, has shown good preliminary results in training players to ignore distractions and stay focused on the task at hand:

“We’ve been through eight or nine completed clinical trials, in all cognitive disorders: ADHD, autism, depression,” says Matt Omernick, executive creative director at Akili, the Northern California startup that’s developing the game.

Omernick worked at LucasArts for years, making Star Wars games where players attack their enemies with lightsabers. Now, he’s working on Project Evo. It’s a total switch in mission, from dreaming up best-sellers for the commercial market to designing games to treat mental health conditions.

“The qualities of a good video game, things that hook you, what makes the brain — snap — engage and go, could be a perfect vessel for actually delivering medicine,” he says.

In fact, the creators believe their game will be so effective it might one day reduce or replace the drugs kids take for ADHD.

This all sounds very promising.

In recent years, many observers (myself included) have expressed deep concerns that we are living in the “medication generation,” as defined by the rapidly increasing numbers of young people (which seems to have extended to toddlers and infants!) taking psychotropic drugs.

As experts and laypersons continue to debate the long-term effects of these substances, the news of intrepid entrepreneurs creating non-pharmaceutical alternatives to treat mental health problems is definitely a welcome development.

But a formidable final boss stands in the way:

[B]efore they can deliver their game to players, they first have to go through the Food and Drug Administration — the FDA.

The NPR story goes on to detail how navigating the FDA’s bureaucratic labyrinth is akin to the long, grinding campaign required to clear the final dungeon of any Legend of Zelda game. Pharmaceutical companies are intimately familiar with the FDA’s slow and expensive approval process for new drugs, and for this reason, it should come as no surprise that Silicon Valley companies do their best to avoid government regulation. One venture capitalist goes so far as to say, “If it says ‘FDA approval needed’ in the business plan, I myself scream in fear and run away.”

Dynamic, nimble startups are much more in tune with market conditions than the ever-growing regulatory behemoth that is defined by procedure, conformity, and irresponsibility. As a result, conflict between these two worlds is inevitable:

Most startups can bring a new video game to market in six months. Going through the FDA approval process for medical devices could take three or four years — and cost millions of dollars.

In the tech world, where app updates and software patches are part of every company’s daily routine just to keep up with consumer habits, technology can become outdated in the blink of an eye. A regulatory hold on a product can spell a death sentence for any startup trying to stay ahead of fierce market competition.

Akili is the latest victim to get caught in the tendrils of the administrative state, and worst of all, in the FDA, which distinguished political economist Robert Higgs has described as “one of the most powerful of federal regulatory agencies, if not the most powerful.” The agency’s awesome authority extends to over twenty-five percent of all consumer goods in the United States and thus “routinely makes decisions that seal the fates of millions.”

Despite its perceived image as the nation’s benevolent guardian of health and well-being, the FDA’s actual track record is anything but, and its failures have been extensively documented in a vast economic literature.

The “knowledge problem” has foiled the whims of central planners and social engineers in every setting, and the FDA is not immune. By taking a one-size-fits-all approach to regulatory policy, it fails to take into account the individual preferences, social circumstances, and physiological attributes of the people who compose a diverse society.

For example, people vary widely in their responses to drugs, depending on variables that range from dosage to genetic makeup. In a field as complex as human health, an institution forcing its way on a population is bound to cause problems (for a particularly egregious example, see what happened with the field of nutrition).

The thalidomide tragedy of the 1960s is usually cited as the reason we need a centralized regulatory agency staffed by altruistic public servants to keep the market from being flooded with toxins, snake oil, and other harmful substances. However, this benefit needs to be weighed against the costs of withholding beneficial products.

For example, the FDA’s delay of beta blockers, which were widely available in Europe to reduce heart attacks, was estimated to have cost tens of thousands of lives. Despite this infamous episode and other repeated failures, the agency cannot overcome the institutional incentives it faces as a government bureaucracy. These factors strongly skew its officials towards avoiding risk and getting blamed for visible harm. Here’s how the late Milton Friedman summarized the dilemma with his usual wit and eloquence:

Put yourself in the position of a FDA bureaucrat considering whether to approve a new, proposed drug. There are two kinds of mistakes you can make from the point of view of the public interest. You can make the mistake of approving a drug that turns out to have very harmful side effects. That’s one mistake. That will harm the public. Or you can make the mistake of not approving a drug that would have very beneficial effects. That’s also harmful to the public.

If you’re such a bureaucrat, what’s going to be the effect on you of those two mistakes? If you make a mistake and approve a product that has harmful side effects, you are a devil incarnate. Your misdeed will be spread on the front page of every newspaper. Your name will be mud. You will get the blame. If you fail to approve a drug that might save lives, the people who would object to that are mostly going to be dead. You’re not going to hear from them.

Critics of America’s dysfunctional healthcare system have pointed out the significant role of third-party spending in driving up prices, and how federal and state regulations have created perverse incentives and suppressed the functioning of normal market forces.

In regard to government restrictions on the supply of medical goods, the FDA deserves special blame for driving up the costs of drugs, slowing innovation, and denying treatment to the terminally ill, all while demonstrating no competency in product safety.

Going back to the NPR story, a Pfizer representative was quoted as saying that “game designers should go through the same FDA tests and trials as drug manufacturers.”

Those familiar with the well-known phenomenon of regulatory capture and the basics of public choice theory should not be surprised by this attitude. Existing industries, with their legions of lobbyists, come to dominate the regulatory apparatus and learn to manipulate the system to their advantage, at the expense of new entrants.

Akili and other startups hoping to challenge the status quo would have to run past the gauntlet set up by the “complex leviathan of interdependent cartels” that makes up the American healthcare system. I can only wish them the best, and hope Schumpeterian creative destruction eventually sweeps the whole field of medicine.

Abolishing the FDA and eliminating its too-often abused power to withhold innovative medical treatments from patients and providers would be one step toward genuine healthcare reform.

A version of this post first appeared at The Beacon.

Aaron Tao

Aaron Tao is the Marketing Coordinator and Assistant Editor of The Beacon at the Independent Institute. Follow him on Twitter here.