
Minnesota Middle School Restricted Cell Phones a Year Ago, the Results Are ‘Just Night and Day’

When I was in middle school, I had a flip phone meant exclusively for contacting family members (and maybe a couple of close friends). Half the time I didn’t even want to text on it because it was one of those keyboards where you have to press a button two or three times to get the letter you want. I hardly used my phone at all, which I believe contributed to why I enjoyed middle school so much.

Unlike an overwhelming number of kids and teenagers today, I was not glued to my screen. Rather, my time in middle school was rooted in practically anything but the cyber-verse. My friends and I spent every moment before and after school, and in between classes, engaging in quality interactions. We talked during lunch, and it wasn’t about what was trending online. Of course, as I got older and moved through high school and college, social media grew in prominence. So, really, my time in middle school was the only season I had relatively free of technological domination.

The research and studies conducted on social media use are numerous, and it’s remarkable how the majority of them report negative impacts. The conclusions seem to read the same: “Depression, anxiety, bullying, and anti-social tendencies are on the rise, and it’s all linked to social media usage.” Between October 2019 and October 2020, social platforms grew 21.3%, and as of 2021 some 4.48 billion people worldwide (about 93.33% of internet users) were active on social media.

Although statistics show adults between 27 and 42 are the biggest social media users, I would argue the most unfortunate victims of social media addiction are the younger generations. That makes a school such as Maple Grove Middle School in Minnesota a breath of fresh air in a world tainted by online toxins.

About a year ago, Maple Grove chose to restrict cell phone use in the school. While it wasn’t an outright ban, students were encouraged to place their phones in their lockers at the start of the day, and anyone who did not comply and used their phone would have it confiscated until the school day finished. According to the principal, Patrick Smith, a variety of factors contributed to the decision. “[T]here’s a lot of drama that comes from social media, and a lot of conflict that comes from it,” he said.

When Smith and the school staff noticed the kids were hardly interacting with each other throughout the day, they knew a change had to be made.

After a year of restricted screen time, the “kids are happy,” Smith shared. “They’re engaging with each other. … [I]t’s just night and day.” When the plan was first announced, parents applauded, the principal noted. And they continue to give positive feedback, including parents who have shared that their kids are paying more attention and participating in more discussions. One parent said her son is “thriving.”

Meg Kilgannon, Family Research Council’s senior fellow for Education Studies, gave The Washington Stand her take on the school’s hopeful results. She deemed it “a great first step in helping teens regulate their use of technology in an unrestricted culture.” But unfortunately, the downsides of social media go beyond depression and anxiety.

New research revealed that 73% of teenagers surveyed have been exposed to pornography, with some as young as 11 when the explicit material was viewed. Experts say social media plays a key role in this as well as the identity crisis sweeping the nation. “We know that the porn industry is relentlessly targeting youth,” Kilgannon added. Additionally, “The work of adolescence is to form one’s identity by discerning God’s call on your life.” So, for Kilgannon, social media being both a source of sexual content and identity confusion means “limiting [its] access to children during the school day is a bare minimum kind of advance that we should all be able to support.”

When it comes to fostering the development of a child, Kilgannon shared, “This work needs to be done in the safety of a loving family and supported by institutions we build as a culture — churches and schools. These are places where we encounter each other and build relationships.” She continued, “This encounter is interrupted by overuse of personal devices like cell phones.”

Going back to my middle school days, I am so thankful for a community that was not overrun by our pocket devices. The friendships felt so genuine, and the days richer. My experience leads me to believe the kids at Maple Grove will be seriously helped by the school’s actions. As Kilgannon concluded, “What a gift to this community for the school to allow their students and faculty the space for genuine human connection. I hope this school is developing a ‘best practice’ guideline to share with others — we need this to ‘go viral!’”

AUTHOR

Sarah Holliday

Sarah Holliday is a reporter at The Washington Stand.

EDITOR’S NOTE: This Washington Stand column is republished with permission. All rights reserved. ©2023 Family Research Council.


The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.

Everything Solid Melts into Air

Francis X. Maier: The tech revolution has undermined literacy, the supernatural, and sexuality, as it boosted consumer appetites and eroded habits of responsible ownership and mature political participation.


Print literacy and the ownership of property anchor human freedom.  Both can be abused, of course.  Printed lies can kill.  Owning things, and wanting more of them, can easily morph into greed.  But reasonable personal ownership of things like a home, tools, and land tutors us in maturity.  It enhances a person’s agency, and thus his dignity.  It grounds us in reality and gives us a stake in the world, because if we don’t maintain and protect what we have, we lose it, often at great personal cost. The printed word, meanwhile, feeds our interior life and strengthens our ability to reason.

Together they make people much harder to sucker and control than free-floating, human consumer units.  This is why the widespread ownership of property by individuals – or the lack of it – has big cultural and political implications, some of them distinctly unhappy.

I mention this because I’ve made my living with the printed word.  And it occurred to me (belatedly, in 2003) that while I own the ladder in my garage, the hammer and wrench in my storeroom drawer, and even the slab of dead metal hardware and electronics that I work on, I don’t own the software that runs it or enables me to write.  Microsoft or Apple does, depending on the laptop I use. . .and I just didn’t notice it while I was playing all those video games.

What finally grabbed my attention, exactly 20 years ago, was The dotCommunist Manifesto by Columbia University law professor Eben Moglen.  Here’s a slice of the content:

A Spectre is haunting multinational capitalism — the spectre of free information. All the powers of “globalism” have entered into an unholy alliance to exorcize the spectre: Microsoft and Disney, the World Trade Organization, the United States Congress and European Commission.

Where are the advocates of freedom in the new digital society who have not been decried as pirates, anarchists, communists?  Have we not seen that many of those hurling the epithets were merely thieves in power, whose talk of “intellectual property” [rights] was nothing more than an attempt to retain unjustifiable privileges in a society irrevocably changing. . . .

Throughout the world, the movement for free information announces the arrival of a new social structure, born of the transformation of bourgeois industrial society by the digital technology of its own invention. . . .[The] bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.  Constant revolutionizing of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation, distinguish the bourgeois epoch from all earlier ones. . . .All that is solid melts into air.

And so on.  The rest of it is standard Marxist cant, adapted to the digital age.  But for me it was, and still is, compelling prose.  And this, despite the fact that the original Communist Manifesto led to murderous regimes and mass suffering, and the awkward fact that Prof. Moglen’s dream of abolishing “intellectual property” would wipe out my family’s source of income along with an entire class of more or less independent wordsmiths.

What Moglen did see, though, earlier and more clearly than many other critics, was the dark side of the modern digital revolution.  Microsoft, Apple, Google, and similar corporations have created a vast array of marvelous tools for medicine, communications, education, and commerce.

I’m writing these words with one of those tools.  They’ve also sparked a culture-wide upheaval resulting in social fragmentation and bitter antagonisms.  Their ripple effect has undermined the humanities and print literacy, obscured the supernatural, confused our sexuality, hypercharged the porn industry, and fueled consumer appetites while simultaneously eroding habits of responsible ownership and mature political participation.

They promised a new age of individual expression and empowerment.  The reality they delivered, in the words of a constitutional scholar friend, is this:  “Once you go down the path of freedom, you need to restrain its excesses.  And that’s because too much freedom leads to fragmentation, and fragmentation inevitably leads to a centralization of power in the national government.  Which is why today, we the people really aren’t sovereign.  We now live in a sort of technocratic oligarchy, with the congealing of vast wealth in a very small group of people.”

Nothing about today’s tech revolution preaches “restraint.”

I’m a Catholic capitalist.  I’m also, despite the above, a technophile.  America’s economic system was very good to my immigrant grandparents.  It lifted my parents from poverty. It has allowed my family to experience good things unimaginable to my great-grandparents.  But I have no interest in making big corporations – increasingly hostile to Christian beliefs – even more obscenely profitable and powerful.

So, promptly after reading that Eben Moglen text two decades ago, I dumped my Microsoft and Apple operating systems.  I became a Free Software/Open Software zealot.  I even taught myself Linux, a free operating system with free software largely uncontaminated by Big Tech.

And that’s where I met the CLI: the “command line interface.”  Most computers today, even those running Linux, use a pleasing GUI, or graphical user interface.  It’s the attractive, easily accessible desktop that first greets you on the screen.  It’s also a friendly fraud, because the way machines operate and “think” is very, very different from the way humans imagine, feel, and reason.

In 2003, learning Linux typically involved the CLI: a tedious, line-by-line entry of commands to a precise, unforgiving, alien machine logic.  That same logic and its implications, masked by a sunny GUI, now come with every computer on the planet.

I guess I’m saying this:  You get what you pay for. And sometimes it’s more than, and different from, what you thought.  The tech revolution isn’t going away.  It’s just getting started.  And right on time, just as Marx and Moglen said, “all that is solid melts into air.”  Except God.  But of course, we need to think and act like we believe that.

You may also enjoy:

Joseph Cardinal Ratzinger’s God is the genuine reality

David Warren’s Regeneration

AUTHOR

Francis X. Maier

Francis X. Maier is a senior fellow in Catholic studies at the Ethics and Public Policy Center.

Ford Burns Through Billions, Expects to Lose $12 Billion on Electric Vehicle Line

Despite the losses, Ford continues to push forward and hopes to manufacture two million EVs a year by 2026 and hit an 8% profit margin for its EV division. The company is chasing Elon Musk’s Tesla for EV sales in the U.S. and remains far behind the electric car giant. Tesla, which started in 2003, lost money for ten years before finally turning a profit in 2013. Musk’s company made $12.6 billion in 2022, an impressive jump from $5.5 billion in 2021.

They don’t care. As long as the Democrats are running/ruining the economy with their environmental/climate garbage, it’s the American taxpayer that will have to pay for this mess.

Ford Says It Will Lose $3 Billion on EVs This Year as It Touts Startup Mentality

Ford Motor Co. expects to lose about $3 billion on its electric-vehicle business this year, a reminder of how far traditional auto makers have to go in turning their EV portfolios profitable.

Ford disclosed the figure Thursday while outlining a new financial-reporting structure intended to give investors better insight into the performance of its three business units. Ford finance chief John Lawler described the EV division as a startup inside the 119-year-old company, and said it is normal for a fledgling business to rack up losses.

Ford shares were down about 1.3% in afternoon trading Thursday…

Read more.

Ford Projects Its EV Division Will Lose Billions This Year

The Ford Motor Company is going full throttle toward electric vehicle manufacturing, but that decision will cost the Michigan-based carmaker billions this year alone.

Ford said Thursday that it expects its EV division will lose $3 billion in 2023 as it pushes to produce more vehicles and build electric battery plants in Kentucky, Tennessee, and Michigan, The Financial Times reported. The carmaker wasn’t surprised by the massive loss of money as it views its EV division, known as Model e, as a “start-up.”

“Ford Model e is an EV start-up within Ford and, as everyone knows, EV start-ups lose money while they invest in capability, develop knowledge, build volume and gain share,” said John Lawler, Ford’s chief financial officer.


Ford plans to explain its financials to investors in more detail, including how it will stick to its goal of selling only zero-carbon-emission vehicles by 2040, according to The Financial Times. Ford is relying on Ford Blue, its gas-powered vehicle production, to fund the carmaker’s transition to EVs.

Ford Blue is expected to rake in $7 billion this year, and the company’s commercial vehicles division, Ford Pro, is expected to double last year’s earnings to $6 billion this year. Lawler blamed spending on new battery plants and battery technology for the carmaker’s EV losses.

Last month, the carmaker was criticized for collaborating with a Chinese company to build a battery plant in Michigan. In its proposal, Ford said it would partner with the Chinese company Contemporary Amperex Technology on the plant that would employ 2,500 people when it begins production in 2026.

Virginia Republican Gov. Glenn Youngkin withdrew his state from consideration for the new battery plant because of Ford’s partnership with the Chinese company. Michigan Democratic Gov. Gretchen Whitmer, however, has pushed for the battery plant to come to the Great Lakes State and celebrated Ford’s decision to build the plant in Michigan.


RELATED ARTICLES:

Mass Hysteria Driving the EV Phenomenon

Why Are There No EV Charging Stations at Interstate Rest Stops? Blame the Feds!

Democrats Passed $7,500 Electric Vehicle Tax Credit, Then EV Prices Were Immediately Raised $7,500

EDITOR’S NOTE: This Geller Report is republished with permission. ©All rights reserved.

‘I thought crypto exchanges were safe’: The lesson in FTX’s collapse

The safest way to store cryptocurrency is in your own crypto wallet.


Anthony* (a friend) called a few weeks ago, deeply worried.

A deputy principal of a high school in Queensland, Anthony had spent hundreds of thousands of dollars over the past year buying cryptocurrencies, borrowing money against the equity in his home.

But now all his assets, valued at A$600,000, were stuck in an account he couldn’t access.

He’d bought through FTX, the world’s third-biggest cryptocurrency exchange, endorsed by celebrities such as Seinfeld co-creator Larry David, basketball champions Steph Curry and Shaquille O’Neal, and tennis ace Naomi Osaka.

With FTX’s spectacular collapse, he’s now awaiting the outcome of the liquidation process that is likely to see him, 30,000 other Australians and more than 1.2 million customers worldwide lose everything.

“I thought these exchanges were safe,” Anthony said.

He was wrong.

Not like stock exchanges

Cryptocurrency exchanges are sometimes described as being like stock exchanges. But they are very different to the likes of the London or New York stock exchanges, institutions that have weathered multiple financial crises.

Stock exchanges are both highly regulated and help regulate share trading. Cryptocurrency exchanges, on the other hand, are virtually unregulated and serve no regulatory function.

They’re just private businesses that make money by helping “mum and dad” investors to get into crypto trading, profiting from the commission charged on each transaction.

Indeed, the crypto exchanges that have grown to dominate the market — such as Binance, Coinbase and FTX — arguably undermine the whole vision that drove the creation of Bitcoin and blockchains — because they centralise control in a system meant to decentralise and liberate finance from the power of governments, banks and other intermediaries.

These centralised exchanges are not needed to trade cryptocurrency, and are pretty much the least safe way to buy and hold crypto assets.

Trading before exchanges

In the early days of Bitcoin (all the way back in 2009), the only way to acquire it was to “mine” it — earning new coins by performing the complex computations required to verify and record transactions on a digital ledger (called a blockchain).
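Conceptually, those computations are a proof-of-work search: keep varying a throwaway number (a nonce) until the hash of the block data meets a difficulty target. The Python sketch below is purely illustrative; real Bitcoin mining hashes structured block headers against a vastly harder target, and the data string here is invented.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 digest of data+nonce starts
    with `difficulty` hex zeros -- a toy stand-in for proof of work."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Anyone can verify the result with a single hash; finding it took many.
nonce, digest = mine("alice pays bob 1 BTC")
print(nonce, digest)
```

That asymmetry, expensive to find but trivial to check, is what lets strangers on a network agree on the ledger without trusting each other.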

The coins would be stored in a digital “wallet”, an application similar to a private bank account, accessible only by a password or “private key”.

A wallet can be virtual or physical, on a small portable device similar in appearance to a USB stick or small phone. Physical wallets are the safest because they can be unplugged from the internet when not being used, minimising the risk of being hacked.

Before exchanges emerged, trading involved owners selling directly to buyers via online forums, transferring coins from one wallet to another like any electronic funds transfer.

Decentralised vs centralised

All this, however, required some technical knowledge.

Cryptocurrency exchanges reduced the need for such knowledge. They made it easy for less tech-savvy investors to get into the market, in the same way web browsers have made it easy to navigate the Internet.

Two types of exchanges emerged: decentralised (DEX) and centralised (CEX).

Decentralised exchanges are essentially online platforms to connect the orders of buyers and sellers of cryptocurrencies. They are just there to facilitate trading. You still need to hold cryptocurrencies in your own wallet (known as “self-custody”).

Centralised exchanges go much further, eliminating wallets by offering a one-stop-shop service. They aren’t just an intermediary between buyers and sellers. Rather than self-custody, they act as custodian, holding cryptocurrency on customers’ behalf.
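The core service an exchange provides, pairing buy orders with sell orders, is simple enough to sketch in a few lines. The names and prices below are invented for illustration; real venues add order books, partial fills, and (for a DEX) on-chain settlement between the traders' own wallets.

```python
def match(bids, asks):
    """Pair the highest bid with the lowest ask while prices cross.
    A DEX performs this matching role without holding the assets."""
    bids = sorted(bids, key=lambda o: -o[1])   # best (highest) bid first
    asks = sorted(asks, key=lambda o: o[1])    # best (lowest) ask first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        buyer, _bid = bids.pop(0)
        seller, ask = asks.pop(0)
        trades.append((buyer, seller, ask))    # fill at the ask price
    return trades

# Hypothetical orders: (trader, price)
print(match([("alice", 27_100), ("bob", 27_050)],
            [("carol", 27_080), ("dan", 27_200)]))
# [('alice', 'carol', 27080)]
```

On a decentralised exchange the story ends there: the matched traders settle wallet-to-wallet. A centralised exchange instead takes custody of both sides' assets before, during, and after the match.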

Exchange, broker, bank

Centralised exchanges have proven most popular. Seven of the world’s ten biggest crypto exchanges by trading volume are centralised.

But what customers gain in simplicity, they lose in control.

You don’t give your money to a stock exchange, for example. You trade through a broker, who draws on your trading account when you buy and deposits the proceeds back into your account when you sell.

A CEX, on the other hand, acts as an exchange, a brokerage (taking customers’ fiat money and converting it into crypto or vice versa), and as a bank (holding customers’ crypto assets as custodian).

This is why FTX was holding cash and crypto assets worth US$10-50 billion. It also acted like a bank by borrowing and lending cryptocurrencies — though without customers’ knowledge or agreement, and without any of the regulatory accountability imposed on banks.

Holding both wallets and keys, founder-owner Sam Bankman-Fried “borrowed” his customers’ funds to prop up his other businesses. Customers realised too late they had little control. When it ran into trouble, FTX simply stopped letting customers withdraw their assets.

The power of marketing

Like stockbrokers, crypto exchanges make their money by charging a commission on every trade. They are therefore motivated to increase trading volumes.

FTX did this most through celebrity and sports marketing. Since it was founded in 2019 it has spent an estimated US$375 million on advertising and endorsements, including buying the naming rights to the stadium used by the Miami Heat basketball team.

Such marketing has helped to create the illusion that FTX and other exchanges were as safe as mainstream institutions. Without such marketing, it’s debatable whether the value of the cryptocurrency market would have risen from US$10 billion in 2014 to US$876 billion in 2022.

Not your key, not your coins

There’s an adage among crypto investors: “Not your key, not your coins, it’s that simple.”

What this means is that your crypto isn’t safe unless you have self-custody, storing your own coins in your own wallet to which you alone control the private key.
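The adage can be made concrete with a simplified sketch. A real wallet derives an ECDSA key pair on the secp256k1 curve and hashes the public key into an address; the stand-in below just hashes the private key directly. The custody logic it illustrates is the same either way: whoever knows the private key controls the coins.

```python
import hashlib
import secrets

# A private key is just a 256-bit secret number only the owner knows.
private_key = secrets.token_bytes(32)

# Simplified stand-in for address derivation (real wallets derive an
# ECDSA public key first and hash that, not the private key).
address = hashlib.sha256(private_key).hexdigest()[:40]

# Self-custody means you alone hold private_key. Hand it to a
# custodial exchange and the exchange, not you, controls whatever
# the address owns -- "not your key, not your coins."
print(address)
```

Nothing about the address reveals the key, but nothing stops whoever holds the key from spending: that is why a custodian such as FTX could move customer funds at will.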

The bottom line: crypto exchanges are not like stock exchanges, and CEXs are not safe. If the worst eventuates, whether it be an exchange collapse or cyber attack, you risk losing everything.

All investments carry risks, and the unregulated crypto market carries more risk than most. So follow three golden rules.

First, do some homework. Understand the process of trading crypto. Learn how to use a self-custody wallet. Until governments regulate crypto markets, especially exchanges, you’re largely on your own.

Second, if you’re going to use an exchange, a DEX is more secure. There is no evidence to date that any DEX has been hacked.

Lastly, in this world of volatility, only risk what you can afford to lose.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AUTHORS

Paul Mazzola

Paul Mazzola is a Lecturer in Banking and Finance, Faculty of Business and Law, University of Wollongong.

Mitchell Goroch

Mitchell Goroch is a Cryptocurrency Trader and Researcher at the Centre for Responsible Organisations & Practices, University of Wollongong.

RELATED ARTICLE: Democrat Mega Donor Ponzi-Crook Sam Bankman-Fried Arrested Before He Could Testify before Congress

EDITOR’S NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

The case for nuclear power

Despite its lethal past, nuclear energy is the clean and cost-effective power source we need.


In the fall 2022 issue of the technology-and-society journal The New Atlantis, authors Thomas and Nate Hochman examine the pros and cons of building new nuclear power plants in the United States.  The case of nuclear power is fraught with political issues that are inextricably tied up with technical ones, but the Hochmans do a good job of laying out the problems facing nuclear power and some possible solutions.

If nuclear power had not been invented until 2010, say, it would probably be welcomed as the keystone in our society’s answer to climate change.  Imagine a source of the most fungible type of energy — electricity — that takes teaspoons of nuclear fuel compared to carloads or pipelines full of fossil fuels, emits zero greenhouse gases, and when properly engineered runs more reliably than wind, solar, hydro, or sometimes even natural gas, as the misadventure of Texas’s Great Freeze of February 2021 showed.  What’s to oppose?  Well, a lot, as the Hochmans admit.

Deadly history

It is perhaps unfortunate that the first major use of nuclear technology was in the closing days of World War II, when the US became the only nation so far to employ nuclear weapons in wartime, killing hundreds of thousands of Japanese with bombs dropped on Hiroshima and Nagasaki.  The long shadow of nuclear war has cast a darkness over the technology of nuclear power ever since, despite optimistic but misguided attempts to promote peaceful uses in the 1950s.

The Hochmans describe the golden era of US nuclear power plant construction, which ran roughly from 1967 to 1987, as a period in which the two major US manufacturers — General Electric and Westinghouse — offered “turn-key” plants that were priced competitively with coal-fired units.  The utilities snapped them up, and the vast majority of existing plants were built in those two decades.

The turn-key pricing turned out to be a big mistake, however.  Manufacturers expected the cost per plant to decline as economies of scale kicked in, but for a variety of reasons both technical and regulatory, the hoped-for economies never materialised.  The particular pressurised-water technology that was used was adapted from early nuclear submarines, and in retrospect may not have been the best choice for domestic power plants.  By the time the companies realised their mistake and switched to cost-plus contracts, they had lost a billion dollars, and utilities became much less enthusiastic when they had to pay the true costs of building the plants.

In the meantime, the National Environmental Policy Act (NEPA) was passed in 1970, making it much harder to obtain permits to build complicated things like nuclear plants.  In the pre-Act days, permitting a plant sometimes took less than a year, but once NEPA passed, such speediness (and the resulting economies of fast construction) was a thing of the past.

Then came the Three Mile Island nuclear accident in 1979 and the Chernobyl plant fire and disaster in 1986, further blackening the reputation of nuclear power in the public mind.  Add to that the not-in-my-back-yard problems faced by attempts to find permanent storage locations for nuclear waste, and by 1990 the US nuclear industry was in a kind of coma from which it has not yet recovered.

The Hochmans point to France as a counterexample of a nation that made a conscious decision to go primarily nuclear for its electric power, and even today about 70% of France’s power is nuclear.  But even France is having problems maintaining its aging plants, and French nuclear promoters face the same sorts of political headwinds that prevail in the US.

Viable option

Now that climate change is an urgent priority for millions of people and dozens of governments, the strictly technical appeal of nuclear power is still valid. It really does make zero greenhouse gases in operation, and when properly engineered, it can be the most reliable form of power, providing the essential base-load capacity that is needed to stabilise grids that will draw an increasing amount of energy from highly intermittent solar and wind sources in the future. Eventually, energy-storage technology may make it possible to store enough energy to smooth out the fluctuations of renewables, but we simply don’t have that now, and it may not come for years or decades.

In the meantime, there are plans on drawing boards for so-called “modular” plants.  If every single automobile were a custom design from the ground up, including a from-scratch engine and body, only the likes of Elon Musk could afford to drive.  But that was how nuclear plants were made back in the day:  each design was customised to the particular site and customer specifications.

If manufacturers had the prospects of sales and freedom to develop a modular one-size-fits-all design, they could turn the process into something similar to the way mobile homes are made today:  in factories, and then shipped out in pieces to be simply assembled on site.  And newer designs favouring gravity feeds over powered pumps can be made much safer so that if anything goes wrong, the operators simply walk away and the plant safely shuts itself down.

Standing in the way of these innovations are (1) the prevailing negative political winds against nuclear power, enforced with more emotion than logic by environmental groups and major political parties, and (2) the need to change regulations to allow such technical innovations, which currently are all but blocked by existing laws and rules.

In the Hochmans’ best-case scenario, the US begins importing modular plants from countries where an existing base of nuclear know-how allows efficient manufacturing, which these days means places like China.  Even if the US nuclear industry turned on full-speed today, it would take a decade or more to recover the expertise base that was lost a generation ago when the industry collapsed.  Regulations and regulatory agencies would change from merely obstructing progress to reasoned cooperation with nuclear-plant manufacturing and installation.  And we would derive an increasing proportion of our energy from a source that has always made a lot of technical sense.

On the other hand, things may just go on as they are now, with old plants closing and no new ones to take their place. That would be bad for a number of reasons, but reason hasn’t been the only consideration in the history of nuclear energy up to now.

This article has been republished from the author’s blog, Engineering Ethics, with permission.

AUTHOR

Karl D. Stephan

Karl D. Stephan received the B. S. in Engineering from the California Institute of Technology in 1976. Following a year of graduate study at Cornell, he received the Master of Engineering degree in 1977…

RELATED ARTICLE: In Europe, the nuclear “comeback”


EDITOR’S NOTE: This MercatorNet column is republished with permission. ©All rights reserved.

How Disruptive Innovation Is Accelerating the Growth of Alternative Learning Models

Disruptive innovation is reshaping how children learn and expanding access to alternative education models.


Disruptive innovation usually begins on the margins, with a few intrepid users embracing a new product or service. Abetted by new technologies, a disruptive innovation penetrates the mainstream when its quality proves as good as, if not better than, more established models.

According to author and investor Michael Horn, a classic example of disruptive innovation is Airbnb, which began on the margins as a couch-surfing tool and then, enabled by technology, upended the hospitality industry.

“Initially we thought [disruptive innovation] could be any low-cost innovation,” Horn told me on this week’s episode of the LiberatED podcast. “What we observed over time was that you needed some sort of technology enabler that allowed you to carry the original value proposition around convenience, affordability, and accessibility and allowed you to improve without just replicating all of the cost features of the incumbent.”

Horn should know. He co-founded the Clayton Christensen Institute with Clayton Christensen, who coined the term “disruptive innovation” back in the 1990s. Since then, Horn has studied the role of disruptive innovation in education and has written several books on the topic, including his newly released book, From Reopen to Reinvent: (Re)Creating School for Every Child.

In our podcast conversation this week, Horn and I focused on the ways in which disruptive innovation is reshaping how many children learn, as well as accelerating the growth of alternative learning models.

For instance, while homeschooling began its modern revival a half-century ago, and microschools, or small, multi-age learning environments, have existed for decades—including some of the ones I highlighted in my Unschooled book—it wasn’t until the advent of new technologies that homeschooling and microschooling became a mainstream option for millions of families.

Virtual schools and platforms such as Sora Schools, My Tech High, ASU Prep Digital, and Socratic Experience enable students, many of whom may be registered as homeschoolers, to learn from anywhere and have access to a more personalized curriculum. Similarly, Khan Academy, Coursera, Udemy, and Outschool give students around the world access to content and curriculum experts to make it easier to choose an alternative learning path, or supplement a conventional one.

Fast-growing microschool networks such as Prenda and KaiPod are combining educational technology with small, in-person learning pods to enable many more families to have access to a personalized, flexible microschool experience. KaiPod has recently teamed up with virtual providers such as Sora Schools and Socratic Experience to offer pods tailored to families choosing a specific curriculum.

“Leveraging technology allows you to stay connected to the curriculum, learn from anywhere, learn from the best experts anywhere,” said Horn. “And then surround the child with a variety of novel supports that are customized to what that child needs, what the family needs, and unleashes all sorts of things.”

Blending new technologies with the personalization and flexibility of microschooling and homeschooling will continue to disrupt the education sector and turn alternative learning models into mainstream options for many more families.

AUTHOR

Kerry McDonald

Kerry McDonald is a Senior Education Fellow at FEE and host of the weekly LiberatED podcast. She is also the author of Unschooled: Raising Curious, Well-Educated Children Outside the Conventional Classroom (Chicago Review Press, 2019), an adjunct scholar at the Cato Institute, and a regular Forbes contributor. Kerry has a B.A. in economics from Bowdoin College and an M.Ed. in education policy from Harvard University. She lives in Cambridge, Massachusetts with her husband and four children. You can sign up for her weekly newsletter on parenting and education here.


Listen to the weekly LiberatED Podcast on Apple, Spotify, Google, and Stitcher, or watch it on YouTube, and sign up for Kerry's weekly LiberatED email newsletter to stay up-to-date on educational news and trends from a free-market perspective.


EDITOR'S NOTE: This FEE column is republished with permission. © All rights reserved.

Why We Need to Make Mistakes: Innovation Is Better than Efficiency by Sandy Ikeda

“I think it is only because capitalism has proved so enormously more efficient than alternative methods that it has survived at all,” Milton Friedman told economist Randall E. Parker for Parker’s 2002 book, Reflections on the Great Depression.

But I think innovation, not efficiency, is capitalism’s greatest strength. I’m not saying that the free market can’t be both efficient and innovative, but it does offer people a strong incentive to abandon the pursuit of efficiency in favor of innovation.

What Is Efficiency?

In its simplest form, economic efficiency is about given ends and given means. Economic efficiency requires that you know what end, among all possible ends, is the most worthwhile for you to pursue and what means to use, among all available means, to attain that end. You’re being efficient when you’re getting the highest possible benefit from an activity at the lowest possible cost. That’s a pretty heavy requirement.

Being inefficient, then, implies that for a given end, the benefit you get from that end is less than the cost of the means you use to achieve it. Or, as my great professor Israel Kirzner puts it: if you want to go uptown, don't take the downtown train.
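The "given ends, given means" notion can be made concrete with a toy calculation. The options and numbers below are purely hypothetical, chosen to echo Kirzner's uptown-train example:

```python
# Toy illustration of efficiency as "given ends, given means":
# every option and its benefit and cost are assumed known in advance,
# so being efficient reduces to picking the option with the highest
# net benefit. All figures here are hypothetical.

options = {
    "downtown train": {"benefit": 0, "cost": 3},   # wrong direction: no benefit
    "uptown train":   {"benefit": 10, "cost": 3},
    "taxi uptown":    {"benefit": 10, "cost": 8},  # same end, costlier means
}

def net_benefit(name):
    return options[name]["benefit"] - options[name]["cost"]

efficient_choice = max(options, key=net_benefit)
print(efficient_choice)  # the uptown train: same benefit as the taxi, lower cost
```

Note what the sketch leaves out: the whole menu of options had to be known up front. Innovation, as described below, is precisely what happens when that menu is incomplete.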

What Is Innovation?

Innovation means doing something significantly novel. It could be doing an existing process in a brand new way, such as being the first to use a GPS tracking system in your fleet of taxis. Or, innovation could mean doing something that no one has ever done before, such as using smartphone technology to match car owners with spare time to carless people who need to get somewhere in a hurry, à la Uber.

Innovation, unlike efficiency, entails discovering novel means to achieve a given end, or discovering an entirely new end. And unlike efficiency, in which you already know about all possible ends and means, innovation takes place only when you lack knowledge of all means, all ends, or both.

Sometimes we mistakenly say someone is efficient when she discovers a new way to get from home to work. But that’s not efficiency; that’s innovation. And a person who copies her in order to reduce his commute time is not an innovator — but he is being efficient. The difference hinges on whether you’re creating new knowledge.

Where’s the Conflict?

Starting a business that hasn’t been tried before involves a lot of trial and error. Most of the time the trials, no matter how well thought out, turn out to contain errors. The errors may lie in the means you use or in the particular end you’re pursuing.

In most cases, it takes quite a few trials and many, many errors before you hit on an outcome that has a high enough value and low enough costs to make the enterprise profitable. Is that process of trial and error, of experimentation, an example of economic efficiency? It is not.

If you begin with an accurate idea both of the value of an end and of all the possible ways of achieving that end, then you don’t need to experiment. Spending resources on trial and error would be wasteful. It’s then a matter of execution, which isn’t easy, but the real heavy lifting in the market process, both from the suppliers’ and the consumers’ sides, is done by trying out new things — and often failing.

Experimentation is messy and apparently wasteful, whether in science or in business. You do it precisely because you’re not sure how to answer a particular question, or because you’re not even sure what the right question is. There are so many failures. But in a world where our knowledge is imperfect, which is the world we actually live in, most of what we have to do in everyday life is to innovate — to discover things we didn’t know we didn’t know — rather than trying to be efficient. Being willing to suffer failure is the only way to make discoveries and to introduce innovations into the world.

Strictly speaking, then, if you want to innovate, being messy is unavoidable, and messiness is not efficient. Yet, if you want to increase efficiency, you can’t be messy. Innovation and efficiency usually trade off against each other because if you’re focused on doing the same thing better and better, you’re taking time and energy away from trying to do something new.

Dynamic Efficiency?

Some have tried to describe this process of innovation as “dynamic efficiency.” It may be quibbling over words, but I think trying to salvage the concept of efficiency in this way confuses more than it clarifies. To combine efficiency and innovation is to misunderstand the essential meanings of those words.

What would it mean to innovate efficiently? I suppose it would mean something like “innovating at least cost.” But how is it possible to know, before you’ve actually created a successful innovation, whether you’ve done it at least cost? You might look back and say, “Gee, I wouldn’t have run experiments A, B, and C if only I’d known that D would give me the answer!” But the only way to know that D is the right answer is to first discover, through experimentation and failure, that A, B, and C are the wrong answers.

Both efficiency and innovation best take place in a free market. But the greatest rewards to buyers and sellers come not from efficiency, but from innovation.

Sandy Ikeda

Sandy Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

Networks Topple Scientific Dogma by Max Borders

Science is undergoing a wrenching evolutionary change.

In fact, most of what we consider to be carried out in the name of science is dubious at best, flat wrong at worst. It appears we’re putting too much faith in science — particularly the kind of science that relies on reproducibility.

In a University of Virginia meta-study, half of 100 psychology study results could not be reproduced.

Experts making social science prognostications turned out to be mostly wrong, according to political science writer Philip Tetlock’s decades-long review of expert forecasts.

But there is perhaps no more egregious example of bad expert advice than in the area of health and nutrition. As I wrote last year for Voice & Exit:

For most of our lives, we’ve been taught some variation on the food pyramid. The advice? Eat mostly breads and cereals, then fruits and vegetables, and very little fat and protein. Do so and you’ll be thinner and healthier. Animal fat and butter were considered unhealthy. Certain carbohydrate-rich foods were good for you as long as they were whole grain. Most of us anchored our understanding about food to that idea.

“Measures used to lower the plasma lipids in patients with hyperlipidemia will lead to reductions in new events of coronary heart disease,” said the National Institutes of Health (NIH) in 1971. (“How Networks Bring Down Experts (The Paleo Example),” March 12, 2015)

The so-called “lipid theory” had the support of the US surgeon general. Doctors everywhere fell in line behind the advice. Saturated fats like butter and bacon became public enemy number one. People flocked to the supermarket to buy up “heart healthy” margarines. And yet, Americans were getting fatter.

But early in the 21st century something interesting happened: people began to go against the grain (no pun) and they started talking about their small experiments eating saturated fat. By 2010, the lipid hypothesis — not to mention the USDA food pyramid — was dead. Forty years of nutrition orthodoxy had been upended. Now the experts are joining the chorus from the rear.

The Problem Goes Deeper

But the problem doesn’t just affect the soft sciences, according to science writer Ron Bailey:

The Stanford statistician John Ioannidis sounded the alarm about our science crisis 10 years ago. “Most published research findings are false,” Ioannidis boldly declared in a seminal 2005 PLOS Medicine article. What’s worse, he found that in most fields of research, including biomedicine, genetics, and epidemiology, the research community has been terrible at weeding out the shoddy work largely due to perfunctory peer review and a paucity of attempts at experimental replication.

Richard Horton of the Lancet writes, “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” And according to Julia Belluz and Steven Hoffman, writing in Vox,

Another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)

Contrast the progress of science in these areas with that of applied sciences such as computer science and engineering, where more market feedback mechanisms are in place. It’s the difference between Moore’s Law and Murphy’s Law.

So what’s happening?

Science’s Evolution

Three major catalysts are responsible for the current upheaval in the sciences. First, a few intrepid experts have started looking around to see whether studies in their respective fields are holding up. Second, competition among scientists to grab headlines is becoming more intense. Third, informal networks of checkers — “amateurs” — have started questioning expert opinion and talking to each other. And the real action is in this third catalyst, creating as it does a kind of evolutionary fitness landscape for scientific claims.

In other words, for the first time, the cost of checking science is going down as the price of being wrong is going up.

Now, let’s be clear. Experts don’t like having their expertise checked and rechecked, because their dogmas get called into question. When dogmas are challenged, fame, funding, and cushy jobs are at stake. Most will fight tooth and nail to stay on the gravy train, which can translate into coming under the sway of certain biases. It could mean they’re more likely to cherry-pick their data, exaggerate their results, or ignore counterexamples. Far more rarely, it can mean they’re motivated to engage in outright fraud.

Method and Madness

Not all of the fault for scientific error lies with scientists, per se. Some of it lies with methodologies and assumptions most of us have taken for granted for years. Social and research scientists have far too much faith in data aggregation, a process that can drop the important circumstances of time and place. Many researchers make inappropriate inferences and predictions based on a narrow band of observed data points that are plucked from wider phenomena in a complex system. And, of course, scientists are notoriously good at getting statistics to paint a picture that looks like their pet theories.

Some sciences even have their own holy scriptures, like psychology’s Diagnostic and Statistical Manual. These guidelines, when married with government funding, lobbyist influence, or insurance payouts, can protect incomes but corrupt practice.

But perhaps the most significant methodological problem with science is over-reliance on the peer-review process. Peer review can perpetuate groupthink, the cartelization of knowledge, and the compounding of biases.

The Problem with Expert Opinion

The problem with expert opinion is that it is often cloistered and restrictive. When science starts to seem like a walled system built around a small group of elites, many of whom share ideas only with each other, hubris can take hold. No amount of training or smarts can keep up with an expansive network of people who have a bigger stake in finding the truth than in shoring up the walls of a guild or cartel.

It’s true that to some degree, we have to rely on experts and scientists. It’s a perfectly natural part of specialization and division of labor that some people will know more about some things than you, and that you are likely to need their help at some point. (I try to stay away from accounting, and I am probably not very good at brain surgery, either.) But that doesn’t mean that we shouldn’t question authority, even when the authority knows more about their field than we do.

The Power of Networks

But when you get an army of networked people, sometimes amateurs, thinking, talking, tinkering, and toying with ideas, you can hasten a proverbial paradigm shift. And this is exactly what we are seeing.

It’s becoming harder for experts to count on the vagaries and denseness of their disciplines to keep their power. But it’s in the cross-disciplinary pollination of the network that so many different good ideas can sprout and be tested.

The best thing that can happen to science is that it opens itself up to everyone, even people who are not credentialed experts. Then, let the checkers start to talk to each other. Leaders, influencers, and force-multipliers will emerge. You might think of them as communications hubs or bigger nodes in a network. Some will be cranks and hacks. But the best will emerge, and the cranks will be worked out of the system in time.

The network might include a million amateurs willing to give a pair of eyes or a different perspective. Most in this army of experimenters get results and share their experiences with others in the network. What follows is a wisdom-of-crowds phenomenon. Millions of people not only share results, but challenge the orthodoxy.

How Networks Contribute to the Republic of Science

In his legendary 1962 essay, “The Republic of Science,” scientist and philosopher Michael Polanyi wrote the following passage. It beautifully illustrates the problems of science and of society, and it explains how they will be solved in the peer-to-peer age:

Imagine that we are given the pieces of a very large jigsaw puzzle, and suppose that for some reason it is important that our giant puzzle be put together in the shortest possible time. We would naturally try to speed this up by engaging a number of helpers; the question is in what manner these could be best employed.

Polanyi says you could progress through multiple parallel-but-individual processes. But the way to cooperate more effectively

is to let them work on putting the puzzle together in sight of the others so that every time a piece of it is fitted in by one helper, all the others will immediately watch out for the next step that becomes possible in consequence. Under this system, each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated. We have here in a nutshell the way in which a series of independent initiatives are organized to a joint achievement by mutually adjusting themselves at every successive stage to the situation created by all the others who are acting likewise.

Just imagine if Polanyi had lived to see the Internet.

This is the Republic of Science. This is how smart people with different interests and skill sets can help put together life’s great puzzles.

In the Republic of Science, there is certainly room for experts. But they are hubs among nodes. And in this network, leadership is earned not by sitting atop an institutional hierarchy with the plumage of a postdoc, but by contributing, experimenting, communicating, and learning with the rest of a larger hive mind. This is science in the peer-to-peer age.

Max Borders

Max Borders is Director of Idea Accounts and Creative Development for Emergent Order. He was previously the editor of the Freeman and director of content for FEE. He is also co-founder of the event experience Voice & Exit.

The Average American Today Is Richer than John D. Rockefeller by Donald J. Boudreaux

This Atlantic story reveals how Americans lived 100 years ago. By the standards of a middle-class American today, that lifestyle was poor, inconvenient, dreary, and dangerous. (Only a few years later — in 1924 — the 16-year-old son of a sitting US president would die of an infected blister that the boy got on his toe while playing tennis on the White House grounds.)

So here’s a question that I’ve asked in one form or another on earlier occasions, but that is so probing that I ask it again: What is the minimum amount of money that you would demand in exchange for your going back to live even as John D. Rockefeller lived in 1916?

Would $21.7 million in 2016 dollars (about one million 1916 dollars) do it? What about a billion 2016 — or 1916 — dollars? Would this sizable sum be enough to enable you to purchase a quantity of high-quality 1916 goods and services that would at least make you indifferent between living in 1916 America and living (on your current income) in 2016 America?

Think about it. Hard. Carefully.

If you were a 1916 American billionaire you could, of course, afford prime real estate. You could afford a home on 5th Avenue or one overlooking the Pacific Ocean or one on your own tropical island somewhere (or all three). But when you traveled from your Manhattan digs to your west-coast palace, it would take a few days, and if you made that trip during the summer months, you’d likely not have air-conditioning in your private railroad car.

And while you might have air-conditioning in your New York home, many of the friends’ homes that you visit — as well as restaurants and business offices that you frequent — were not air-conditioned. In the winter, many were also poorly heated by today’s standards.

To travel to Europe took you several days. To get to foreign lands beyond Europe took you even longer.

Might you want to deliver a package or letter overnight from New York City to someone in Los Angeles? Sorry. Impossible.

You could neither listen to radio (the first commercial radio broadcast occurred in 1920) nor watch television. You could, however, afford the state-of-the-art phonograph of the era. (It wasn’t stereo, though. And — I feel certain — even today’s vinylphiles would prefer listening to music played off of a modern compact disc to listening to music played off of a 1916 phonograph record.) Obviously, you could not download music.

There really wasn’t very much in the way of movies for you to watch, even though you could afford to build your own home movie theater.

Your telephone was attached to a wall. You could not use it to Skype.

Your luxury limo was far more likely to break down while you were being chauffeured about town than is your car today to break down while you are driving yourself to your yoga class. While broken down and waiting patiently in the back seat for your chauffeur to finish fixing your limo, you could not telephone anyone to inform that person that you’ll be late for your meeting.

Even when in residence at your Manhattan home, if you had a hankering for some Thai red curry or Vindaloo chicken or Vietnamese Pho or a falafel, you were out of luck: even in the unlikely event that you even knew of such exquisite dishes, your chef likely had no idea how to prepare them, and New York’s restaurant scene had yet to feature such exotic fare. And while you might have had the money in 1916 to afford to supply yourself with a daily bowlful of blueberries at your New York home in January, even for mighty-rich you the expense was likely not worthwhile.

Your wi-fi connection was painfully slow — oh, wait, right: it didn’t exist. No matter, because you had neither computer nor access to the Internet. (My gosh, there weren’t even any blogs for you to read!)

Even the best medical care back then was horrid by today’s standards: it was much more painful and much less effective. (Remember young Coolidge.) Antibiotics weren’t available. Erectile dysfunction? Bipolar disorder? Live with ailments such as these. That was your only option.

You (if you are a woman) or (if you are a man) your wife and, in either case, your daughter and your sister had a much higher chance of dying as a result of giving birth than is the case today. The child herself or himself was much less likely to survive infancy than is the typical American newborn today.

Dental care wasn’t any better. Your money didn’t buy you a toothbrush with vibrating bristles. (You could, however, afford the very finest dentures.)

Despite your vanity, you couldn’t have purchased contact lenses, reliable hair restoration, or modern, safe breast augmentation. And forget about liposuction to vacuum away the results of your having dined on far too many cream-sauce-covered terrapin.

Birth control was primitive: it was less reliable and far more disruptive of pleasure than are any of the many inexpensive and widely available birth-control methods of today.

Of course, you adore precious-weacious little Rover, but your riches probably could not buy for Rover veterinary care of the sort that is routine in every burgh throughout the land today.

You were completely cut off from the cultural richness that globalization has spawned over the past century. There was no American-inspired, British-generated rock’n’roll played on electric guitars. And no reggae. Jazz was still a toddler, with only a few recordings of it.

You could afford to buy the finest Swiss watches and clocks, but even they couldn’t keep time as accurately as does a cheap Timex today (not to mention the accuracy of the time kept by your smartphone).

Honestly, I wouldn’t be remotely tempted to quit the 2016 me so that I could be a one-billion-dollar-richer me in 1916. This fact means that, by 1916 standards, I am today more than a billionaire. It means, at least given my preferences, I am today materially richer than was John D. Rockefeller in 1916. And if, as I think is true, my preferences here are not unusual, then nearly every middle-class American today is richer than was America’s richest man a mere 100 years ago.

This post first appeared at Cafe Hayek.

Donald J. Boudreaux

Donald Boudreaux is a senior fellow with the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center at George Mason University, a Mercatus Center board member, a professor of economics and former economics-department chair at George Mason University, and a former FEE president.

Zika Virus Shows It’s Time to Bring Back DDT by Diana Furchtgott-Roth

The Zika virus is spreading northward through Latin America via mosquitoes, and is possibly linked to birth defects such as microcephaly in infants. Stories and photos of their abnormally small skulls are making headlines. The World Health Organization reports that four million people could be infected by the end of 2016.

On Monday, the WHO is meeting to decide how to address the crisis. The international body should recommend reversing the ban on DDT, in order to kill the mosquitoes that carry Zika and malaria, a disease caused by a protistan parasite for which there is no cure.

Zika is in the news, but it is dwarfed by malaria. About 300 million to 600 million people suffer each year from malaria, and it kills about 1 million annually, 90 percent in sub-Saharan Africa. We have the means to reduce Zika and malaria — and we are not using it.

Under the Global Malaria Eradication Program, which started in 1955, DDT was used to kill the mosquitoes that carried the parasite, and malaria was practically eliminated. Some countries such as Sri Lanka, which started using DDT in the late 1940s, saw profound improvements. Reported cases fell from nearly 3 million a year to just 17 cases in 1963. In Venezuela, cases fell from over 8 million in 1943 to 800 in 1958. India saw a dramatic drop from 75 million cases a year to 75,000 in 1961.

This changed with the publication of Rachel Carson’s 1962 book, Silent Spring, which claimed that DDT was hazardous. After lengthy hearings between August 1971 and March 1972, Judge Edmund Sweeney, the EPA hearing examiner, decided that there was insufficient evidence to ban DDT and that its benefits outweighed any adverse effects. Yet, two months afterwards, then-EPA Administrator William D. Ruckelshaus overruled him and banned DDT, effective December 31, 1972.

Other countries followed, and DDT was banned in 2001 for agriculture by the Stockholm Convention on Persistent Organic Pollutants. This was a big win for the mosquitoes, but a big loss for people who lived in Latin America, Asia, and Africa.

Carson claimed that DDT, because it is fat soluble, accumulated in the fatty tissues of animals and humans as the compound moved through the food chain, causing cancer and other genetic damage. Carson’s concerns and the EPA action halted the program in its tracks, and malaria deaths started to rise again, reaching 600,000 in 1970, 900,000 in 1990 and over 1,000,000 in 1997 — back to pre-DDT levels.

Some continue to say that DDT is harmful, but others say that DDT was banned in vain. There remains no compelling evidence that the chemical has produced any ill public health effects. According to an article in the British medical journal the Lancet by Professor A.G. Smith of Leicester University,

The early toxicological information on DDT was reassuring; it seemed that acute risks to health were small. If the huge amounts of DDT used are taken into account, the safety record for human beings is extremely good. In the 1940s many people were deliberately exposed to high concentrations of DDT through dusting programmes or impregnation of clothes, without any apparent ill effect… In summary, DDT can cause toxicological effects but the effects on human beings at likely exposure are very slight.

DDT is no cure-all for malaria, but nothing is as cheap and effective. A study by the Uniformed Services University of the Health Sciences concluded that spraying huts in Africa with DDT reduces the number of mosquitoes by 97 percent compared with huts sprayed with an alternative pesticide. Those mosquitoes that do enter the huts are less likely to bite.

By forbidding DDT and relying on more expensive, less effective methods of prevention, we are causing immense hardship. Modest environmental losses are a small price to pay for saving thousands of human lives and potentially increasing economic growth in developing nations.

We do not yet have data on the economic effects of the Zika virus, but we know that countries with a high incidence of malaria can suffer a 1.3 percent annual loss of economic growth. According to a Harvard/WHO study, sub-Saharan Africa’s GDP could be $100 billion greater if malaria had been eliminated 35 years ago.
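The compounding behind that 1.3 percent figure is easy to sketch. The code below is a back-of-the-envelope illustration of how a small annual growth drag accumulates, not a reproduction of the Harvard/WHO study's method:

```python
# How a 1.3% annual growth penalty compounds over 35 years.
# The drag figure comes from the text above; the time horizon matches
# the "35 years ago" counterfactual. Everything else is illustrative.

years = 35
drag = 0.013  # annual growth lost to a high incidence of malaria

# Ratio of GDP without the drag to GDP with it, after `years` years
# of growing 1.3 percentage points faster each year:
gap = (1 + drag) ** years

share_lost = 1 - 1 / gap
print(f"After {years} years, GDP would be {gap:.2f}x larger; "
      f"about {share_lost:.0%} of potential output is foregone")
```

Even a seemingly small drag, compounded over a generation, leaves an economy more than a third below its counterfactual path, which is the order of magnitude behind the $100 billion figure cited above.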

Rachel Carson died in 1964, but the legacy of Silent Spring and its recommended ban on DDT live with us today. Millions are suffering from malaria and countless others are contracting the Zika virus as a result of the DDT ban. They were never given the choice of living with DDT or dying without it. The World Health Organization should recognize that DDT has benefits, and encourage its use in combating today’s diseases.

This article first appeared at E21, a project of the Manhattan Institute.

Diana Furchtgott-Roth

Diana Furchtgott-Roth, former chief economist of the U.S. Department of Labor, is director of Economics21 and senior fellow at the Manhattan Institute.

The Rise and Fall of American Growth by Emily Skarbek

Diane Coyle has reviewed Robert Gordon’s new book (out late January), The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War.

Gordon’s central argument will be familiar to readers of his work. In his view, the main technological and productivity-enhancing innovations that drove American growth in the early to mid 20th century — electricity, internal combustion engine, running water, indoor toilets, communications, TV, chemicals, petroleum — could only happen once, have run their course, and the prospects of future growth look uninspiring. For Gordon, it is foreseeable that the rapid progress made over the past 250 years will turn out to be a unique episode in human history.

Coyle zeroes in on the two main mechanisms to which Gordon attributes the slowing of growth. The first is that future innovation will be slower or its effects less important. Coyle finds this argument less convincing.

What I find odd about Gordon’s argument is his insistence that there is a kind of competition between the good old days of ‘great innovations’ and today’s innovations – which are necessarily different.

One issue is the extent to which he ignores all but a limited range of digital innovation; low carbon energy, automated vehicles, new materials such as graphene, gene-based medicine etc. don’t feature.

The book claims more recent innovations are occurring mainly in entertainment, communication and information technologies, and presents these as simply less important (while making great play of the importance of radio, telephone and TV earlier).

While I have yet to read the book, Gordon makes several similar arguments in an NBER working paper. There he gives a few examples of his view of more recent technological innovations as compared to the Great Inventions of the mid-20th century.

More familiar was the rapid development of the web and ecommerce after 1995, a process largely completed by 2005. Many one-time-only conversions occurred, for instance from card catalogues in wooden cabinets to flat screens in the world’s libraries and the replacement of punch-hole paper catalogues with flat-screen electronic ordering systems in the world’s auto dealers and wholesalers.

In other words, the benefits of the computer revolution were one-time boosts, not lasting increases in labor productivity. Gordon then invokes Solow's famous sentence that "we [could] see the computers everywhere except in the productivity statistics." When the effects do show up, Gordon says, they fade out by 2004 and labor productivity flatlines.

Solow’s interpretation (~26 mins into the interview) of where the productivity gains went is different, and more consistent with Coyle’s deeper point. In short, the statistics themselves don’t capture the full gains from innovation:

And when that happened, it happened in an interesting way. It turned out when there were first clear indications, maybe 8 or 10 years later, of improvements in productivity on a national scale that could be traced to computers statistically, it turned out a large part of those gains came not in the use of the computer, but in the production of computers.

Because the cost of an item of computing machinery was falling like a stone while, at the same time, its quality and capacity were improving. And people were buying a lot of computers, so this was not a trivial industry. …

You got big productivity gains in the production of computers and whatnot. But you could also begin to see productivity improvements on a national scale that traced to the use of computers.

Coyle’s central criticism is not just on the interpretation of the data, but on an interesting switch in Gordon’s argument:

Throughout the first two parts of the book, Gordon repeatedly explains why it is not possible to evaluate the impact of inventions through the GDP and price statistics, and therefore through the total factor productivity figures based on them — and then uses the real GDP figures to downplay modern innovation.

Coyle’s understanding of the use and abuse of GDP figures leads her to the fundamental point:

While the very long run of real GDP figures (the “hockey stick of history”) does portray the explosion of living standards under market capitalism, one needs a much richer picture of the qualitative change brought about by innovation and variety.

This must include the social consequences too — and the book touches on these, from the rise of the suburbs to the transformation of the social lives of women.

To understand Coyle’s insights more deeply, her conversation with Russ Roberts offers a fascinating discussion of GDP (no, really!).

In my view, the disagreement comes down to differing views about where Moore’s Law is taking us. Exponentially increasing computational power — with increasing product quality at decreasing prices — has never before been sustained at such a pace.
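To make the scale of that claim concrete, here is a purely illustrative arithmetic sketch (it assumes the popular reading of Moore's Law — a doubling of price/performance roughly every two years — and makes no claim about any specific chip or year):

```python
# Illustrative arithmetic only: assumes price/performance doubles
# roughly every two years (the popular reading of Moore's Law).
def performance_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Cumulative price/performance gain after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

# Over four decades of sustained doubling:
gain = performance_multiple(40)
print(f"{gain:,.0f}x")  # prints "1,048,576x" -- roughly a million-fold gain
```

No earlier general-purpose technology — not electricity, not the internal combustion engine — compounded its price/performance at anything like this rate for this long, which is the crux of the disagreement with Gordon.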

The technological Great Inventions that Gordon sees as having driven the sustained growth of the past were all bursts of innovation followed by a substantial period in which entrepreneurs figured out how to effectively commodify that technology and deliver it to the broader economy and society. What is so interesting about the current pattern of exponential technological progress is that price/performance gains have not slowed, even though only some of these gains have yet begun to be commodified — Uber, 3D printing, the biosynthesis of living tissue, and so on.

There are good reasons to think that in the past we have failed to capture all the gains from innovation in measures of total factor productivity and labor productivity, as Gordon rightly points out. But if this is true, it seems strange to me to look at the current patterns of technological progress and not see the potential for these innovations to lead to sustained growth and increases in human well-being.

This is, of course, conditional on the political economy in which innovation takes place. The second cause for low future growth for Gordon concerns headwinds slowing down whatever innovation-driven growth there might be. Here I look forward to reading the relative weights Gordon assigns to factors such as demography, education, inequality, globalization, energy/environment, and consumer and government debt. In particular, I hope to read Gordon’s own take (and others) on how the political economy environment could change the magnitude or sign of these headwinds.

The review is worth a read in advance of what will likely prove to be an important book in the debate on development and growth.

This post first appeared at Econlog, where Emily is a new guest blogger.

Emily Skarbek

Emily Skarbek is Lecturer in Political Economy at King’s College London and guest blogs on EconLog. Her website is EmilySkarbek.com. Follow her on Twitter @EmilySkarbek.

Government Caused the ‘Great Stagnation’ by Peter J. Boettke

Tyler Cowen caused quite a stir with his e-book, The Great Stagnation. In properly assessing his work it is important to state explicitly what his argument actually is. Median real income has stagnated since 1980, and the reason is that the rate of technological advance has slowed. Moreover, the technological advances that have taken place with such rapidity in recent history have improved well-being, but not in ways that are easily measured in real income statistics.

Critics of Cowen more often than not miss the mark when they focus on the wild improvements in our real income due to quality improvements (e.g., cars that routinely go over 100,000 miles) and lower real prices (e.g., the amount of time required to acquire the inferior version of yesterday’s similar commodities).

Cowen does not deny this. Nor does Cowen deny that millions of people were made better off with the collapse of communism, the relative freeing of the economies in China and India, and the integration into the global economy of the peoples of Africa and Latin America. Readers of The Great Stagnation should be continually reminded that they are reading the author of In Praise of Commercial Culture and Creative Destruction. Cowen is a cultural optimist, a champion of the free trade in ideas, goods, services and all artifacts of mankind. But he is also an economic realist in the age of economic illusion.

What do I mean by the economics of illusion? Government policies since WWII have created an illusion that irresponsible fiscal policy, the manipulation of money and credit, and expansion of the regulation of the economy are consistent with rising standards of living. This was made possible because of the “low hanging” technological fruit that Cowen identifies as being plucked in the 19th and early 20th centuries in the US, and in spite of the policies government pursued.

An accumulated economic surplus was created by the age of innovation, which the age of economic illusion spent down. We are now coming to the end of that accumulated surplus, and thus the full weight of government inefficiencies is starting to be felt throughout the economy. Our politicians promised too much, our government spends too much in an apparent chase after the promises made, and our population has become too accustomed to both government guarantees and government largess.

Adam Smith long ago argued that the power of self-interest expressed in the market was so strong that it could overcome hundreds of impertinent restrictions that government puts in the way. But there is some tipping point at which that ability to overcome will be thwarted, and the power of the market will be overcome by the tyranny of politics. Milton Friedman used that language to talk about the 1970s; we would do well to resurrect that language to talk about today.

Cowen’s work is a subversive track in radical libertarianism because he identifies that government growth (both measured in terms of scale and scope) was possible only because of the rate of technological improvements made in the late 19th and early 20th century.

We realized the gains from trade (Smithian growth), we realized the gains from innovation (Schumpeterian growth), and we fought off (in the West, at least) totalitarian government (Stupidity). As long as Smithian growth and Schumpeterian growth outpace Stupidity, tomorrow’s trough will still be higher than today’s peak. It will appear that we can afford more Stupidity than we actually can, because the power of self-interest expressed through the market offsets its negative consequences.

But if and when Stupidity is allowed to outpace the Smithian gains from trade and the Schumpeterian gains from innovation, then we will first stagnate and then enter a period of economic backwardness — unless we curtail Stupidity, explore new trading opportunities, or discover new and better technologies.

In Cowen’s narrative, the rate of discovery had slowed, all the new trading opportunities had been exploited, and yet government continued to grow both in terms of scale and scope. And when he examines the three sectors in the US economy — government services, education, and health care — he finds little improvement since 1980 in the production and distribution of these services. In fact, there is evidence that performance has gotten worse over time, especially as government’s role in health care and education has expanded.

The Great Stagnation is a condemnation of government growth over the 20th century. It was made possible only by the amazing technological progress of the late 19th and early 20th century. But as the rate of technological innovation slowed, the costs of government growth became more evident. The problem, however, is that so many have gotten used to the economics of illusion that they cannot stand the reality staring them in the face.

This is where we stand in our current debt ceiling debate. Government is too big, too bloated. Washington faces a spending problem, not a revenue problem. But too many within the economy depend on the government transfers to live and to work. Yet the economy is not growing at a rate that can afford the illusion. Where are we to go from here?

Cowen’s work makes us think seriously about that question. How can the economic realist confront the economics of illusion? And Cowen has presented the basic dilemma in a way that the central message of economic realism is not only available for libertarians to see (if they would just look, or listen carefully to his podcast at EconTalk), but for anyone who is willing to read and think critically about our current political and economic situation.

The Great Stagnation signals the end of the economics of illusion and — let’s hope — paves the way for a new age of economic realism.

This post first appeared at Coordination Problem.

Peter J. Boettke

Peter Boettke is a Professor of Economics and Philosophy at George Mason University and director of the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center. He is a member of the FEE Faculty Network.

RELATED ARTICLE: 5 Reasons Why America Is Headed to a Budget Crisis

What Can the Rich Afford that Average Americans Can’t? by Donald J. Boudreaux

Raffi Melkonian asks — as relayed by my colleague Tyler Cowen — “When can median income consumers afford the very best?”

Tyler offers a list of some of the items in the modern, market-oriented world that are as high-quality as such items get and yet are easily affordable to ordinary people. This list includes iPhones, books, and rutabagas. Indeed, this list includes nearly all foods for use in preparing home snacks and meals. I doubt very much that Bill Gates and Larry Ellison munch at home on foods — such as carrots, blueberries, peanuts, and scrambled eggs — that an ordinary American cannot easily afford to enjoy at home.

This list includes also non-prescription pain relievers, most other first-aid medicines and devices such as Band-Aids, and personal-hygiene products such as toothpaste, dental floss, and toilet paper. (I once saw a billionaire take two Bayer aspirin — the identical pain reliever that I use.) This list includes also gasoline and diesel. Probably also contact lenses.

A slightly different list can be drawn up in response to this question: When can median-income consumers afford products that, while not as high-quality as those versions that are bought by the super-rich, are nevertheless virtually indistinguishable — because they are quite close in quality — to the naked eye from those versions bought by the super-rich?

On this list would be most clothing. For example, an ordinary American man can today afford a suit that, while it’s neither tailor-made nor of a fabric as fine as are suits that I suspect are worn by most billionaires, is nevertheless close enough in fit and fabric quality to be indistinguishable by the naked eye from expensive suits worn by billionaires. (I suspect that the same is true for women’s clothing, but I’m less expert on that topic.)

Ditto for shoes, underwear, haircuts, corrective eye-wear, collars for dogs and cats, pet food, household bath towels and “linens,” tableware and cutlery, automobile tires, hand tools, most household furniture, and wristwatches.

(You’d have to get physically very close to someone wearing a Patek Philippe — and you’d have to know what a Patek Philippe is — in order to determine that that person’s wristwatch is one that you, an ordinary American, can’t afford. And you could stare at that Patek Philippe for months without detecting any superiority that it might have over your quartz-powered Timex at keeping time.)

Coffee. Tea. Beer. Wine. (There is available today a large selection of very good wines at affordable prices. These wines almost never rise to the quality of Château Pétrus, d’Yquem, or the best Montrachets, but the differences are often quite small and barely distinguishable save by true connoisseurs.)

Indeed, the more one ponders this question relayed by Tyler, the more one suspects that the shorter list would be one drawn up in response to this question: When can high-income consumers afford what median-income consumers cannot?

Such a list, of course, would be far from empty. It would include private air travel, beachfront homes, regular vacations in Tahiti and Davos, private suites at sports arenas, luxury automobiles, rooms at the Ritz, original Picassos and Warhols. (It would, by the way, include also invitations to White House dinners and private lunches with rent-creating senators, governors, and mayors.)

But I’ll bet that this latter list would be shorter than one made up in response to the question relayed by Tyler combined with one drawn up in response to the question that I pose above in the third paragraph (call this list “the combined list”).

And whether shorter or not, what other germane characteristics might distinguish the items on this last list from the combined list?

A version of this post first appeared at Cafe Hayek.

Donald J. Boudreaux

Donald Boudreaux is a senior fellow with the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center at George Mason University, a Mercatus Center Board Member, a professor of economics and former economics-department chair at George Mason University, and a former FEE president.

Regulators Are Not Heroes by Adam C. Smith & Stewart Dompe

Amazon is suing thousands of “fake” reviewers, who, for a fee, have posted positive reviews for various products. These pseudo reviews violate the spirit — and possibly the functionality — of Amazon’s largely self-governed rating system. Customers rely on reviews to guide their own choices, and a wave of sponsored reviews can mislead them into choosing inferior products.

A similar theme plays out in George Akerlof and Robert Shiller’s newest behavioral economics-cum-self-help book, Phishing for Phools. The authors, both Nobel laureates, claim that an unregulated market leads to massive amounts of manipulation and deception. Just how much remains unspecified, but the general thrust of the argument is that regulatory heroes are needed to rein in villainous dealers.

Heroic Regulators?

It is no surprise then that the authors favor heroic efforts of an older progressive sort, such as the works of Alice Lakey or her modern-day counterpart Elizabeth Warren. Their work, respectively, led to the establishment of the Food and Drug Administration and the Consumer Financial Protection Bureau. These progressives are seen as heroic for taking “action not selfishly but for the public good.” The trouble with such heroes, however, is that they invariably focus not on educating consumers so that they may make better choices but on corralling the cat herd of bureaucrats and politicians into ever-expanding spheres of regulation.

While it is true that consumer regulation can provide focal points that help buyers and sellers interact — in fact, Amazon appealed to just that in its lawsuit — this truth nevertheless misses the pivotal point (and an awkward one for Akerlof and Shiller) that it is Amazon that is working to resolve the problem, not government regulators.

Make no mistake. Akerlof’s classic paper on the quality of goods in a world of imperfect information clearly outlines a problem that markets must address, but it is a problem for both consumers and the market platforms on which they participate. Those platforms have a natural incentive to promote the information consumers need in order to make more informed decisions. The incentives faced by regulators are less well aligned with consumers’ interests. (But advocates of regulation rarely ask what incentives drive government regulators.)

There is another aspect of Akerlof’s model that is telling in this regard: in equilibrium, the so-called “lemons market” should unravel as more and more consumers become frustrated with ever-decreasing levels of quality. Thus, the market platform should topple over. The trouble with this theoretical outcome is that it again fails to account for the empirical observation that it is markets that are solving market problems.
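The unraveling the model predicts can be sketched in a few lines. The assumptions here are purely illustrative (not Akerlof's own parameterization): seller qualities uniform on [0, 1], a seller of quality q willing to sell only at a price of at least q, and a buyer who offers 1.5 times the expected quality of the sellers still in the market. Each round, the best remaining sellers withdraw and the offer price collapses toward zero:

```python
# Toy sketch of the "lemons" unraveling. Illustrative assumptions:
# qualities uniform on [0, 1]; a seller of quality q sells only if price >= q;
# the buyer offers 1.5 * (expected quality of sellers still willing to sell).
def lemons_price_path(rounds: int = 10) -> list[float]:
    prices = []
    p = 1.0  # initial offer: what the buyer would pay for top quality
    for _ in range(rounds):
        # Only sellers with q <= p remain; uniform on [0, p] => E[q] = p / 2.
        p = 1.5 * (p / 2)  # new offer shrinks to 0.75 * previous price
        prices.append(p)
    return prices

path = lemons_price_path()
# The offer falls 25% per round and converges to zero: the market
# "unravels" even though mutually beneficial trades exist at every quality.
```

This is exactly the theoretical outcome the paragraph above describes — and exactly what platforms like Amazon spend heavily to prevent.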

Akerlof’s co-recipient of the 2001 Nobel Prize, Michael Spence, would have no trouble with this observation. Spence noted that it is far more interesting to compare the outcomes in the market to what is possible in a world of incomplete information, not to what is found where no imperfection exists by assumption. Spence explained in his Nobel address that when facing a world of imperfect information, the asymmetry between buyer and seller “cannot be simply removed by a wave of the pen.”

Compared to What?

Even when we acknowledge that individuals may be limited in their analytical and decision-making capabilities, we must ask ourselves, “Compared to what?” As noted elsewhere in these pages, every flaw in consumers is worse in voters. Furthermore, the immediate call for greater government regulation ignores the ongoing knowledge problem: acquiring information is limited by the abilities of normal people (after all, we can’t all be heroes). Knowing which transactions to avoid is valuable information, but that knowledge must first be discovered to be shared. If this information is not readily attainable, then it is unclear how regulators will know what market processes to target, much less how to improve on them.

And if the information does exist, then there is an opportunity for entrepreneurial action to gather this information and sell it to consumers. Put another way, market failures that cause individuals to make poor decisions are themselves profit opportunities for entrepreneurs to help people make better decisions.

In a world of uncertainty, ensuring quality can be a powerful competitive advantage. Amazon wants you, the customer, to use its search and recommendation system to buy new products, products that you cannot physically touch and inspect. The review system is one method of overcoming this informational asymmetry. When the integrity of the review system is challenged, Amazon is faced with the prospect of a lower volume of transactions and therefore lower profits.

Private Heroes

This is why Amazon is acting to curtail its rogue members. Retailers can only justify high prices when they can guarantee quality. Amazon’s feedback system constitutes a significant informational subsidy to its users, and the company is willing to create this information (or have it created by users) because it leads to a higher volume of trade and the accompanying consumer benefits that Amazon brings to book readers worldwide.

What Akerlof and Shiller miss is that creating and maintaining a viable platform for trade opportunities is enormously expensive. Having customers exit the door to never return — or perhaps write negative Yelp reviews — causes instability to the market that can be fatal if left unattended.

Rather than focusing on the failure of consumers, the original sin of our humanity, we should instead notice how information entrepreneurs are enabling us to make better choices. The information revolution led by these innovators has changed the world, with the costs of distribution lower than ever. These may not be the welfarist heroes of Akerlof and Shiller’s fantasy world but market troubleshooters of the one we actually occupy.

Public-spirited regulators may be the heroes we want, but they are not the heroes we need.

Adam C. Smith

Adam C. Smith is an assistant professor of economics and director of the Center for Free Market Studies at Johnson & Wales University. He is also a visiting scholar with the Regulatory Studies Center at George Washington University and coauthor of the forthcoming Bootleggers and Baptists: How Economic Forces and Moral Persuasion Interact to Shape Regulatory Politics.

Stewart Dompe

Stewart Dompe is an instructor of economics at Johnson & Wales University. He has published articles in Econ Journal Watch and is a contributor to Homer Economicus: Using The Simpsons to Teach Economics.

Everyone Is Talking about Bitcoin by Jeffrey A. Tucker

I’m getting a flurry of messages: how do I buy Bitcoin? What’s the best article explaining this stuff? How to answer the critics? (Might try here, here, here, and here.)

Markets can be unpredictable. But the way people talk about markets is all too predictable.

When financial assets go up in price, they become the topic of conversation. When they go way up in price, people feel an itch to buy. When they soar to the moon, people jump in the markets — and ride the price all the way back down.

Then while the assets are out of the news, they disappear from the business pages and only the savviest investors buy. Then they ride the wave up.

This is why smart money wins and dumb money loses.

Bitcoin Bubbles and Busts

It’s been this way for seven years with Bitcoin. When the dollar exchange rate is falling, people get bored or even disgusted. When it is rising, people get interested and excited. The challenge of Bitcoin is to see through the waves of hysteria and despair to take a longer view.

In the end, Bitcoin is not really about the dollar exchange rate. It is about its use as a technology. If Bitcoin were only worth a fraction of a penny, the concept would already be proven. It demonstrates that money can be a digital product, created not by government or central banks but rather through the same kind of ingenuity that has already transformed the world since the advent of the digital age.

When the Bitcoin white paper came out in October 2008, only a few were interested. Five years would pass before discussion of the idea even approached the mainstream. Now we see the world’s largest and most heavily capitalized banks, payment processing companies, and venture capitalists working to incorporate Bitcoin’s distributed ledger into their operations.

In between then and now, we’ve seen wild swings of opinion among the chattering classes. When Bitcoin hit $30 in February 2013, people were screaming that it was a Ponzi-like bubble destined to collapse. I’ve yet to see a single mea culpa post from any of these radical skeptics. It’s interesting how the incessantly wrong slink away, making as little noise as possible.

For the last year, the exchange rate hovered around $250, but because this was down from its high, people lost interest. What is considered low and what is considered high are based not on fundamentals but on the direction of change.

What Is the Right BTC Price?

The recent history of cryptocurrency should have taught this lesson: No one knows the right exchange rate for Bitcoin. That is something to be discovered in the course of market trading. There is no final answer. The progress of technology and the shaping of economic value know no end.

On its seventh birthday, Bitcoin broke from its hiatus and spiked to over $350, on its way to $400. And so, of course, it is back in the news. Everyone wants to know the source of the latest price run-up. There is speculation that it is being driven by demand from China, where bad economic news keeps rolling in. There has also been a new wave of funding for Bitcoin enterprises, plus an awesome cover story in The Economist.

Whatever the reason, this much is increasingly clear: Bitcoin is perhaps the most promising innovation of our lifetimes, one that points to a future of commodified, immutable, and universal information exchange. It could not only revolutionize contracting and titling. It could become a global currency that operates outside the nation state and banking structures as we’ve known them for 500 years. It could break the model of money monopolization that has been in operation for thousands of years.

Technology in Fits and Starts

Those of us in the Bitcoin space, aware of the sheer awesomeness of the technology, can grow impatient, waiting for history to catch up to technical reality. We are daily reminded that technology does not descend on the world on a cloud in its perfected form, ready for use by the consuming public. It arrives in fits and starts, is subjected to trials and improvement, and its applications are tested against real-world conditions. It passes from hand to hand in succession, with unpredictable winners and losers.

Successful technology does not become socially useful in the laboratory. Market experience combined with entrepreneurial risk are the means by which ideas come to make a difference in the world at large.

Bitcoin was not created in the monetary labs of the Federal Reserve or banks or universities. It emerged from a world of cypherpunks posting on private email lists — people not even using their own names.

In that sense, Bitcoin had every disadvantage: No funding, no status, no official endorsements, no big-name boosters. It has faced an ongoing flogging by bigshots. It’s been regulated and suppressed by governments. It’s been hammered constantly by scammers, laughed at by experts, and denounced by moralists for seven straight years.

And yet, even given all of this, it has persisted solely on its own merits. It is the ultimate “antifragile” technology, growing stronger in the face of every challenge.

What will be the main source of Bitcoin’s breakout into the mainstream? Commentary trends suggest it will be international remittances. It is incredible that moving money across national borders is as difficult and expensive as it is. With Bitcoin, you remove almost all time delays and transaction costs. So it is not surprising that this is a huge potential growth area for Bitcoin.
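The economics of that growth area are easy to see in a back-of-the-envelope comparison. The fee rates below are illustrative assumptions for the sketch, not quotes from any actual remittance provider or Bitcoin service:

```python
# Illustrative comparison only; both fee rates are assumptions,
# not quotes from any real remittance provider or Bitcoin service.
def remittance_received(amount: float, fee_rate: float) -> float:
    """Amount the recipient actually gets after the transfer fee."""
    return amount * (1 - fee_rate)

traditional = remittance_received(200.00, 0.075)  # assume ~7.5% via wire/agent
bitcoin     = remittance_received(200.00, 0.01)   # assume ~1% all-in via Bitcoin
print(f"traditional: ${traditional:.2f}, bitcoin: ${bitcoin:.2f}")
# prints "traditional: $185.00, bitcoin: $198.00"
```

For a worker sending money home every month, a gap of that size compounds into weeks of wages per year — which is why remittances keep coming up as the likeliest breakout use.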

The Economist takes a different direction. It speculates that Bitcoin technology will be mostly useful as a record-keeping device. It is “a machine for creating trust.”

One idea, for example, is to make cheap, tamper-proof public databases — land registries, say (Honduras and Greece are interested); or registers of the ownership of luxury goods or works of art. Documents can be notarised by embedding information about them into a public blockchain — and you will no longer need a notary to vouch for them.

Financial-services firms are contemplating using blockchains as a record of who owns what instead of having a series of internal ledgers. A trusted private ledger removes the need for reconciling each transaction with a counterparty; it is fast and it minimises errors.
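The notarisation idea the Economist describes needs nothing more exotic than a cryptographic hash: publish a document's digest in a blockchain transaction, and anyone can later prove the document existed, unaltered, at that time. A minimal sketch (the step of anchoring the digest in an actual blockchain transaction is omitted):

```python
import hashlib

def document_digest(contents: bytes) -> str:
    """SHA-256 fingerprint of a document: the short string one would
    embed in a public blockchain transaction to 'notarise' the document."""
    return hashlib.sha256(contents).hexdigest()

# A hypothetical deed, for illustration:
deed = b"Lot 12, Block 4 is owned by A. Example."
digest = document_digest(deed)  # 64 hex characters; unique in practice

# Verification later: re-hash the document and compare with the on-chain digest.
assert document_digest(deed) == digest
assert document_digest(deed + b" (altered)") != digest  # any change is detected
```

The blockchain contributes only the timestamp and immutability; the hash does the vouching that a human notary once did.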

We Need Bitcoin 

No one knows for sure. What we do know is that we desperately need this as a tool to disintermediate the world, liberating us from the governments that have come to stand between individuals and the realization of their dreams.

In 1974, F.A. Hayek dreamed of a global currency that operated outside governments and central banks. If governments aren’t going to reform money, markets would need to step up and do it themselves. Bitcoin is the most successful experiment in this direction we’ve yet seen.

And that is true whether or not your friends and neighbors are talking about it.

Jeffrey A. Tucker

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup Liberty.me, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World. Follow him on Twitter and like him on Facebook.