At Fukushima, Fear Was More Deadly than Radiation by Daniel Bier

The precautionary principle (“better safe than sorry”) is a maxim embraced by government planners and regulators the world over, from GMOs to pharmaceuticals to the environment. The argument is that it’s better to act on fears preemptively than it is to “do nothing” and wait until there’s a problem.

But often, overreaction can be more costly than the original problem. Precaution becomes panic, and moderating risk devolves into the blind urge to “just do something.”

An article by George Johnson in the New York Times explains the deadly consequences of the Japanese government’s panicky stampede after the 2011 Fukushima nuclear accident.

No one has been killed or sickened by the radiation — a point confirmed last month by the International Atomic Energy Agency. Even among Fukushima workers, the number of additional cancer cases in coming years is expected to be so low as to be undetectable, a blip impossible to discern against the statistical background noise.

But about 1,600 people died from the stress of the evacuation — one that some scientists believe was not justified by the relatively moderate radiation levels at the Japanese nuclear plant. …

“The government basically panicked,” said Dr. Mohan Doss, a medical physicist who spoke at the Tokyo meeting, when I called him at his office at Fox Chase Cancer Center in Philadelphia. “When you evacuate a hospital intensive care unit, you cannot take patients to a high school and expect them to survive.”

Among other victims were residents of nursing homes. And there were the suicides. “It was the fear of radiation that ended up killing people,” he said.

Doss estimates that in the hot spots, with the highest levels of radioactivity, residents would have gotten 70 millisieverts of radiation over four years (a dose equal to one full body scan a year).

But those hot spots were anomalies.

By Dr. Doss’s calculations, most residents would have received much less, about 4 millisieverts a year. The average annual exposure from the natural background radiation of the earth is 2.4 millisieverts. …

A full sievert of radiation is believed to eventually cause fatal cancers in about 5 percent of the people exposed. Under the linear no-threshold model, a millisievert would impose one one-thousandth of the risk: 0.005 percent, or five deadly cancers in a population of 100,000.

About twice that many people were evacuated from a 20-kilometer area near the Fukushima reactors. By avoiding what would have been an average cumulative exposure of 16 millisieverts, the number of cancer deaths prevented was perhaps 160, or 10 percent of the total who died in the evacuation itself.
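The arithmetic here is easy to check. A minimal back-of-the-envelope sketch (my own illustration, using only the figures quoted above):

```python
# Linear no-threshold (LNT) arithmetic from the passage above:
# ~5% fatal cancer risk per sievert, an average avoided dose of
# 16 millisieverts, and roughly 200,000 evacuees.
RISK_PER_SIEVERT = 0.05
avg_dose_sv = 16 / 1000        # 16 millisieverts, expressed in sieverts
evacuees = 200_000             # "about twice" the 100,000 reference population

expected_cancer_deaths = RISK_PER_SIEVERT * avg_dose_sv * evacuees
print(round(expected_cancer_deaths))  # prints 160
```

Even under the linear model's pessimistic assumptions, the roughly 160 hypothetical cancer deaths avoided amount to about a tenth of the estimated 1,600 deaths caused by the evacuation itself.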

That would be bad enough, but it’s not clear that the “linear model” of radiation exposure is even accurate. The model assumes there is no safe level of exposure and that risk scales in exact proportion to dose, so even a tiny dose carries a proportionally tiny, but never zero, risk.

But Doss and others think low levels of radiation are not proportionally as bad for you as high levels, meaning that half the radiation is less than half as dangerous. In other words, a millisievert is not a thousandth as deadly as a full sievert — it’s much less than that, and it might not be bad for you at all.

“Better safe than sorry” sounds reasonable, but it can’t answer the question better for whom? When we’re talking about government policy, we have to remember that politicians and officials are not robots, blindly calculating the public good: they are people, with their own interests and incentives.

When there is a crisis (real or imagined), officials need to appear to do something. To keep us safe. To protect us from scary things, like radiation, toxins, and terrorists. That incentive is not identical to (and often not even compatible with) a rational cost-benefit analysis.

The evacuation of Fukushima was better for the officials in charge — they were doing something — but it wasn’t safer for the people who died after being forcibly displaced.

There’s no perfect solution here. But it would have been much better to give people the freedom to move at their own pace and let them make informed choices about the risks, instead of a rushed, terrifying evacuation from an inflated threat.

Johnson concludes, “We’re bad at balancing risks, we humans, and we live in a world of continual uncertainty. Trying to avoid the horrors we imagine, we risk creating ones that are real.”

I might add that when we mix them with politics, we risk inflicting them on everyone.

Daniel Bier

Daniel Bier is the editor of Anything Peaceful. He writes on issues relating to science, civil liberties, and economic freedom.

Fossil Fuels Are the Lifeblood of Civilization by Aaron Tao

I never thought I would encounter a book titled The Moral Case for Fossil Fuels. After all, in this day and age, it is the politically correct and fashionable trend for activists, media, politicians, and even the Pope to call upon each and every one of us to break our “addiction” to oil.

For as long as I can remember, my science classes from grade school through college carried some variation of an environmental message warning of doom for future generations and our planet unless we embrace “sustainability” and drastically change our patterns of production and consumption. Unless we curbed our usage of resource X and reduced humanity’s “impact” on the Earth, we were told, apocalyptic scenarios from overpopulation to “peak oil” were bound to become reality.

But it’s now 2015, and the “population bomb” did not go off. And by every indication, we are nowhere close to running out of petroleum anytime soon (largely thanks to the shale revolution). Perhaps most astoundingly, even as human populations have grown dramatically and increased their use of fossil fuels, the world has become a much better place. This is the message that Alex Epstein emphasizes in his well-written, persuasively argued book.

What distinguishes Epstein’s work from so many other debates on climate change and energy policy is that his thesis gets to the core of the discussion: It is a moral argument. In making his case, Epstein presents a concrete and specific argument using human life and wellbeing as his moral standard of value:

What will promote human life? What will promote human flourishing — realizing the full potential of life? Colloquially, how do we maximize the years in our life and the life in our years?

Using this standard, Epstein clearly articulates the terms of the debate and lays out the costs and benefits of using fossil fuels versus alternatives.

He reminds us that fossil fuels are still the only source of abundant, cheap, and reliable energy (solar, wind, biofuels, and other “renewables” all fail in one or more of these categories), and that fossil fuel use is essential to industrial civilization and, in fact, made it all possible from the beginning.

Industrialization is what created the wealth and high living standards of the West. Today, China and India have experienced rapid economic growth and reduction of absolute poverty thanks to industrialization and the move toward freer markets. In short, billions managed to escape lives of misery imposed by Malthusian privation.

Furthermore, the countries that industrialized through increased use of fossil fuels saw not only a surge in economic prosperity, but also benefits such as increased life expectancy, cleaner air, cleaner water, decreased malnutrition, fewer deaths from infectious disease, and fewer climate-related deaths.

The dramatic improvement in both environment and climate thanks to increased fossil fuel use is counterintuitive for many, but Epstein marshals an impressive array of data from respected institutional sources to highlight these trends.


Of course, detractors will almost certainly retort that the “greatest threat” of our generation, global climate change, will undo all this progress, and that the continued use of fossil fuels will hasten the inevitable day of judgment.

Epstein does not dispute the well-documented greenhouse effect of carbon dioxide emissions. However, he points out that the effect is logarithmic: each additional increment of carbon dioxide produces less warming than the one before.

Furthermore, Epstein raises crucial questions regarding the reliability of computer models in predicting future climate. For all intents and purposes, these speculative models have been failures.

Drawing upon his philosophy background, Epstein devotes a number of pages to explaining how people should find the truth by “treating experts not as authority figures to be obeyed but as advisors to one’s own independent thought process and decision making.”

(He makes only a passing reference to the field of nutrition, noting that it “can literally be deadly for a scientist to spread a hypothesis as fact” after government adopts it as gospel with disastrous social consequences. Knowing this story in detail, I’m glad he did.)

In the spirit of F.A. Hayek, Epstein reminds us that honest and responsible experts are straight with the public about what they know and how they know it, and that they freely acknowledge uncertainties and limits of their knowledge, especially in a field as complex as climate science. Unfortunately, history has shown that experts have too often understated the benefits of fossil fuels while greatly overestimating their costs.

One particular positive impact of increased carbon dioxide (more plant food!) in the Earth’s atmosphere is the explosive growth in vegetation all over the world. Literally, the planet has grown greener thanks to fossil fuel use. Yet, this beneficial “fertilizer effect” is virtually unmentioned in the public debate on climate change.

Unfortunately, when it comes to evaluating costs and benefits of a technology, “those who predict the most risk get the most attention from the media and from politicians who want to ‘do something.’”

Despite a history of failed apocalyptic predictions, environmentalist Bill McKibben, ecologist Paul Ehrlich, and other “experts” opposed to fossil fuels continue to endorse heavy limits if not outright bans on their use, as well as draconian population control measures to curb humans’ carbon footprint.

Going back to the moral argument, Epstein notes that the modern environmental movement argues on a completely different plane from his standard of human life. Instead of evaluating whether a given policy or technology benefits humanity, “green” activists tend to adhere to the idea that “non-impact on nature is the standard of value.”

Thus, it’s really no surprise that a strong anti-capitalist, anti-industrial, and misanthropic prejudice pervades the modern environmental movement.

Epstein decries the romanticizing of Mother Nature as a benevolent entity that looks after her children with their best interests at heart:

The natural environment is not naturally a healthy, safe place; that’s why human beings historically had a life expectancy of thirty. Absent human action, our natural environment threatens us with organisms eager to kill us and natural forces, including natural climate change, that can easily overwhelm us.

It is only thanks to cheap, plentiful, reliable energy that we live in an environment where the water we drink and the food we eat will not make us sick and where we can cope with the often hostile climate of Mother Nature.

Energy is what we need to build sturdy homes, to purify water, to produce huge amounts of fresh food, to generate heat and air-conditioning, to irrigate deserts, to dry malaria-infested swamps, to build hospitals, and to manufacture pharmaceuticals, among many other things.

That source of energy, of course, is fossil fuels. Only by transforming nature did human beings manage to create a safer environment, boost productivity, and raise living standards.

The ability of humans to innovate and adapt to their ever-changing environments was made possible by fossil fuels, and our continuing progress as a civilization — including our lifting of billions out of dire poverty across the world — requires abundant, cheap, and reliable fossil fuel energy.

The Moral Case for Fossil Fuels is a much-needed counterpoint to the grossly one-sided ideological environmental crusade that too many people consider respectable and “mainstream.” I can’t thank Epstein enough for adding this valuable contribution to the public dialogue.

We can only hope that more students, journalists, and government officials read this book, understand the big picture, and boldly take a stand for human civilization against the misanthropic forces that would all but tear it apart.

This review first appeared at The Beacon.

Aaron Tao

Aaron Tao is the Marketing Coordinator and Assistant Editor of The Beacon at the Independent Institute. Follow him on Twitter here.

License to Kill: Wind and Solar Decimate Birds and Bats

According to a study in the Wildlife Society Bulletin, every year 573,000 birds (including 83,000 raptors) and 888,000 bats are killed by wind turbines — 30 percent higher than the federal government estimated in 2009, due mainly to increasing wind power capacity across the nation.[i] This is likely an underestimate because these estimates were based on 51,630 megawatts of installed wind capacity in the United States in 2012 and wind capacity has grown since then to 65,879 megawatts. And, at one solar power plant in California, an estimated 3,500 birds died in just the plant’s first year of operation.[ii]
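To illustrate why the study’s figures are likely an underestimate, here is a naive linear scaling of the 2012-based mortality estimates to the 2014 installed capacity. The proportional-scaling assumption is mine, not the study’s:

```python
# Mortality estimates were based on 51,630 MW of installed wind capacity
# (2012); capacity had grown to 65,879 MW by the end of 2014.
birds_2012, bats_2012 = 573_000, 888_000
capacity_2012_mw = 51_630
capacity_2014_mw = 65_879

# Assume (naively) that deaths scale linearly with installed capacity.
scale = capacity_2014_mw / capacity_2012_mw
print(f"capacity growth factor: {scale:.2f}")              # ~1.28x
print(f"implied bird deaths: ~{birds_2012 * scale:,.0f}")  # ~731,000 per year
print(f"implied bat deaths:  ~{bats_2012 * scale:,.0f}")   # ~1.13 million per year
```

Actual per-megawatt mortality varies with siting and turbine design, so this is only a rough indication of the direction and size of the error.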

Over the past five years, about 2.9 million birds were killed by wind turbines. That compares to the roughly 800,000 birds that a Mother Jones blog estimated were killed by the BP oil spill of April 2010, five years ago,[iii] despite not all of them showing visible signs of oil.[1] Nevertheless, BP was fined $100 million for killing and harming migratory birds in that spill. In other words, over the past five years the nation’s wind turbines killed more than three times as many birds as the BP oil spill did. And wind turbines routinely kill federally protected birds and eagles.

Since the study estimating bird and bat deaths was completed based on 2012 wind capacity data, U.S. companies have installed more wind power due to federal and state incentives such as the Production Tax Credit that provides 2.3 cents per kilowatt hour of wind generated power over the first ten years of operation. Since 2012, the United States added over 14,000 megawatts of additional wind capacity with total wind capacity at 65,879 megawatts as of the end of 2014 — 16 times higher than wind capacity in 2001.[iv]

The Ivanpah Solar Power Plant

The Ivanpah solar power plant is a 377 megawatt solar facility located in the Mojave Desert in California and is owned by Google, BrightSource Energy, and NRG Energy. The facility has 350,000 heliostat mirrors that reflect heat toward central towers and scorch hundreds of birds in midair—turning birds into “streamers.” Ivanpah is the largest power tower project in the world and it has received a $1.6 billion loan guarantee from the Department of Energy.

The mirrors at Ivanpah span across an area four times the size of New York’s Central Park and focus sunlight onto receivers atop three 45-story power towers, boiling a liquid that turns turbines to create electricity. Fish and Wildlife Service officials warned that Ivanpah may act as a “mega-trap,” where insects attract small birds that are killed or incapacitated by the solar flux. Those birds attract larger predators thereby creating a food chain vulnerable to injury and death.[v]

The facility is estimated to have killed 83 different species of birds. The most commonly killed birds were mourning doves (14 percent of fatalities), followed by yellow-rumped warblers, tree swallows, black-throated sparrows and yellow warblers. Of the birds that died from known causes, about 47 percent died from being toasted by the heat of the solar flux. Just over half of the known deaths were attributed to collisions.

Ivanpah is testing ways to reduce bird deaths, including with software to reposition the heliostats to reduce the level of elevated flux and minimize collisions; installation of light-emitting diodes that are not attractive to insects and help reduce the prey base for birds; anti-perching devices; and the use of avian deterrents like foul smells and the sounds of predators.

Fines for Killing Birds

Besides BP being fined $100 million for killing and harming migratory birds during the 2010 Gulf oil spill, in 2009, Exxon Mobil paid $600,000 for killing 85 birds in five states, and PacifiCorp, which operates coal plants, paid more than $10.5 million for electrocuting 232 eagles that landed on power lines at its substations. The first fines against wind farms came in November 2013, when Duke Energy paid a $1 million fine for killing 14 eagles and 149 other birds at two wind farms in Wyoming from 2009 to 2013.[vi] To date, no solar facilities have been fined. The fines stem from protections in the Migratory Bird Treaty Act and the Bald and Golden Eagle Protection Act. The death of an eagle or other protected bird is a violation of federal law unless a company has a federal permit.[vii]

The Obama Administration on December 9, 2013, finalized a regulation that allows wind energy companies and others to obtain 30-year permits to kill eagles without prosecution by the federal government. The American Bird Conservancy filed suit in federal court against the Department of the Interior, charging it with multiple violations of federal law.[viii] Nonetheless, the Shiloh IV Wind Project in California, for example, received a permit from the U.S. Fish and Wildlife Service allowing it to kill eagles, hawks, peregrine falcons, owls, and songbirds without being subject to the normal prohibitions of the federal Bald and Golden Eagle Protection Act and the Migratory Bird Treaty Act.[ix]

Other Bird and Mammal Deaths

According to a 2014 study by federal scientists in the journal The Condor: Ornithological Applications, building collisions are estimated to kill 365 million to 988 million birds annually in the United States. And, according to a 2013 report from scientists at the Smithsonian Conservation Biology Institute and FWS, stray and outdoor pet cats annually kill a median of 2.4 billion birds and 12.3 billion mammals, mostly native mammals like shrews, chipmunks, and voles. But these deaths do not excuse the wind and solar industries’ killing of birds. Unless, of course, BP and ExxonMobil should be excused as well, instead of paying hundreds of thousands of dollars in fines.


Despite bird and bat deaths at wind and solar farms, few have been fined for violating the law while oil and electric generating companies have paid heavily for such violations. It will be interesting to see if this will change as the wind and solar industries grow.

[1] In 2011, the Fish and Wildlife Service reported only 6,147 birds killed. See

[i] Daily Caller, Wind Turbines Kill More Birds Than the BP Oil Spill, April 20, 2015,

[ii] Greenwire, 3,500 birds died at Ivanpah ‘power towers’ in first year, April 24, 2015,

[iii] Mother Jones, The BP Oil Spill Happened 5 Years Ago Today. We are Still Paying the Price., April 20, 2015,

[iv] American Wind Energy Association,

[v] National Fish and Wildlife Forensics Laboratory, Avian Mortality at Solar Energy Facilities in Southern California,

[vi] Forbes, Republicans Develop an Interest in bird deaths, March 29, 2014,

[vii] The Christian Science Monitor, Eagle Deaths: Unprecedented $1 million fine for Wyoming wind farms, November 23, 2013,

[viii] American Bird Conservancy, American Bird Conservancy Sues Feds Over 30-Year Eagle Kill Rule,

[ix] Master Resource, Wind Power Slaughter, July 16, 2014,

Pope Francis’s Crusade Against Fossil Fuels by Alex Epstein

This week, Pope Francis is meeting with the President, addressing a joint session of Congress, and speaking to a crowd of over a million in Philadelphia, sharing his views in the name of concern for humanity, particularly the poor.

But the Pope is calling not for energy abundance, but energy poverty. He seems to be unable to see how much better fossil fuels have made our world. Why?

I explain in my latest Forbes column:

“The earth, our home, is beginning to look more and more like an immense pile of filth.” This was Pope Francis’s summary of his Encyclical earlier this year on the alleged destruction of our planet. The leading culprit, in his view, is humanity’s use of fossil fuels, which he believes are immoral and should largely be illegal.

“Like no pope before him,” according to the New York Times, “Francis is using the grand stage of his trip to the United States to demonstrate that the church exists to serve the poor and marginalized.”

But if he wants to help humanity, especially the poorest human beings, Pope Francis needs to recognize that fossil fuels make Earth not a “pile of filth,” but a far better, healthier, cleaner, and more bountiful place to live.

Read More

Power Hour: Dr. Patrick Moore on a Rational, Pro-Human Approach to Ecology

On the latest episode of Power Hour I talk to Patrick Moore, ecologist and a co-founder (and defector) of Greenpeace about how the science of ecology has been corrupted over the decades, and how it can be fixed.

Download Episode 112 with Dr. Patrick Moore

As always, if you’d like to suggest a new guest for Power Hour, or have me appear on your show, you can send me an email at, or just reply to this one.

Speaking in Pennsylvania, Massachusetts, Kansas and More

I’ve spoken at a lot of events to a wide variety of audiences this month, from the Pennsylvania Coal Alliance to the all-girls Wellesley College in Boston.

On the right is my talk at the Eastern Kansas Oil and Gas Association last week. One highlight: I was introduced by my Duke classmate David Powell, whom I met back in 2000.


Alex Epstein, President and Founder of the Center for Industrial Progress, is the author of The Moral Case for Fossil Fuels and an expert on energy and industrial policy. Called “most original thinker of the year” by political commentator John McLaughlin, he champions the use of fossil fuels like coal, oil, and natural gas and has changed the way thousands of people think about energy. He has risen to prominence as the nation’s leading free-market energy debater, promoting a philosophy that is “anti-pollution but pro-development.” He challenges many popularly held ideas about energy, industry, and the environment, including the big-picture benefits (and costs) of fossil fuels and nuclear power. He draws on cutting-edge research and original insights to offer an alternate perspective on the energy debate and shares eye-opening thoughts on how fossil fuels and technology will improve the lives of people – safely, cleanly, and effectively – for years to come.


Pope Francis Endorses Obama’s Climate Agenda, Which Critics Say Will Be ‘Devastating’ to the Poor

As Pope Francis Visits the US, Here Are 5 Facts About American Catholics

VIDEO: Wind Turbines Can Harm Humans

Carmen Krogh at Ideacity Conference in Toronto on wind turbines.

To learn more please read “Wind Turbines can Harm Humans: A Case Study” by Carmen M.E. Krogh, Roy D. Jeffry, Jeff Aramini and Brett Horner.


UK PM Cameron: “Get Rid of the Green Crap”

Will President Obama’s Regulations Move U.S. Industries Offshore?

The following analysis is by the Institute for Energy Research:

When energy prices in the United States were high, the nation saw an exodus of companies moving offshore to obtain lower operating costs. Those industries have been slowly moving back, as hydraulic fracturing has dramatically lowered the cost of natural gas in the United States and allowed natural gas generation to compete with coal in the electricity sector. Unfortunately, President Obama’s regulations are going to make energy much more expensive in the United States, as his so-called “Clean Power Plan” and his methane rule get implemented.

The so-called Clean Power Plan is expected to decrease carbon dioxide emissions in the generating sector by 32 percent from 2005 levels by 2030. To do this, massive amounts of coal-fired generating capacity will be shuttered, and wind and solar power will be built in their stead—technologies that cost 2 to 4 times more than the coal capacity being shuttered. According to the Energy Information Administration (EIA), residential electricity prices are expected to be 16 percent higher in real terms than today due to this proposed regulation and others imposed on the generating sector by EPA.

The methane rule will force oil and natural gas producers to reduce their methane emissions by 40 to 45 percent from 2012 levels by 2025.[i] This is a daunting task, considering the oil and gas industry has already reduced methane emissions from natural gas production by 38 percent between 2005 and 2013—despite increasing gas production by 35 percent over that time period.
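As a rough illustration (my own arithmetic, not a figure from the rule or from the industry), the two quoted percentages imply a steep drop in methane emitted per unit of gas produced:

```python
# Emissions fell 38% (2005-2013) while production rose 35% over the same period.
emissions_ratio = 1 - 0.38    # 2013 emissions relative to 2005
production_ratio = 1 + 0.35   # 2013 production relative to 2005

intensity_ratio = emissions_ratio / production_ratio
print(f"methane per unit of gas produced: {intensity_ratio:.2f}x the 2005 level")
# i.e., emission intensity fell by roughly 54% before the rule took effect
```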

These regulations and others promulgated by President Obama’s EPA will increase the cost of energy to Americans. President Obama is finalizing these regulations so that he can tell the world how he intends to reduce U.S. greenhouse gas emissions at the United Nations Climate Conference in Paris in December. However, the reductions the United States makes will have an insignificant effect on any realized temperature change, and an equivalent amount of emissions will be released by China in a matter of days—for essentially no net gain globally.

Manufacturing Industry Exodus

In 2005, when natural gas prices were almost 50 percent higher than they are today, there was a general exodus of companies leaving the United States and moving their manufacturing operations to Asia to reduce costs. However, since then, hydraulic fracturing has enabled the extraction of natural gas from shale formations, lowering the price of natural gas and increasing its production substantially. An accounting firm, PricewaterhouseCoopers, believes that these lower U.S. energy prices could result in one million more manufacturing jobs as firms build new factories here. Companies such as Dow Chemical and Vallourec, a French steel-tubes firm, have announced new investments in America to take advantage of low gas prices and to supply extraction equipment.[ii]

Examples of firms bringing back manufacturing operations to the United States range from tiny firms to large firms, such as General Electric, which moved manufacturing of washing machines, refrigerators and heaters from China to a factory in Kentucky, which at one time had been expected to close. Another firm, Caterpillar, is opening a new factory in Texas to make excavators, but it still plans to expand its research and development activities in China.

A survey of American manufacturing companies by the Boston Consulting Group in April 2012 indicated that 37 percent of companies with annual sales above $1 billion were planning or actively considering shifting production facilities from China to America. Forty-eight percent of the very biggest firms, those with sales above $10 billion, indicated that they would bring production facilities to America. The Massachusetts Institute of Technology looked at 108 American manufacturing firms with multinational operations and found that 14 percent of them had firm plans to bring some manufacturing back to America and one-third were actively considering such a move. Another study, by the Hackett Group, a Florida-based firm that advises companies on offshoring and outsourcing, reached similar conclusions.

It may be ironic, but Chinese companies are now looking to manufacture in the United States. Keer, a textile company headquartered outside of Shanghai, China, is building yarn manufacturing lines in the Carolinas, bringing more than 500 jobs, due to low costs for energy, land, and cotton. The Carolinas at one time had been huge textile centers. Springs Mills in Lancaster once employed close to 20,000 people before the last textile factory closed in South Carolina in 2007. Lancaster County lost 11,000 textile jobs from 1995 to 2007. The greater Charlotte-Gastonia-Rock Hill region lost about 26,000 jobs at textile mills in the past 20 years.[iii]

But low energy prices and American ingenuity have brought manufacturing back to this country. All this is likely to change, however, as President Obama’s regulations go into effect, making electricity and natural gas prices escalate and forcing companies either to accept higher domestic operating costs or to move offshore.

In the longer term, advanced manufacturing techniques will likely alter the economics of production, making it far less labor-intensive. Robots, for example, are already lowering the share of labor in total costs. Cheaper, more user-friendly, and more dexterous robots are spreading into factories around the world, but these machines need energy to run. And if President Obama’s regulations raise energy costs, manufacturers will seek lower energy prices elsewhere, which will decrease the number of jobs in this country.

EPA’s Clean Power Plan

Early in August, EPA announced its final rule for the so-called Clean Power Plan, which requires the generating sector to cut carbon dioxide emissions 32 percent below 2005 levels by 2030. This and other finalized rules affecting the generating sector will shutter 90 gigawatts of coal-fired capacity and other fossil fuel technologies and direct the construction of wind and solar units instead, despite the fact that it is cheaper to keep existing generating plants operating than to build new ones. As a result, EIA expects residential electricity prices to be 16 percent higher in 2030 than they are today.

The use of low-cost natural gas in the generation sector, displacing coal generation, has already reduced carbon dioxide emissions in the sector by 15 percent from 2005 levels. But that is not a sufficient reduction for EPA. EPA wants the United States to reduce its carbon dioxide emissions from the electric generating sector by 773 million metric tons, while, according to the International Energy Agency, China is expected to increase its carbon dioxide emissions by over 12,000 million metric tons.[iv] The U.S. reduction is expected to lower temperatures by only 0.019 degrees Centigrade in 2100—a minuscule amount.[v]

Methane Rule

Also in August, the EPA finalized its methane rule, requiring oil and gas companies to reduce methane emissions by 40 to 45 percent from 2012 levels by 2025, despite the fact that the industry has already significantly reduced methane emissions while substantially increasing production.

According to EPA data, methane emissions from natural gas development have fallen steadily since 2005 (see the red line in the chart below). The blue bars in the chart indicate natural gas production, which is rising steadily even as less and less methane is emitted from that production. The chart shows that net methane emissions from natural gas production fell 38 percent from 2005 to 2013, even as natural gas production increased dramatically. Further, methane emissions from hydraulically fractured natural gas wells fell 79 percent from 2005 to 2013.



EPA’s Ozone Rule

EPA has finalized the so-called “Clean Power Plan” and the methane rule, but other regulations are still in the works. The proposed ozone rule, for example, is expected to be the most costly regulation, costing the economy $1.7 trillion in lost GDP through 2040.[vi]

The National Ambient Air Quality Standard (NAAQS) for ground-level ozone is an outdoor air regulation established by EPA under the Clean Air Act. Ozone is a gas whose molecules consist of three oxygen atoms. Ground-level ozone occurs naturally and also results from chemical reactions between nitrogen oxides and volatile organic compounds, which are emitted from industrial facilities, power plants, vehicle exhaust, and chemical solvents.

In March 2008, the EPA lowered the 8-hour primary NAAQS for ozone to its current level of 75 parts per billion. In November 2014, the EPA proposed lowering the ozone standard to a range of 65 to 70 parts per billion. By court order, EPA must finalize the standard by October 1, 2015.

These new ozone regulations proposed by EPA will put hundreds of counties across the country in violation of air quality laws. Being out of compliance on ozone means less development, fewer jobs, and the potential for significant, long-term damage to the economy. Worse, the newly proposed ozone rules are being considered while the previous ozone regulations from 2008 have not yet been fully implemented. States, counties, and communities across the country are still working to meet the current requirements, and a stricter standard would push more communities out of compliance.

According to a February 2015 economic study by the National Association of Manufacturers, a 65 parts per billion standard could reduce GDP by $140 billion, result in 1.4 million fewer jobs, and cost the average U.S. household $830 in lost consumption – each year from 2017 to 2040.[vii]


President Obama’s stringent environmental regulations, promulgated through the EPA, are driving up energy prices. Because of the timing of these regulations, most of the price increases will not be felt by the public until after his second term ends. Nonetheless, the headway the United States has made in bringing manufacturing back to America is being threatened. The result will be a loss of jobs that we cannot afford.

[i] The Atlantic, “The EPA’s New Methane Rules for the Oil and Gas Industry,” August 18, 2015.

[ii] The Economist, “Coming home,” January 19, 2013.

[iii] Charlotte Observer, “Textile manufacturing returns to Carolinas—by way of China,” August 8, 2014.

[iv] Institute for Energy Research.

[v] Cato.

[vi] Chamber of Commerce, “Ozone National Ambient Air Quality Standards,” June 29, 2015.

[vii] National Association of Manufacturers, “Costliest Regulation in History Coming Soon.”

EDITORS NOTE: The featured image is courtesy of Shutterstock.

How Sexist Is Your Office Temp? by Sarah Skwire

My Facebook wall is bursting with people arguing over a recent article from the Washington Post that claims that air conditioning in the office is sexist.

Women, argues Petula Dvorak, are naturally inclined to suffer more from the cold, so office thermostats set at 68 or 70 degrees keep men comfortable, but make women miserable. Her article strongly implies that this is done because men lack consideration for the comfort of others and because women are denied the power and the agency to get temperatures set where they want them.

I am a small cold woman who keeps two blankets in her office. I sympathize.

But despite my sympathy, I think Dvorak — and most of my Facebook friends — are missing an extremely important point: The fact that there are women suffering in overly air-conditioned offices is not a sign of how oppressed we are. It is a sign of how far we have come.

The economist Claudia Goldin has written persuasively about the long-term changes in women’s work over the course of the 20th century. She notes that the soaring rate of women’s labor force participation from the 1950s-1970s is part of a greater, century-long revolution. And it is that revolution that means that there are more and more women who are able to be in an office to begin with.

Once we’re in the office, we’re cold. But let’s not allow the chill to lull us to sleep. We can complain so loudly about the A/C because women are present in working environments in increasing numbers. That’s a good thing.

Dvorak gets a lot of mileage from her outrage over men’s office attire. They wear suits and ties and broadcloth shirts and are thus comfortable in air conditioning, while women dressed in seasonally-appropriate attire shiver from cold.

Why, she wonders, don’t men simply dress more appropriately?

Office dress codes are certainly part of the answer, but a larger part of the answer seems to be that women got a revolution that has missed men entirely — a revolution in dress.

Underneath her conservative suit, the working woman of the 1950s would have worn something like the Playtex Living Girdle, made of perforated rubber, and designed to produce the sleek figure required by the fashions of the time.

Rubber girdles certainly did that. But they were also hot, sweaty, and uncomfortable. Women who were freed of them by the new fashions of the ‘60s and the invention of pantyhose were nothing but grateful.

And the current generation of women — who have rejected even pantyhose as a relic of the past — are freer than ever… and colder. Ditching girdles and hose means that we have fewer layers between us and the office air conditioning. We’ve burned our foundation garments, but the fire hasn’t kept us warm.

I certainly don’t suggest returning to girdles or leaving the workplace in order to stay warm.

But I do think it’s dumb to blame the patriarchy, as represented by the guy in the next cubicle, for the fact that we’re cold.

We’re cold because we won the revolution. And now we have the power to request more equitable dress codes for our male colleagues, or to design offices with individualized climate controls, or to recognize that the world isn’t perfect, but that sometimes a little sweater can help.

Sarah Skwire

Sarah Skwire is a senior fellow at Liberty Fund, Inc. She is a poet and author of the writing textbook Writing with a Thesis.

Israel Approves Leviathan Offshore Gas Deal

Reuters reported that Israel has reached a deal to develop the important Leviathan offshore gas field after difficult negotiations with its development partners, Houston-based Noble Energy, Inc. and Israeli partner Delek Group:

Aug 13 Israel’s government said on Thursday it reached a deal that will pave the way for the development of Leviathan and two other offshore natural gas wells.

“The outline will bring Israel hundreds of billions of shekels in the coming years,” Prime Minister Benjamin Netanyahu told a news conference, saying he will present the agreement to the cabinet on Sunday for a vote.

The controversial deal initially revealed in June will allow Texas-based Noble Energy and Israel’s Delek Group to keep ownership of the largest offshore field, Leviathan. They are required to sell off other assets, including stakes in another large deposit called Tamar.

Critics say the agreement still leaves Noble and Delek with too much power since they would control most of Israel’s gas reserves.

Netanyahu, who has struggled to muster enough support for an agreement, earlier this week won crucial backing from the central bank.

What a difference a day makes. Noble Energy had threatened to walk after the narishkeit of Dr. Gilo and his Socialist minions reneged on a compromise deal last December. Now, as we have written, Israel and the trilateral alliance of Cyprus and Greece can develop a major source of energy in their respective Exclusive Economic Zones in the Eastern Mediterranean and Levant Basins. It was good to see the spikes in early trading for both Houston-based Noble Energy on the NYSE and Delek Group on the Tel Aviv Stock Exchange.

Sometimes, as the expression goes, Ha Shem works in mysterious, yet positive ways. Kudos to patient Israeli Prime Minister Netanyahu, Energy Minister Steinitz and Bank of Israel Governor, Dr. Karnit Flug.

EDITORS NOTE: This column originally appeared in the New English Review.

Catholic Archdiocese of Chicago has $100 Million Worth of Fossil Fuel Investments

The “Green” Pope Francis seems to be a bit of a hypocrite when it comes to the Catholic Church’s investments in fossil fuels. His push to fight climate change appears to apply to everyone but the Catholic Church.

Richard Valdmanis from Reuters reports:

[S]ome of the largest American Catholic organizations have millions of dollars invested in energy companies, from hydraulic fracturing firms to oil sands producers, according to their own disclosures, through many portfolios intended to fund church operations and pay clergy salaries.

This discrepancy between the church’s leadership and its financial activities in the United States has prompted at least one significant review of investments. The Archdiocese of Chicago, America’s third largest by Catholic population, told Reuters it will reexamine its more than $100 million worth of fossil fuel investments.

“We are beginning to evaluate the implications of the encyclical across multiple areas, including investments and also including areas such as energy usage and building materials,” Betsy Bohlen, chief operating officer for the Archdiocese, said in an email.

[ … ]

Dioceses covering Boston, Rockville Centre on Long Island, Baltimore, Toledo, and much of Minnesota have all reported millions of dollars in holdings in oil and gas stocks in recent years, according to documents reviewed by Reuters.

The holdings tend to make up between 5 and 10 percent of the dioceses’ overall equities investments, similar to the 7.1 percent weighting of energy companies on the S&P 500 index, according to the documents.

The United States Conference of Catholic Bishops’ guidelines on ethical investing warn Catholics and Catholic institutions against investing in companies related to abortion, contraception, pornography, tobacco, and war, but do not suggest avoiding energy stocks.

Read more.

Will all Catholic churches, schools, hospitals and related organizations stop using fossil fuels to save the planet?

It would seem that Pope Francis has yet to walk the walk, but he is good at talking the talk. To live up to Pope Francis’s encyclical, it would be necessary, as Jesus did, to shed all the trappings of fossil fuels.

I wonder if fossil fuels were used to cook the Last Supper.

Our Nuclear Energy Options — An Overview by Euan Mearns

With a few exceptions [1], environmental lobbies have tended to oppose nuclear power with a vengeance similar to their opposition to coal and natural gas. In certain quarters [2] this has changed with the promise of abundant, cheap and safe electricity that may be produced using thorium (Th) fuelled molten salt reactors. This guest post by French physicist Hubert Flocard places the status of molten salt reactor technology within the historical context of how the nuclear industry has evolved and examines some of the key challenges facing the development and deployment of this magical and elusive energy source. We have both written the extended summary below based on Hubert’s article that follows on after the summary. Hubert’s impressive bio is at the end of the post.

[1] James Lovelock, The Revenge of Gaia
[2] Baroness Worthington, Why Thorium Nuclear Power Shouldn’t be Written Off

Extended Summary

The world nuclear industry currently runs on Generation II and Generation III reactor technology. The presently active reactors (whether moderated by pressurised water – PWR – or boiling water – BWR) are said to belong to the GII generation while more modern versions such as the EPR or the AP1000 correspond to GIII. At the beginning of the twenty first century a forum was convened to establish an international collaboration to prepare the next generation of reactor technology (GIV). A number of design options were on the table (see below) among them molten salt reactors.

1) Liquid Sodium Fast Reactor (SFR)
2) Helium Cooled Fast Reactor (HeFR)
3) Liquid Lead Fast Reactor (LFR)
4) Supercritical Water Fast Reactor (SCFR)
5) Molten Salt Fast Reactor (MSFR)
6) Very High Temperature Thermal Reactor (VHTR)

With the exception of the MSFR, which is specifically designed to run on Th fuel, all the other technologies will run on U fuel. It is also worth noting that five of the six designs are fast breeder reactors, designed to consume any nuclear waste they may produce and to extend the life of the global inventory of U and Th available to us.

Periodic table from Web Elements.

To appreciate the evolution of reactor technology it is important to understand a little about the natural elements on Earth that can be made to fission following the capture of neutrons. They are the actinides, located at the bottom of the periodic table. Everyone has heard of uranium (U), thorium (Th) and plutonium (Pu), but fewer are aware of elements like protactinium (Pa), americium and curium. Some of these less common actinides do exist in nature in minute quantities for brief periods as part of the natural radioactive decay of U to Pb. Others result from the nuclear reactions happening in reactors or at laboratory accelerators.

The isotopes of interest are 235U, 238U and 232Th. Presently, the 235U isotope is by far the most useful because it is the only one which can easily be made to fission, releasing a substantial amount of energy. Thus 235U is described as fissile while 238U and 232Th are described as fertile. Today, 99.3 % of natural U is 238 and only 0.7 % is 235. That is because most of the 235U has already decayed away to stable Pb.

Out of these three isotopes only fissile 235U can be used to initiate a nuclear chain reaction such as those that occur in nuclear reactors or atomic bombs. To achieve a chain reaction it is necessary to enrich the uranium in its 235 isotope. For nuclear power, enrichment is typically about 3.7 %, i.e. a five-fold uplift in concentration as compared to natural uranium. For atomic bombs, the enrichment is much higher, but the same procedure is used, hence concern over civilian nuclear programs in certain countries.
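The arithmetic of enrichment can be sketched with the standard two-component mass balance. This is an illustrative sketch, not something from the article: the 0.3 % tails assay is an assumed typical value, and the function name is mine.

```python
# Uranium enrichment mass balance (a sketch; the 0.3% tails assay is an
# assumed typical value, not a figure from the article).
# Conservation of total uranium and of 235U gives:
#   F = P + T            (total mass: feed = product + tails)
#   F*xf = P*xp + T*xt   (235U mass)
# => F/P = (xp - xt) / (xf - xt)

def feed_per_product(xp, xf=0.0072, xt=0.003):
    """kg of natural-uranium feed needed per kg of enriched product."""
    return (xp - xt) / (xf - xt)

xp = 0.037  # ~3.7% enrichment for power-reactor fuel, as in the text
print(f"{feed_per_product(xp):.1f} kg natural U per kg of {xp:.1%} enriched fuel")
print(f"uplift in 235U concentration: {xp / 0.0072:.1f}x")
```

With these assumptions, roughly 8 kg of natural uranium are fed in per kg of 3.7 % fuel, and the concentration uplift comes out near the five-fold figure quoted above (the exact feed ratio depends on the tails assay chosen).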

While fissile 235U is required to initiate a chain reaction, the fertile 238U that makes up 96.3 % of the fuel also participates in energy production, since some of it is converted to fissile 239Pu. In this respect all U-based reactors breed fissile fuel by tapping into the fertile resource. Breeder reactors are simply designed to breed more fissile fuel than they consume.

Three important points need to be made before continuing. The first is that an MSFR can’t start by using only 232Th. The reactor will first require that either natural 235U or man-made 239Pu be added to initiate the fission chain reaction, since fertile 232Th cannot achieve criticality on its own. The second is that the MSFR is a breeder reactor, and environmentalists have in the past opposed breeder technology. In a breeder of any design, fertile 238U or 232Th isotopes are converted to fissile isotopes like 239Pu (U cycle) or 233U (Th cycle). An MSFR will run exclusively on the thorium cycle (i.e. without addition of U5 or Pu9) only once it has bred enough 233U to maintain the chain reaction. That will take time. The third is that the “clean” label some attach to MSFRs derives from the fact that they are ultimately designed to work in a closed cycle, as opposed to the open-cycle strategy adopted for most presently active reactors. In other words, the spent fuel is reprocessed and fissioned again and again until a stable regime is reached in which as many fissile isotopes are created as are destroyed. It has little to do with the fact that 232Th is used as the breeder fuel stock. A uranium-cycle fast breeder will also burn its “waste”. And as already mentioned, the idea underlying breeding is to greatly expand the fissionable resource by converting the abundant fertile isotopes (238U as well as 232Th) into the fissile variety.

This leads to a misconception about the quantities of nuclear waste generated by an MSFR. An MSFR burning 232Th fuel will not produce significantly smaller amounts of “waste” than a fast reactor burning 238U. It is just that as already detailed, recycling the breeder isotopes eventually removes them from the environment and stabilises the inventory within the reactor.

A further misconception is that MSFR technology employing 232Th as the fertile proto-fuel will eliminate risks of nuclear proliferation. While it is true that the 232Th cycle does not produce plutonium that may relatively easily be enriched to weapons-grade 239Pu, it does produce 233U instead, which may also be weaponised. In any case, a 232Th MSFR started today will require either 235U or 239Pu to initiate the fission reaction. Any country with the appropriate enrichment facilities could divert these isotopes and convert them to weapons-grade material if it so wished. Recent history has also shown that one does not really need a reactor to manufacture a bomb. It is enough to have efficient centrifuges.

In conclusion, the technical challenges of MSFR technology need to be considered. The molten fluorine-based salts that are envisaged need to work at temperatures in the region of 500 to 800 °C, and containment vessels and pumps need to be designed which resist erosion, corrosion and the neutron flux from this high-temperature salt. An MSFR requires a fuel reprocessing plant, and for the Th cycle no such plant has thus far been designed, built, tested and approved by safety authorities. Finally, there are well-understood safety protocols for GII and GIII reactors. The radically new approach offered by MSFR technology means that a whole new set of safe design principles needs to be developed.

At the end of the 1960s the Oak Ridge National Laboratory built and ran an experimental reactor, the MSR-E, designed to pave the way for MSFR technology. The experiment ran for four years. Apart from that realisation, the MSFR with a thorium-based fuel is a concept yet to leave the drawing board. It is worth pursuing, but the claimed virtues of near-inexhaustible resource, enhanced safety, less waste and elimination of weapons proliferation still need to be demonstrated.


There are people who believe that, within this century and probably even before 2050, nuclear energy should become a major component of the energy production system, if not for the entire world then at least for a large group of countries. They point to some valuable features of nuclear energy (centralized production of electricity and/or heat, reasonably low cost of final energy, production that can easily be adjusted to society’s needs, low CO2 emissions, small footprint, etc.). They are also well aware of some of its disadvantages (a bad public image worldwide that induces significant, unpredictable political interventions, limited availability of the natural resource, radiotoxicity of the waste, plant accidents with a related risk of releasing radioactivity, security against terrorist attacks, heavy capital investment only recouped over a long period, etc.). They simply think that, given the energy and climatic problems the world is facing now or will face in the not too distant future, the advantages more than balance the liabilities.

However, not all these people have the same nuclear energy on their mind.

For some, the basis of a sensible nuclear program for this century must rely first on the extensive experience accumulated on thermal-fission reactors for which the terms Generation II or Generation III (shortened in GII and GIII) have been coined and second on the already significant experience gained on fast-fission reactors. “Improvement and Optimisation” is their motto while Uranium (U) is their fuel. I belong to this group to which the adjective “conservative” can certainly be attached.

GII and GIII water-cooled and water-moderated reactors are the workhorses of the present nuclear-electricity production. If not stopped for political reasons, they will be performing their job for many more decades. On the other hand, the “fast” reactors cooled with liquid sodium have been tested successfully in many countries and together have already accumulated several hundred years of operation. They have reached a prototype status and even the pre-industrial stage. The main world safety authorities have already a thorough knowledge of the related safety questions. These reactors have also demonstrated their potential on issues such as electricity production, breeding of the fuel (a key to solve a future uranium resource shortage) and waste transmutation.

However, no western world safety authority – and therefore no utility – would consider today that their safety is such that they can be deployed at the industrial level. To simplify, one can say that they have not yet demonstrated the safety level achieved by GIII reactors which is now becoming the standard. Moreover, given the present very low price of natural uranium, they are not economically competitive.

For this reason, in the middle of the first decade of this century, a forum, the “Generation IV International Forum” (GIF), was launched, associating the major nuclear industrial nations of the world (with the notable exception of India; a member named “Europe” also allows some nations, such as Germany, to participate in GIF activities without having to state explicitly that they are GIF members). These nations gave themselves the task of defining the next generation of nuclear fission reactors (GIV).

According to GIF, the goals assigned to GIV reactors are the following: 1) Durability which involves a better usage of the natural resource and a minimisation of waste radiotoxicity 2) Economic performance 3) Safety and availability 4) Resistance to nuclear proliferation.

GIF identified six main lines of work suitable for an international cooperation: 1) liquid Sodium Fast Reactor (SFR); 2) Helium cooled Fast Reactor (HeFR); 3) liquid Lead Fast Reactor (LFR); 4) Super Critical water Fast Reactor (SCFR); 5) Molten Salt Fast Reactors (MSFR); 6) Very high temperature thermal reactor (VHTR). Except for MSFR all systems under study envisage uranium as their fuel. The MSFR will use thorium (Th) as a major component of its fuel. Option N°6, VHTR, being a thermal reactor precludes breeding from the start and thus very long term durability as far as the uranium resource is concerned. The rationale for keeping it within GIF is that working at high temperature and thus high Carnot efficiency, such systems will considerably extend the availability of the U resource. It should be added that other thermal-reactor options using uranium fuel and either supercritical water or molten salt as coolants are also being considered on the side-lines of GIF.

As a matter of fact, the selection of the GIV reactor options reflects as much the evaluation of their intrinsic interest as the willingness of at least a fraction of the international expert community to work on them (many more nuclear options do exist). Not too surprisingly, the main effort is presently focused on the SFR (liquid Sodium Fast Reactor), which appears closer to reaching the GIF’s stated goals than any of its competitors. Of course, since the Fukushima accident, which has set nuclear energy research and industry on the defensive and modified its priorities, activities within the GIF have slowed considerably.

All the GIF-retained options other than SFR can certainly be called “innovative” (as opposed to my definition of “conservative”). Among them, the one using molten salt and thorium-based fuel (MSFR) has gained many supporters among the public, if not necessarily within the community of experts. I believe that some of the enthusiasm for thorium and the MSFR is misplaced in view of the present scientific and technical situation, keeping in mind that I am concerned with energy production for the 21st century, not for the centuries beyond. Because the text that follows argues that some supporters advance overly simplistic arguments, I would like to make it clear that I think MSFR and thorium fuel are definitely worth both consideration and intensive research.

First, the fact that the MSFR was retained by the international community of experts working within the programme of GIF is a sure sign of its viability. Second, thorium and molten salts have an old history dating almost from the end of the Second World War, and some significant advances have been made. The main achievement was realised by the Oak Ridge National Laboratory (ORNL) with the successful MSR-E experiment (which used uranium fuel). Then, over the seventies, ORNL teams worked on the design of the MSBR, a 1 GWe system intended to have thorium within its fuel (the B stands for “breeder” and the “e” indicates the expected electric power, which is of course lower than the thermal power). However, at the beginning of the eighties, in the US breeder competition, the MSBR system lost to the SFR. The decision was a complex one, but the smaller breeding capacity of the MSBR played a part in it. At that time, a review conducted by the French utility EDF and the French atomic energy commission (CEA) analysed the MSBR project and concluded that nothing could be identified which would eliminate the option, whether from the point of view of chemical or material science, or of nuclear and thermal-hydraulics technology. There were still many difficult open questions but no obvious showstoppers.

Therefore, keeping molten salts and thorium as an open research option for the future makes sense today as it did earlier. There are many good reasons to investigate it that I am not going to enumerate. Here, after this long introduction, I will make a survey of what I believe are the false justifications (myths) and the many unsolved problems which make it doubtful that MSFR and Thorium can play a significant role in the global power generation of this century.

Some myths concerning thorium and molten salt reactors

Myth 1: specificity of an “inexhaustible” Th natural resource

Only elements of the actinide region of the Mendeleyev periodic table can be made to fission following the capture of a neutron. As they fission they in turn emit neutrons, allowing a chain reaction to be established under appropriate physical and technical conditions. Of all the actinides which existed when the Earth was formed, about 4.65 billion years ago, only two have survived in sizeable quantities: thorium and uranium. Thorium only exists today as the isotope 232 (232Th, shortened as Th2). From its very long radioactive half-life, one can infer that almost all the Th2 which existed at the birth of the Earth is still around us. Natural uranium contains two isotopes, 238U (U8) and 235U (U5). While half of the original U8 is still there, only 1/100th of the original U5 has survived; the rest has disappeared via natural radioactive decay processes ending at a stable Pb isotope. This is reflected in the present natural uranium composition: 99.3 % U8 and 0.7 % U5. When nuclear engineers or opponents of nuclear energy talk about a limited uranium resource, what they have in mind is U5, not natural uranium (or U8), which is a hundred times more abundant.
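The survival figures quoted above follow directly from exponential decay, N/N0 = 0.5^(t/t_half). As a quick check, using well-established half-lives and the article's 4.65-billion-year age of the Earth:

```python
# Surviving fraction of a radioisotope after time t: N/N0 = 0.5**(t / half_life).
# Half-lives (well-established values, in Gyr): 232Th ~ 14.05, 238U ~ 4.468,
# 235U ~ 0.704; the Earth's age is taken as 4.65 Gyr, as in the text.

def surviving_fraction(t, half_life):
    return 0.5 ** (t / half_life)

AGE = 4.65  # Gyr
for name, t_half in [("232Th", 14.05), ("238U", 4.468), ("235U", 0.704)]:
    frac = surviving_fraction(AGE, t_half)
    print(f"{name}: {frac:.3f} of the primordial stock remains")
```

This reproduces the text's figures: about 80 % of the primordial thorium survives ("almost all"), about half of the 238U, and roughly 1/100th of the 235U.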

For nuclear engineers, Th2 and U8 which have an even number of neutrons belong to the same category: the “fertile” isotopes while U5 is said to be “fissile”. To be started, any reactor needs a fissile element. As a matter of fact, for the vast majority of reactors in activity the concentration of U5 within natural U is not sufficient, hence the need for enrichment typically up to 3.7 %. Note that a few billion years ago, the ratio U5/U8 was larger than today, so that natural reactors could operate spontaneously as happened for instance at the Oklo site in Gabon. In today’s reactors, the presence of fertile U8 within the fuel pins is also important for energy production. Indeed, a small fraction of this U8 swallows one neutron and is transmuted (in two steps) into the isotope 239 of plutonium (Pu9) which because it is also fissile can contribute to the chain reaction which ultimately produces energy. In other words some small amount of “breeding” is already occurring in thermal reactors.

Th2 is the nuclear equivalent of U8 (233U or U3 plays for Th2 the role that Pu9 plays for U8). Because there is no fissile isotope present within natural thorium, in order to start a thorium-fuelled reactor one must add first some fissile material. Since it can’t be U3 which does not exist on Earth it could be U5 (from the same natural uranium which provides the fuel of today reactors) or Pu9 (coming for instance from the burnt fuel of standard reactors) or other fissile materials to be found for instance in the radioactive waste of standard reactors. In other words Th2, like U8, only acquires the status of an energy resource when breeding is envisaged. The only available natural resource to initiate breeding is U5.

In some presentations to the public, “breeding” appears to perform a sort of miracle: “producing more fuel than was present within the input”. It should rather be described as “producing more fissile material than was present within the input”. It is the energy potential of a fertile isotope (Th2 or U8), a kind of “fission-proto-fuel”, which is then exploited following an appropriate transmutation into either U3 or Pu9.

The U8 resource appears almost as inexhaustible as the Th2 resource (a factor of 2 or 4 less does not really change the issue, given the geology-related uncertainties). In addition, over the years, the U8 resource has acquired a significant advantage: it does not have to be mined anymore. It is already on the shelves in large quantities, at least in countries which have a nuclear enrichment industry, and its commercial value is zero, if not negative. As a matter of fact, U8 is sometimes considered a sort of “waste” extracted from natural uranium to obtain the U5-enriched fuel for standard reactors. Indeed, for each U8 nucleus still kept in the fuel of a GII or GIII reactor, about four U8 nuclei have been removed from natural uranium and stored away. As an illustration, the stock of U8 presently stored in France corresponds to about one thousand years of that country’s present energy production in fast reactors. Note that the former French rare-earth chemical industry has also left on the shelf a quantity of Th2 amounting to about 100 years of nuclear energy production in an MSFR. The “nuclear-fertile” resource, Th2 as well as U8, is plentiful.

In fact if there is to be a resource shortage preventing a future GIV breeder-reactor generation to replace the reactors of GII and GIII generations, it will certainly not be one of fertile isotopes (Th2 or U8) but rather a shortage of fissile elements and more specifically one of Pu9. It also appears that only those countries which have exploited PWR or BWR reactors for a long time will have produced enough Pu9 within the burnt fuel of their GII and GIII reactors to be in position to start reactors of the GIV generation at a significant level.

Myth 2: the waste of a Th-fuelled reactor is less dangerous

There are a few points to keep in mind when discussing nuclear waste:

1) Defining nuclear waste is not simple when breeding and recycling are involved (always the case with Th). Indeed, the only unambiguous waste produced by a nuclear reactor consists of the fission products. All elements, Th, U, Pu or other isotopes generally classified as “minor actinides”, present in the burnt fuel when it is discharged from the reactor vessel still have potential energetic value if they can be made to fission. It is thus a matter of technical, safety and political decisions whether they belong in the waste or whether they are a fuel to be recycled in the next stage of the operation of the nuclear system.

2) To illustrate this last sentence we can, for instance, consider how most countries today define the nuclear waste resulting from the operation of their GII or GIII reactors. These countries have opted for the “open-cycle” or “once-through” strategy: there is no reprocessing; the fuel pins and their casing discharged from the reactor are considered waste and destined for an ultimate repository. A typical composition of the burnt fuel of a GII reactor is: fission products 5 %, fissile isotopes 1.5 %, and fertile isotopes 93.5 %. Thus 95 % of what is today defined as nuclear waste has, “fission-wise”, an energetic potential. One can also note that the ratio of the fissile mass output (U5 plus Pu9) to the fissile mass input (only U5) is close to 40 %. The choice has thus been made to send to the waste a significant amount of fissile isotopes, which on the other hand are known to be necessary to start any reactor and also cost energy to produce via enrichment of natural uranium. In a sense, the corresponding underground waste repository can be caricatured as a “man-made plutonium mine”.
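The burnt-fuel bookkeeping in point 2 is easy to verify. A minimal check, using the composition quoted above and the 3.7 % fresh-fuel enrichment from the earlier discussion:

```python
# Bookkeeping check of the burnt-fuel figures quoted above (mass fractions).
fresh_fissile = 0.037  # 235U fraction in fresh 3.7%-enriched fuel
burnt = {
    "fission products": 0.05,
    "fissile isotopes": 0.015,   # U5 plus Pu9 at discharge
    "fertile isotopes": 0.935,
}

# The quoted composition sums to 100%, and 95% of it (fissile + fertile)
# still has an energetic potential.
assert abs(sum(burnt.values()) - 1.0) < 1e-9

ratio = burnt["fissile isotopes"] / fresh_fissile
print(f"fissile output / fissile input = {ratio:.0%}")  # the article's 'close to 40 %'
```

The ratio of fissile mass out to fissile mass in is 0.015 / 0.037, which is indeed close to the 40 % stated in the text.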

3) In a comparison of the thorium cycle versus the uranium cycle, the radiotoxicity of the fission-fragment waste, which dominates the total radiotoxicity for the first centuries, can be set aside. It is roughly the same for both cycles. Thus any difference between the two cycles will only become visible after a few centuries have passed, say 500 years.

4) How to define the danger associated with a waste which has been sent to a permanent underground storage is also a matter of discussion. One can consider the total radiotoxicity of what is being stored. This is the radiotoxicity that would be encountered by, for instance, somebody not too expert in questions of geology who, searching for oil in the wrong place, drills right into the underground nuclear waste repository. After a few centuries, this radiotoxicity is dominated by the actinide content of the waste. Since it is not the same for the two cycles, we shall return to that point later. On the other hand, one may consider the small radiotoxicity – often smaller than natural radiotoxicity – which after many millennia escapes to the surface through the geological barrier (the repository is typically a few hundred metres below the Earth's surface). Since the mobility of actinides in the ground is very small, the very-long-term escaping radiotoxicity will mostly correspond to some long-lived isotopes of very mobile fission-fragment elements (for instance Zr or I). Here again there won't be much difference between the thorium and uranium cycles. For this reason, from here on I will only discuss the radiotoxicity associated with the actinide elements, namely that which would be met by somebody who breaks into an underground nuclear waste storage.

5) Full recycling means that all the actinides coming out at one stage are reinserted into the fuel of the next stage. Thus the large radiotoxicity within the burnt fuel does not vanish; it is just made to move around circularly from the reactors to the separation cells to the fuel production factories and back to the reactors within the diverse nuclear-industry components. On the other hand, if the recycling process (whether for the U-cycle or Th-cycle) is perfect there won’t be any radiotoxic waste stream other than that of the fission products.

6) After each pass through the reactor, recycling implies that a chemical separation is performed on the burnt fuel. Because no chemical process is perfect, the stream of actinides effectively going out to the waste, which determines its middle-to-long-term radiotoxicity (the short term is governed by fission fragments), depends on the efficiency of the chemical separation techniques. For the uranium cycle, efficiencies above 99.9 % have been demonstrated. The corresponding figures for the thorium cycle are not known.
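
To see what a separation efficiency means for the waste stream, here is a minimal sketch. The 99.9 % figure is the one quoted above for the uranium cycle; the one-tonne throughput is an arbitrary illustrative number, not from the text.

```python
def waste_stream(actinide_mass_kg, efficiency):
    """Actinide mass (kg) leaking to the waste in one reprocessing pass."""
    return actinide_mass_kg * (1.0 - efficiency)

# With 99.9 % efficiency, one tonne of recycled actinides loses about 1 kg
# to the waste per pass; a tenfold worse efficiency means a tenfold larger
# waste stream.
print(round(waste_stream(1000.0, 0.999), 6))  # 1.0
print(round(waste_stream(1000.0, 0.99), 6))   # 10.0
```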

7) Full recycling is also not currently envisaged for the U-Pu SFR technology. It is generally considered that only plutonium will be recycled, while minor actinides such as americium (Am) and most certainly curium (Cm) will be sent to the waste. This strategy, along with the efficiency of the chemical separation, determines the time evolution of the radiotoxicity of the waste stream of SFR reactors. It is several orders of magnitude below that of the burnt-fuel stream of today's reactors. I will come to that point later because I believe it gives a misleading image of the benefit of recycling as concerns waste reduction (see 10 below).

8) Assuming that the not-yet-known chemical separation efficiencies for the thorium cycle are the same as those already demonstrated for the uranium cycle, it can be shown that the radiotoxicity of the actinide waste stream of a MSFR reactor working in its “asymptotic” regime, that is, a system relying exclusively on the Th2-U3 cycle, is lower by at least an order of magnitude than that of a SFR reactor working exclusively within the U8-Pu9 cycle and sending americium and curium to the waste. However, this good point should also be taken with a grain of salt.

9) Indeed, following a long period of production with GII and GIII reactors, one can envision extracting from their stored burnt fuel all the Pu9 necessary to start an “asymptotic” SFR reactor immediately. This is not possible for a MSFR: the U3 resource does not exist, and no “asymptotic” MSFR reactor can be started today. The first MSFR reactors will have to use either U5 or Pu9 – or some other isotopes of higher elements – to be started. Via transmutation they will therefore produce the same undesirable elements (Am and Cm) and thus the same kind of waste as SFRs. It will take almost a century before a MSFR breeds enough U3 to avoid tapping into the U5 and Pu9 resource and thus become “asymptotic”.

10) Finally, many presentations on the actinide waste generated by fast reactors implicitly assume that mankind will rely on them forever. In other words, these presentations only consider the radiotoxicity of the waste stream which leaves the chemical reprocessing factories while electricity is still being produced by reactors. In such a case, the radiotoxic gain over the present situation (the nuclear burnt fuel is disposed of without reprocessing into the long-term repository as it leaves the reactor) is indeed large (several orders of magnitude). On the other hand, if one day fusion becomes an economically viable option, or if there is a major breakthrough on the energy-storage question rescuing intermittent renewable energies, humans may decide to stop producing electricity via nuclear fission. At that point, all the radiotoxic isotopes present within the system – reactors, chemical and fuel-fabrication factories – become a waste that must be added to the stream of the earlier electricity-production period. If one assumes for instance that it will take 200 years of operation of breeder reactors before one reaches this “end of the game” situation, one finds that it is the addition of this “in-cycle” radiotoxicity which mostly determines the radiotoxic evolution of the waste during the following millennia. Then, the radiotoxicity of the total MSFR waste will only be slightly lower than that of a SFR. There will also still be a small gain over the present strategy in which only GII and GIII reactors are used and their burnt fuel is disposed of without reprocessing. Typically we are talking here of decreases by one order of magnitude if everything in the complicated recycling scheme works optimally.

11) It is doubtful that such a small gain can overcome the opposition to the usage of nuclear energy from somebody whose main concern is the very-long-term radiotoxicity of the waste. Nor will it enable societies to find a stable and long-term safe waste-management solution, if only for containing the radiotoxicity of the fission fragments. One can say that the nuclear waste issue – and the need for underground repositories – is not going to be removed by SFRs or by MSFRs. At most, it will be alleviated, which certainly is a plus. In addition, one may note that at least it will be up to the countries which have benefitted from the associated electricity production to solve their own nuclear waste problem. This appears more ethically defensible than fossil-fuel-powered electricity production, in which the CO2 emitted by the beneficiaries of the electricity is graciously “offered” to the rest of the world.

12) To conclude this section on nuclear waste, one should not forget that volume, chemical properties and short-term heat production also play an important role when it comes to designing a repository.

Myth 3: The thorium cycle will eliminate the nuclear proliferation issue

A bomb needs fissile material. Neither U8 nor Th2 is a good material for making bombs. It is certainly the case that no bomb has been produced using U3, since there is no U3 available. Whether that will still be the case when U3 becomes plentiful and is routinely handled in reprocessing units is certainly not clear to me. Discussions about larger or smaller critical masses are essentially irrelevant here. In addition, as was discussed above, for a very long time a MSFR will be using U5 or Pu9, which means that the possibility of diverting these isotopes for a dangerous purpose will remain.

I believe one should not count on the physics (Th vs U) or the technology (MSFR vs SFR) to stop humans from doing foolish things. Non-proliferation has certainly technical aspects which require nuclear expertise to be present and heard in international discussions but, for me, proliferation is mostly a political issue.

Problems still to be solved for a molten-salt thorium-fuelled reactor

Here, I list some of the problems still faced by the MSFR technology.

Problem 1: Design and Material science.

The associated questions concern the salt, the vessel and the heat exchanger.

  • In a MSFR, the salt acts both as a heat carrier and as a nuclear-fuel carrier. It also has some moderating effect (i.e. it slows down neutrons), which precludes, for instance, its usage for breeding with the uranium cycle. The salt must stay stable within a wide range of rather high temperatures (typically from 500°C to 800°C). A family of fluoride-based salts is presently being considered. These salts should resist the high neutron fluxes within the vessel (their chemical structure must remain intact and their elements should not suffer transmutation). They should dissolve the actinides of the fuel at the required concentrations and keep them dissolved all along the circuits of the reactor (vessel, heat exchanger and connecting pipes) under variable temperature and fluid-velocity conditions, so as not to create unwanted deposits of nuclear material.
  • The material for the vessel and the heat exchanger (Ni-based alloys are being considered) should resist both mechanical and chemical corrosion by the salts on the inside surface and oxygen corrosion at high temperature on the external surface.
  • Salt is not as good a heat carrier as liquid metals. The design of heat exchangers capable of rapidly (typically less than 10s) extracting heat while resisting the mechanical corrosion by a fast moving salt is still a challenge.
  • It has not yet been demonstrated that valves and pumps can work reliably for many years with such salts under the planned temperature and fluid-velocity conditions.
  • The most harmful fission fragments that can poison the reactor must be eliminated on-line via the helium bubbling technique. This has not yet been demonstrated in situations close to those that will exist in a future MSFR.

Each of these points must be developed to a level that wins the agreement of the safety authorities.

Problem 2: Chemistry of the combined uranium and thorium cycles

A thorium-fuelled molten-salt reactor has to be coupled to a highly efficient chemical unit to reprocess the fuel and the salt. The element-separation efficiencies should be as high as those which have already been reached at units designed for reprocessing within the uranium cycle. Presently, the scientific knowledge and technological knowhow needed to build a working prototype of a Th-cycle reprocessing unit with such performances does not exist.

Two reasons for this situation can be advanced. First, the amount of man-years of work on the thorium cycle is minuscule compared to that already spent on the uranium cycle. Second, the chemistry is different: some oxidation-potential properties of Th are not as favourable as those of U. Moreover, since U3 is not available and the U5 resource is limited, the first MSFRs are generally presented as “nuclear waste burners” which will use (and destroy) the plutonium in the waste of GII and GIII reactors (some even mention higher actinides) as their initiating fissile isotope. This makes the chemistry more complicated, since it must be able to handle simultaneously elements belonging to both the Th and U cycles.

Problem 3: Design of a global strategy for safety

The general philosophy underlying the safety scheme of today's reactors was elaborated over many years. It relies on the so-called “defence in depth”, which requires the existence within the reactor of three barriers that have to be breached before any radiotoxic material is released to the outside world. Typically, in GII and GIII reactors, they correspond in succession to the metallic envelope of the fuel pins, the boundary of the primary circuit (vessel, primary heat exchanger) and finally the reactor building.

Even when reprocessing is performed (the situation in France), so that other sets of safety regulations have to be defined, approved and enforced for the chemical separation unit and the fuel-pin fabrication factory, this does not affect the general safety programme for the reactor itself. Indeed, there is still a clear physical separation between these three components of the global nuclear system. This means that the safety scheme as it exists today for GII and GIII reactors, and as it is understood jointly by the designers of reactors, the electric-utility operators and the members of safety authorities, can also be applied to SFRs. These three groups of experts may certainly argue over the implementation of the various safety items and their performance levels, but at least they agree on the goals and they share a common safety language.

Nothing of the sort exists for the MSFR in which at least one barrier is a priori missing (the metallic envelope of the fuel) and which combines on the same site the reactor and the chemical reprocessing unit whose activities directly affect each other. It is my guess that no significant work has been done to define a safety scheme for molten salt reactors since the MSBR was abandoned in the first half of the eighties. What competence on this subject existed at that time is probably either obsolete today in view of the steady reinforcement of nuclear safety or simply lost. This competence has to be rebuilt, something which today appears rather problematic.


In my opinion, thorium and molten-salt reactor technologies belong firmly in the domain of research. They certainly have a potential which deserves scientific and technical investigation. On the other hand, given the present situation of the nuclear-energy research institutes of the Western world and the general decline in their enrolment of high-quality, well-trained young engineers, it is improbable that much work will be invested in such an innovative, far-reaching but also risky option. Therefore, if nuclear energy is to provide a significant contribution to the world energy mix of the 21st century, it is doubtful that thorium and molten-salt technologies will be ready in time to take part.


[1] The adjective “thermal” refers here to the average kinetic energy of neutrons as they impact the heavy nuclei in the nuclear fuel. It is close to 1/40 eV (corresponding to roughly 290 K, i.e. room temperature). On the contrary, in a “fast” reactor the fission neutrons are not slowed down by water, so that their kinetic energy remains in the MeV range, a factor of about 10⁷ above that in a thermal reactor. Without getting into nuclear-physics details, it suffices to say here that only fast neutrons allow efficient breeding, at least for the uranium cycle. The situation is different for the thorium cycle, which can breed over a wide range of neutron kinetic energies, albeit less efficiently than in a fast U-Pu reactor.
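
The footnote's numbers follow from the standard relation E = k_B·T between thermal energy and temperature. A quick check, using textbook constants rather than figures from the article:

```python
k_B = 8.617e-5           # Boltzmann constant in eV/K (textbook value)

E_thermal = 1.0 / 40.0   # "thermal" neutron kinetic energy, 0.025 eV
T = E_thermal / k_B
print(round(T))          # 290 K: room temperature

E_fast = 1.0e6           # fast fission neutrons, ~1 MeV expressed in eV
print(f"{E_fast / E_thermal:.1e}")  # 4.0e+07: the ~10^7 factor
```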

Short Bio for Hubert Flocard

A former student of the Ecole Normale Supérieure (St Cloud) Hubert Flocard is a retired director of research at the French basic science institute CNRS. He worked mostly in the theory of Fermi liquids with a special emphasis on nuclear physics. He has taught at the French Ecole Polytechnique and at the Paris University at Orsay. He was for several years a visiting fellow of the Lawrence Berkeley Laboratory and he spent a year as visiting professor at the theory department of MIT (Cambridge). He has worked as an editor for the journals Physical Review C and Review of Modern Physics (APS, USA) and Reports on Progress in Physics (IoP, UK). He has chaired the nuclear physics scientific committee INTC at CERN (Switzerland). When the French parliament asked CNRS to get involved in research on civilian nuclear energy, he was charged to set up and to manage the corresponding CNRS interdisciplinary programme. He still acts as a referee to evaluate research projects submitted to Euratom.

EDITOR'S NOTE: This column originally appeared on the Energy Matters website. The featured image shows the inner workings of a Molten Salt Fast Reactor.

Government Ruins the Dishwasher (Again) by Jeffrey A. Tucker

The regulatory assault on the dishwasher dates back at least a decade. For the most part, industry has gone along, perhaps grudgingly but also with a confidence that dishwashers would survive. Surely government rules wouldn’t finally make them useless.

But the latest regulatory push by the Department of Energy might have finally gone too far. The DoE says that a load of dishes can't use more than 3.1 gallons. This amounts to a further intensification of “green” policies that are really just strategies to wreck the consumer experience.

The agency estimated that this would “save” 240 billion gallons of water over three decades. It would reduce energy consumption by 12 percent. It would save consumers $2 billion in utility bills.

But as with all such estimates, these projections have three critical problems.

First, saving money and resources is not always an absolute blessing if you have to give up the service for which the resources are used. Giving up indoor plumbing would certainly save water, just as banning the light bulb would save electricity. The purpose of resources is to use them to make our lives better.

Second, the price system is a far better guide to rational resource use than bureaucratic diktat. If the supply of water or electricity contracts, prices go up and consumers can make their own choices about how to respond. This is true with one proviso: There has to be a functioning market. This is not always true with public utilities.

Third, the bureaucrats rarely consider the possibility that people will respond to rationing by using resources in a different way. A low-flow toilet causes people to flush two and three times, a low-flow showerhead prompts people to take longer showers, and so on, with the end result of even more resource use.
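
The rebound argument above is just arithmetic. With hypothetical figures (a 1.6-gallon low-flow toilet replacing a 3.5-gallon older model; neither number is from the text), three flushes already exceed one old-style flush:

```python
# Hypothetical gallons-per-flush figures, for illustration only.
low_flow_gpf = 1.6
old_style_gpf = 3.5

triple_flush = 3 * low_flow_gpf
print(round(triple_flush, 2))        # 4.8 gallons
print(triple_flush > old_style_gpf)  # True: the mandated "saving" reverses
```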

What does breaking the dishwasher accomplish? It drives us back to filling sinks or just running water over dishes for 10 minutes until they are all clean, resulting in vastly more water use.

The Association of Home Appliance Manufacturers, which has quietly gone along with this nonsense all these years, has finally said no.

“At some point, they’re trying to squeeze blood from a stone that just doesn’t have any blood left in it,” said Rob McAver, the lead lobbyist.

The Association demonstrated to the regulators that the new standards do not clean the dishes. They further pointed out that this can only lead to more hand washing. The DoE now says it is revisiting the new standards to find a better solution.

All of this is rather preposterous, since dishwashers are already performing at a far lower level than they did decades ago. Even when I was growing up, they were getting better, not worse. You could put dirty dishes in, even with stuck-on egg and noodles, and they would come out perfectly clean.

I started noticing the change about five years ago. It was like one day to the next that the dishes started coming out with a gross-me-out film on the glasses. I thought it was my machine. So I bought a new one. The new one was even worse, and it broke within a year. Little by little, I started hand washing dishes first, just to make sure they were clean.

It turns out that this was happening all over the country. NPR actually discerned this trend and did a story about it. The actual source of the problem was not the machine or the user, but something that everyone had taken for granted for generations: the soap itself.

The issue here is phosphorus. The role of phosphorus in soap is critically important. It is not a cleaning agent itself but a natural chemical that unsticks the soap from fabrics and surfaces generally. You can easily see how this works by adding phosphorus to a sink full of suds. It attacks the soap and causes it to bundle up in tighter and heavier units, taking oil and dirt with it and pulling it down the drain. It is the thing that extracts the soap, making sure that it leaves surfaces.

Painters know that they absolutely must use phosphorus to prepare surfaces for painting. If they do not, they will be painting on a dirty, oily surface. This is why the only phosphorus you can now find at the hardware store is in the paint department (sold as trisodium phosphate). Otherwise, it is gone from all detergents that you use on clothes and dishes, which is a major reason why both fabrics and dishes are no longer as clean as they once were.

Why the war on phosphorus? It is also a fertilizer. When too much of it is dumped into rivers and lakes, algae growth takes over and kills off fish. The bulk of this comes from large-scale industrial farms in specific locations around the country. Regulators, however, took on the easy target of domestic soaps, and manufacturers faced pressure to remove it from their soaps.

Now it is impossible to get laundry or dish soap with phosphorus as part of the mix. If you want clean, you have to physically add your own by purchasing trisodium phosphate in the paint department and adding it to the mixture by hand.

Welcome to regulated America, where once-fabulous consumer inventions like refrigerators, freezers, washing machines, and dishwashers have been reduced to a barely functioning state. The reasons are always the same: 1) phosphorus-free detergent, 2) a fetish with saving water, 3) weaker motors that use less electricity, 4) more tepid water due to low default settings on water heaters, and 5) reduced water pressure in general.

Put it all together and you have an array of products that no longer function in ways that make our lives better. There is an element of dystopia about this, especially given that these household appliances were first invented and widely deployed in postwar America. This was the country where women, in particular, first started to enjoy the “freedom from drudgery.” It was machines as much as ideology that began to enable women to cultivate professional lives outside the home.

No, we are not going to be forced back to washboards by the river anytime soon. But suddenly, the prospect of having to hand wash our dishes does indeed seem real. If the regulators really do get their way, functioning dishwashers could become like high-flow toilets: contraband to be snuck across borders and sold at high black-market prices.

It seems that the regulators can’t think of much to do these days besides ruining things we love.

Jeffrey A. Tucker

Jeffrey Tucker is Director of Digital Development at FEE, CLO of the startup, and editor at Laissez Faire Books. Author of five books, he speaks at FEE summer seminars and other events. His latest book is Bit by Bit: How P2P Is Freeing the World. Follow on Twitter and Like on Facebook.

Fossil Fuels and Mankind by Euan Mearns

It has become popular to demonise fossil fuels (FF). Pop stars, press, politicians and now Pontiffs speak with a single voice:

We know that technology based on the use of highly polluting fossil fuels – especially coal, but also oil and, to a lesser degree, gas needs to be progressively replaced without delay. Until greater progress is made in developing widely accessible sources of renewable energy, it is legitimate to choose the lesser of two evils or to find short-term solutions. But the international community has still not reached adequate agreements about the responsibility for paying the costs of this energy transition.


In this post I want to take a brief look at what FF have done for humanity and the environment. I will argue that in the 19th Century FF first of all saved the whales from extinction, and then, by averting wholesale deforestation of the planet's surface, FF saved multiple ecosystems from destruction and as a consequence averted the extinction of thousands of species.

Figure 1.

Figure 1 Population growth (blue line), right-hand scale. Fossil fuel consumption (million tonnes oil equivalent), left-hand scale. The exponential growth in population would not have been possible without FF. We all therefore owe the fabric of our society and our very existence to the use of FF over the past century or more.

Energy and Man

Every human being on Earth requires energy to survive (see list on Figure 1). Be it a handful of rice for the poorest Bangladeshi or the excesses of suburban life in the West, everything we do requires energy, and in 2014 86% of that energy came from FF and 11% from legacy hydro and nuclear plants. Only 3% came from alternative sources. Worryingly, in a step back towards 19th-century squalor, much of that 3% came from felling and burning forests.

Figure 2.

Figure 2 This chart shows per capita productivity (a proxy for income) on the Y-axis and per capita energy consumption on the X-axis. The data for each country represent a time series starting in 1970 and normally progressing with time towards greater income and energy consumption. It is plain to see that there is great disparity in the per capita income and per capita energy consumption between countries. As a general rule, developing countries are striving to become wealthy like the OECD and hence show year on year growth in income AND energy consumption. See for example China, Turkey, Brazil and Belarus. To become more wealthy and more prosperous, in the common sense, requires us to use more energy.

It is simple and simplistic to make the argument that there should be a more equitable distribution of wealth and energy consumption. It is certainly rational to propose the reduction of waste and improved energy efficiency in the West. But competition and survival of the fittest is in our genes and makes us who we are. And there are certain benefits that flow from the wealthy to the poor: inoculation against deadly infectious diseases, to name but one.

I am not arguing here in favour of greater polarisation of wealth but merely making the observation that it is a natural consequence of the socio-economic models that appear to have served us well. I would warn against the growing politics of envy.

To become wealthy, the poor need access to clean drinking water, sanitation, food, and housing. All this requires energy and natural resources.  The simplest and most economic way to provide this is through coal or gas fired power stations and the construction of electricity grids. To deny the poor access to FF is to condemn them to poverty for ever. It is fantasy to believe that the poor can be made wealthy (in the sense that the OECD is wealthy) by deployment of expensive and intermittent renewable energy. Like us, they may become wealthy only from using cheap, reliable and predictable energy supplies. This is not to say that there is no place for niche deployment of renewable energy in some developing countries.

Saving the Whales

During the 19th Century, global population doubled from approximately 0.8 to 1.6 billion (Figure 1). Throughout Europe and N America this coincided with a process of industrialisation, urbanisation and war. Resource consumption was on the rise and, as we shall see in the following section, forest timber was a key source of building material and fuel. But neither timber nor coal (at that time) could provide the light required in the cities that were being built, and it is this niche that was filled by whale oil.

The production of whale oil grew exponentially from 1815 to 1845 and thereafter declined following a classic “Hubbert curve” (Figure 3). At the same time we know that whales were almost hunted to extinction, and this is often held up as an example of over-exploitation of a finite resource. Post-peak whale oil production saw prices rise and become volatile, suggesting a continued demand for whale oil that could not be met by supply. But the market situation is made more complex by the fact that, just in the nick of time for the whales, rock oil was discovered in Pennsylvania in the 1850s. It was found that rock oil could be distilled into a number of fractions and that one of those, kerosene, was ideal as lamp oil.
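
For readers unfamiliar with it, a Hubbert curve is the bell-shaped derivative of a logistic: production peaks when roughly half the ultimately recoverable resource has been extracted, then declines symmetrically. The sketch below uses made-up parameters (peak year 1845, arbitrary width and resource size), not a fit to the actual whale-oil data.

```python
import math

def hubbert(t, urr=1.0, t_peak=1845.0, width=10.0):
    """Hubbert (logistic-derivative) production rate at year t.

    urr   : ultimately recoverable resource (arbitrary units)
    width : characteristic time scale in years
    """
    x = math.exp(-(t - t_peak) / width)
    return (urr / width) * x / (1.0 + x) ** 2

# Production peaks at t_peak (rate urr / (4 * width)) and is symmetric
# about the peak year.
print(f"{hubbert(1845.0):.4f}")                             # 0.0250
print(hubbert(1830.0) < hubbert(1845.0) > hubbert(1860.0))  # True
```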

Figure 3.

Figure 3 The production of whale oil in the 19th Century follows a classic Hubbert curve, with production dwindling as the stock of whales in the oceans was depleted. Chart source: Ugo Bardi.

This represents one of the great energy substitutions of human society. It was to be short-lived since electric lighting would soon take over from kerosene where the electricity was provided by combusting coal. Note that I use the term substitution and not transition since there was a direct substitution of one energy source for the other and whales ceased to be a part of Man’s energy supply mix. Without the discovery and use of rock oil it seems likely that whales would have become extinct in the 19th Century.

Saving The Forests

Prior to the mid-nineteenth Century, the main fuel source used by Man was forest wood (Figure 4). Wood (biomass) continues to be an important fuel today throughout the developing world.

Figure 4.

Figure 4 The development of Man’s energy supplies has seen the sequential addition of coal, oil, gas, hydro and nuclear to the energy mix. In discussing energy transition, it is wrong to assume that a new energy source replaces what went before. The main pattern is one of addition, not substitution or replacement. Data from Vaclav Smil and BP as compiled by Rembrandt Koppelaar.

Population growth and progressive industrialisation throughout Europe led to wholesale deforestation of the Continent (Figure 5). And then in the mid-nineteenth Century we learned how to burn and mine coal on a grand scale powering the industrial revolution. We can but speculate what might have occurred had this not happened. It seems likely that Europeans would have spread themselves around the globe plundering resources on an even grander scale than took place at that time.

Figure 5.

Figure 5 Data on deforestation is hard to find. This slide from a surprisingly interesting presentation by Sir Mark Walport shows the impact of 2500 years of felling trees in Europe. It was to a large extent the quest for natural resources that sent Europeans around the World in the centuries that followed and that sent Adolf Hitler East in 1941. Our current system of international trade and financial deficits may be imperfect but it seems preferable to the system of plunder that it replaced.

What did happen is that we learned to use coal, then oil and natural gas, and ultimately nuclear power. Harnessing the power of fossil fuels provided Man with energy slaves to do work on our behalf. It led directly to the progressive development of the highly sophisticated society we live in today, where life expectancy, health and comfort far exceed the levels of 100 years ago for billions of souls. It allowed us to achieve this whilst largely abolishing slavery and ending our dependency on forest wood as a fuel.

When FF run scarce in a country, this can cause great harm to the environment, as we saw in Indonesia in 2003. Indonesia was once a member of OPEC and exported oil. But owing to population growth, increased prosperity and then a downturn in oil production, Indonesia found itself facing oil imports. Donning a Green cloak, Indonesia turned to biofuels in the form of palm nut oil, and set about burning virgin rain forest and orang-utans to make way for the plantations.

Those who fail to see the staggering benefits brought to Man through using FF are blinded by dogma. Those who argue that FF should be phased out are making an argument to end prosperity for all.

The Population Paradox

Whilst I argue here, as many others have argued before me, that FF have enabled the human race to flourish, we have been so successful that a population of over 7 billion souls on planet Earth is now viewed by many as the greatest threat to our continued existence. It is certainly true that there is a multitude of problems, unevenly distributed about the Earth: water shortages, food shortages and malnutrition, air and water pollution, deforestation, social and civil unrest, spreading conflict, displaced persons, and infectious diseases and their spread. These are all problems caused by too many people combined with inadequate social, political and economic structures to deal with a rapidly changing world. While certain aspects of air pollution in China and the plastics pollution of ocean gyres may be attributed directly to FF, by and large FF are the solution to these problems, not their cause: creating clean water supplies and sanitation requires energy, as does food production. It is a lack of energy and other resources that lies at the heart of many of the major issues causing real hardship around the world. It is therefore a mark of extraordinary ignorance and stupidity to believe that withholding these resources may lead to solutions.

The problem, of course, is that we have become too successful at resolving these issues for many, and that inevitably leads to more, not fewer, people and a compounding of the very problems we are attempting to resolve. Population controls are a subject ducked by virtually all OECD political leaders and organisations. Overpopulation and poverty lie at the heart of many of the major issues confronting humanity, and yet no one is prepared to confront them. It is certainly an extremely difficult issue, and not one easily solved.

My own view is that natural evolutionary forces will see global population peak this century, followed by decline; that is what the UN central forecast shows. This may happen via the spread of prosperity in some parts and via the spread of deprivation, disease, hunger and war in others. But what is widely viewed as a population problem will resolve itself in response to various pressures. A falling global population will present a whole new set of problems for humanity, which we will address when the time comes. There will be a growing acceptance that economic growth, welfare, free healthcare and pensions were all temporary aberrations made possible by abundant and cheap FF. As those resources run scarce this century, humanity will struggle to maintain the living standards of the past. There is no need to artificially create a major trauma for humanity today by a forced withdrawal from the FF era upon which virtually all of our prosperity is based.

An argument can be made for leaving some FF for future generations but that is not the argument being made by Green anti-capitalists.

Past Energy Transitions

Finally, a quick note about past energy transitions as illustrated in Figure 4. Let me repeat what Pope Francis had to say:

We know that technology based on the use of highly polluting fossil fuels – especially coal, but also oil and, to a lesser degree, gas – needs to be progressively replaced without delay.

The first key observation from Figure 4 is that energy transition happens via addition, not substitution. In 150 years we have not replaced any of our major sources of energy with another at the system level. At the smaller scale, oil-fired power generation may have been replaced by coal and then by natural gas, but that merely freed up some oil or coal for use elsewhere. The second key observation is that "energy transition" has normally followed thermodynamic and economic laws, where the new offered advantages over the old. It is therefore, in my opinion, sheer folly to believe and to propose that FF-based technologies can be replaced en masse by much inferior, environment-wrecking, more expensive renewable energy flows.

Figure 6 Millions visit the gold-plated Vatican every year, arriving in jet aircraft from all over the world, consuming vast amounts of oil and, according to Pope Francis, creating risks to the stability of Earth’s atmosphere.


Certain readers may read my bio and then seek to make scurrilous claims that I am somehow wedded to and supported by the FF industries. This is not true. I do, however, have holdings in certain oil companies, and I do object to Green pressure groups trying to talk down the share prices of energy companies in general. My analysis and opinions are based upon my understanding of thermodynamics, economics and human society. Comments will be heavily moderated. I cannot lay claim to the truth, and so if anyone can demonstrate in a quantitative way how we can migrate away from FF to alternatives with a net benefit to society, then please make your case.

I made my alternative energy plan some time ago:

Energy Matters’ 2050 pathway for the UK

EDITORS NOTE: This column originally appeared on Energy Matters. The featured image is of Prometheus, the deity in Greek mythology best known as the creator of mankind and its greatest benefactor, who gifted mankind fire stolen from Mount Olympus.

Electricity from New Wind Three Times More Costly than Existing Coal

WASHINGTON – The Institute for Energy Research released a first-of-its-kind study calculating the levelized cost of electricity from existing generation sources. Our study shows that, on average, electricity from new wind resources is nearly four times more expensive than electricity from existing nuclear and nearly three times more expensive than electricity from existing coal. These are dramatic increases in the cost of generating electricity. This means that the premature closure of existing plants will unavoidably increase electricity rates for American families.

Almost all measures of the cost of electricity assess only the cost of building new plants – until now. Using data from the Energy Information Administration and the Federal Energy Regulatory Commission, we offer a useful comparison between existing plants and new plants.
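The levelized-cost comparison described here rests on a standard formula: discounted lifetime costs divided by discounted lifetime generation. A minimal Python sketch, using made-up numbers rather than figures from the study, shows why an existing, already-built plant scores far lower than a new build that must still recover its construction cost:

```python
def lcoe(costs, energy_mwh, discount_rate):
    """Levelized cost of electricity ($/MWh): discounted lifetime
    costs divided by discounted lifetime generation."""
    disc_costs = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy_mwh))
    return disc_costs / disc_energy

YEARS = 20
RATE = 0.07  # illustrative discount rate

# Existing plant: capital already recovered, so only O&M/fuel costs remain.
existing = lcoe([30e6] * YEARS, [1.0e6] * YEARS, RATE)

# New plant: hypothetical $150M construction cost in year 0, then the
# same operating costs and output as the existing plant.
new_build = lcoe([150e6] + [30e6] * YEARS, [0] + [1.0e6] * YEARS, RATE)

assert new_build > existing  # sunk capital is what separates the two
```

With identical operating costs and output, the existing plant's LCOE is simply its operating cost per MWh, while the new build carries the extra burden of recovering its capital; this is the gap the study quantifies with real FERC and EIA data.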

America’s electricity generation landscape is rapidly changing. Federal and state policies threaten to shutter more than 111 GW of existing coal and nuclear generation, while large amounts of renewables, such as wind, are forced on the grid. To understand the impacts of these policies, it is critical to understand the cost difference between existing and new sources of generation.

The following chart shows the sharp contrast in the cost of electricity from existing sources vs. new sources:

[Chart: levelized cost of electricity from existing vs. new generation sources]

Click here to view the full study.

This study was conducted by Tom Stacy, a former member of the ASME Energy Policy Committee, and George Taylor, PhD, the director of Palmetto Energy Research. The source of the calculations used in this study is a compilation of data reported by the generators themselves to FERC and EIA.

Israeli Populist Protest Against Offshore Gas Development Deal Misguided

Last week, Israeli PM Netanyahu effectively declared the offshore gas deal with Delek Partners and US-based Noble Energy, Inc. a national security issue. This was the conclusion reached after discussions with the development partners and economic analysis of other major gas developments, resulting in a proposed framework to replace a series of collapsed deals with the Israel Antitrust Authority. He and his Energy Minister Yuval Steinitz may have a daunting task ahead next week contending with coalition partner, Economics Minister Aryeh Deri of Shas, and Populist/Green opponents of the new deal. They support the position of the outgoing General Director of the Israel Antitrust Authority, Dr. David Gilo, who resigned on May 26th in objection to the new deal, saying he would not leave until August 2015. We have written about the offshore gas developments in several New English Review (NER) articles and Iconoclast posts. See: “Could Israel Lose the Energy Prize in the Eastern Mediterranean,” NER (Jan. 2015). We specifically pointed out the radical populist actions of Dr. David Gilo, who did not appear to have the requisite understanding of energy market dynamics, let alone geo-political realities or the risk capital requirements to develop and distribute gas.

Last December Gilo reneged on a March 2014 compromise deal with the Delek-Noble development partners, instead accusing them of being a duopoly operating in restraint of trade. He sent the development partners a consent decree forcing the sale of interests in the offshore fields they had provided the risk capital to develop. Thus began the unraveling of a potentially important development of significant natural gas reservoirs in Israel’s offshore Exclusive Economic Zone in the Eastern Mediterranean Levant Basin. Delek and Houston-based Noble Energy had spent over $6 billion before bringing in the 9 trillion cubic feet (tcf) Tamar field in 2009 and the 21 tcf Leviathan field in 2011. Delivery of gas from the Tamar field began in 2013, while the significantly larger Leviathan field might be brought on stream in 2018. When the Knesset adopted the revised royalty and tax scheme proposed by the Sheshinski Committee in 2013, Israel looked like it might be on the path to a bright economic future. That included the possibility of earning upwards of $70 billion in future revenues funding an authorized Sovereign Wealth Fund. The tax revenues from gas sold for domestic use and export would substantively alleviate social program and national security budgetary burdens. That was also evident to former Reagan National Security aide Prof. Norman Bailey of Haifa University and to Caroline Glick, deputy managing editor of the Jerusalem Post, in an op-ed published on Thursday, July 2, 2015, “Israel’s Populist Energy Crisis.”

On Saturday night, July Fourth, the Jerusalem Post reported thousands from the student Green Course movement protesting the new gas deal in Tel Aviv’s Rabin Square, in Jerusalem and Beersheba, and at the PM’s home in Caesarea. According to the Post, “The activists demanded lower gas prices and increased use of gas in domestic factories, accusing the government of bending to foreign interests.”

The new proposal, for which Netanyahu is poised to secure cabinet approval on Monday, July 6th, has the following terms according to the Post:

Under the government’s gas outline, Delek subsidiaries Delek Drilling and Avner Oil Exploration would have to exit the Tamar reservoir, whose gas began flowing to Israel in March 2013, selling their assets there within six years.

Houston-based Noble Energy could remain the basin’s operator, but would need to dilute its ownership from the current 36 percent share to 25 percent within the same time frame.

The Delek subsidiaries and Noble Energy would be required to sell their holdings in two much smaller offshore reservoirs, Karish and Tanin, within 14 months. Because the buyer would be required to sell gas only to Israel, export allocations intended for these reservoirs would be transferred to Leviathan, according to the outline.

In 2013, the cabinet decided to cap exports at 40% of production, and pipelines designated for export will not be entitled to tax benefits guaranteed to local pipelines, as mandated by the Sheshinski Committee, whose recommendations on hydrocarbon taxation became law in 2013.

Glick, in her Post op-ed, suggested that the hit Israel had taken in foreign direct investment had a lot to do with the misguided populist economic doctrine that pervades the Zionist Union, Yesh Atid, some coalition partners and the Knesset opposition. From my own investment banking exposure in Israel, these populist economic views are a reflection of the founding Labor Socialist parties and the Histadrut; the latter owns enterprises that have never been effectively privatized. It also reflects poorly on a country that prides itself on the rule of law that this does not extend to honoring contractual obligations. She argues that this is reflected in the downward trends in foreign direct investment cited in the most recent UN Conference on Trade and Development report:

In 2014 Foreign Direct Investment in Israel was 46 percent below levels in 2013, dropping from $11.8 billion to $6.4bn. During the same period worldwide direct foreign investment dropped a mere 16%, meaning the drop in investment in Israel was nearly three times the global average.

Israel had also demonstrated that it was acceptable for foreign partners like Noble Energy to invest billions in offshore energy development, but that if the investment paid off, the terms could be changed to deny risk investors an appropriate return. Moreover, the hue and cry in Israel that the duopoly of the Delek-Noble gas partnership could result in price gouging was false. In fact, since the Tamar field came on stream, average gas prices in Israel have dropped, resulting in both lower energy and manufacturing costs.

Noble Energy’s Israel manager Binyamin Zomer reinforced Glick’s observations with these comments cited in Globes Israel Business:

Let’s make it clear. We didn’t break the law, and we didn’t prevent competition. What we did do was to succeed beyond the expectations of the government that invited us to invest in Israel. Israel was happy, it seems, for Noble Energy to risk its money in Israel, as long as it was unsuccessful. There is a monopoly – that’s not a crime. Let’s understand why this happened. The company agreed to invest its money where other companies refused (and we won’t apologize for that); the supply of gas from Egypt ended in 2011 (and that was not our fault); other companies with no experience found no gas (again, not our fault); and the incessant interference by regulators with no background in oil and gas drove every gas company away, except for Noble Energy.

Glick offered the following proposals to rectify the impasse:

If we are to correct the damage – to our energy market specifically and to the Israeli economy overall – there is only one path to take. The Knesset must abrogate the 2011 windfall profits law and end all attempts to define the Delek-Noble partnership as a monopoly while seeking new, creative ways to seize their profits.

Then, the Knesset must pass a law that will protect investors from attempts to retroactively change the terms of operating licenses they receive from the State of Israel.

Israel has enough problems with the anti-Semitic boycott movement that is growing by leaps and bounds. We need to curb our populist tendencies and stop making those who want to invest in Israel feel that they are fools to do so.

As the late Hollywood and radio personality of my youth, Bill Bendix, might opine, “this is a rotten development.” Israel’s obsessive democracy makes the country prone to divisive squabbles, in this case possibly resulting in the loss of a glittering economic future. This latest Knesset speed bump doesn’t bode well for Israel achieving first-world economic preeminence. As we have written innumerable times, these Israeli populists are economically uninformed genetic socialists who have no understanding of geo-political resource realities, commodity market dynamics, or the risk-reward relationships undergirding energy development. We blame Dr. Gilo, whose dictatorial arrogance in reneging on the original compromise deal with the Delek Group and Noble Energy was nothing but political grandstanding. He was awaiting the victory of the Zionist Union and populist parties like Yesh Atid that did not occur on March 17, 2015. He should never have been permitted to remain as the radical leftist Antitrust Authority General Director until his departure in August after he rejected the Netanyahu government’s replacement deal on May 26th. No self-respecting energy development group will invest a shekel in Israel’s energy resources while the country behaves no better than a third-world country that doesn’t honor agreements. Israel may have just screwed itself out of a future source of wealth that would alleviate social disparities and the budgetary burdens of national defense. Prime Minister Netanyahu is now caught in the nearly impossible task of pushing this new agreement through on July 6th, given the makeup of his ruling coalition. The fictional book and film character Forrest Gump has the last word on those populist protesters in Israel: “stupid is as stupid does.”

EDITORS NOTE: This column originally appeared in the New English Review. The featured image is of Tel Aviv offshore gas deal protesters, July 4, 2015. Source: Jerusalem Post.

The Key to Winning Hearts and Minds in the Energy Debate

Last year I got the chance to speak to a group of dozens of presidents and CEOs in the fossil fuel industry. In advance of the speech, I decided to synthesize, as concisely as I could, everything I knew about the case for fossil fuels—and how to make it incredibly persuasive. I wanted them to be able to start doing the same after my talk.

The essay was so well-received I decided to release it to the public as a short pamphlet called “The Moral Case for Fossil Fuels: The Key to Winning Hearts and Minds.”

It’s now been read by hundreds and possibly thousands of people, including senior energy industry executives. As you can probably tell from the title, it was optioned for a book deal by Penguin and slowly transformed into my new book, The Moral Case for Fossil Fuels, now available on Amazon.

Obviously, this piece is written for energy industry communicators, but it has valuable lessons for anyone who agrees with the ideas in the materials I’ve sent you so far and wants to make an impact on the fossil fuel debate. (It’s also on the shorter side of what I’ve sent you so far.)

After you’ve read it, feel free to share it far and wide, though I recommend you send friends, family and colleagues there, so they can get all the other materials, too.