
Welcome to the American West at Risk Blog

The American West at Risk chronicles the road our nation has taken to its current catastrophic environmental state. The authors tour the U.S. to discuss challenges our nation faces and examine viable solutions. Responding to requests for more information, the environmental team has launched this blog.

Lures To Energy Complacency Part II

July 2, 2012

This is Part II of a 5-part blog that addresses the various pressures brought to bear on American society to believe that we have an abundance of oil and gas, and therefore no reason to heed alarmists who warn of impending shortages. “Energy is the key which unlocks all other natural resources.”1 It is essential that we not delude ourselves about its availability so that we can prepare ourselves for the future.

If snake oil2 could be refined into gasoline and diesel fuel, the nation would have no foreign oil dependency problem. It is always plentiful and cheap. Snake oil is the fuel and balm that lures us to curse those we imagine are responsible for high gasoline prices, to grasp at “vast new” fossil energy sources, to ignore peak oil, and to banish fears of climate change, all the while assuring us that technology will solve any impediment to our god-given right to enjoy life as we always have in the past.

1Eugene Ayres and C.A. Scarlott, Energy Sources – the Wealth of the World (McGraw-Hill, 1952)

2The term “Snake oil”, folk etymology has it, originated as a corruption of Seneca Oil. Seneca Indians supposedly had been observed to use crude oil from surface seeps as a liniment. Whatever its origin, it has become a generic name for panaceas or miraculous remedies whose ingredients, unknown to the buyer, are mostly inert or ineffective. That is the sense in which I use it (Wikipedia, http://en.wikipedia.org/wiki/Snake_oil; see also Rosie Mestel, Snake Oil Salesmen Weren’t Always Considered Slimy, Los Angeles Times, July 1, 2002)


Part II. The Oil Boom(let)

Of course, if we produce enough of our own oil, surely we can control our own gasoline prices—can’t we? There is a virtual tsunami of claims issuing directly and indirectly from snake oil country that we have entered a new golden age of oil and gas production from our very own lands and waters that will erase any worries we may have about future supplies (note that many of the claims below are quoted, not made, by authors of the references cited):

“Across the country, the oil and gas industry is vastly increasing production, reversing two decades of decline. Using new technology and spurred by rising oil prices since the mid-2000s, the industry is extracting millions of barrels more a week, from the deepest waters of the Gulf of Mexico to the prairies of North Dakota.”1

“Forget declining oil, there is a new global oil rush. The US has an estimated 2 trillion barrels of shale oil reserves—about 70% of the world’s total and eight times the oil reserves of Saudi Arabia.”2

Senator James Inhofe of Oklahoma stated, in a video filmed at the Senate Environment and Public Works Committee, that “America could be energy independent in a matter of months if Government would just get out of the way”3

Jerry Schuyler, President and Chief Operating Officer of Laredo Petroleum, “cited some experts who say by 2015 the nation will be producing 9.1 million barrels, equal to its 1970 peak.”4

“Surging production in shale formations has transformed the U.S. energy landscape, flooding the market with gas and boosting domestic oil production by 14 percent from three years ago after dropping by a third in the previous 17 years.”5

Citigroup projected U.S. oil production to “rise from 5.8 million barrels per day actual production in 2011 to 7.5 in 2015, and 10.2 in 2020”, the latter creating a new U.S. production peak.6

So, how good is the “good news”? To help put things in perspective, Figure 1 shows U.S. oil production from 1981 to 2011.7 The graph includes oil produced from three types of reservoirs: conventional reservoirs, with properties that allow high to moderate rates of flow of oil through the reservoir rocks to the well head; unconventional reservoirs, such as cemented sandstones and coal beds with low flow rates, and shale formations with very low flow rates;8 and other unconventional reservoirs posing particularly difficult access, such as deep-water locations.

Figure 1. U.S. oil production, 1981-2011.

The high point on the graph in the 1980s includes the effects of the Prudhoe Bay super giant field in Alaska, the largest field ever discovered in the U.S., which represents a significant bump on the curve declining from peak production (9.6 million barrels of oil per day, or bopd, reached in 1970), and of deep-water production, which had a slight increase in 2008. Oil from the Bakken shale9 in North Dakota contributed most of the thin sliver with a tiny 2010-2011 bump shown in purple, and the Eagle Ford and other shales in Texas contributed to the small 2010-2011 rise in Texas total production. Combined daily production from the Bakken and Eagle Ford shales in early 2012 (~564,000 bopd) is 3% of current U.S. daily oil consumption (~18.5 million bopd), and that small production substantially exceeds contributions from other similar sources (Figure 2).10
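For readers who want to check the arithmetic, here is a minimal sketch (in Python, using only the approximate production and consumption figures quoted above) of the 3% calculation:

# Rough check of the shale-oil share of U.S. consumption,
# using the approximate figures quoted in the text.
bakken_eagle_ford_bopd = 564_000      # combined early-2012 output, barrels of oil per day
us_consumption_bopd = 18_500_000      # approximate U.S. daily oil consumption

share = bakken_eagle_ford_bopd / us_consumption_bopd
print(f"Shale share of consumption: {share:.1%}")  # -> about 3.0%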

Oil production from shale formations is the shaky base of the snake oil purveyors’ claims of “vastly increasing oil production” that will soon make “the U.S. the world’s leading oil producer,” because the major sources of U.S. oil (conventional and offshore reservoirs) are in decline.

Figure 2.

Small increases in total production, such as those now being made, can affect the decline rate because it has been nearly flat for some time.11 The decline curve is being controlled in part by production from nearly 350,000 stripper wells—wells producing less than 10 barrels of oil per day (Figure 3); in 2008 the range of oil production from stripper wells was 0.3 to 6.5 bopd, with an average of 1.9 bopd.12 Stripper wells mostly represent the dregs of nearly exhausted oil fields, kept alive by high prices and expensive recovery enhancement techniques.
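A quick back-of-the-envelope sum (a sketch using only the well count and average yield cited above) shows why these marginal wells nonetheless matter to the national total:

# Aggregate output of U.S. stripper wells, from the figures in the text.
stripper_wells = 350_000
avg_bopd = 1.9                       # average per-well production, 2008

total_bopd = stripper_wells * avg_bopd
print(f"Aggregate stripper-well production: {total_bopd:,.0f} bopd")
# -> about 665,000 bopd, comparable to the combined Bakken + Eagle Ford
#    output quoted earlier.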

Figure 3.

Production in 2011 from the Bakken and Eagle Ford formations represents a large increase over preceding years, and it is likely to increase more in coming years. This is propelled in part by high oil prices13 and in part by supplanting conventional vertical drilling with horizontal drilling and artificial fracturing (hydraulic fracturing, aka fracking),14 which extract the oil more efficiently. Nevertheless, January 2012’s increased production from the Bakken came from 6,617 wells yielding an average production of only 82 bopd.

Shale formations, which give up their oil with difficulty, are important sources that improve our production outlook, at least in the near term. But “the shale revolution did not begin because producing oil and gas from shale was a good idea but because more attractive opportunities [for conventional oil and gas] were largely exhausted.”15 Drilling and completing horizontal wells is much more expensive, ranging to 3 times or more the cost of conventional vertical wells.16 Not only that, but the costs of drilling and completion are incurred whether the well turns out to be good, bad, or indifferent. With conventional wells, a well’s potential is known when the target zone is reached; bringing a good well on line takes millions of dollars more in completion costs. Horizontal fracked wells, on the other hand, cannot be assessed until completed—that is, until after they have been fracked, which accounts for a substantial fraction of total well costs.17 One bit of information that is exceedingly difficult to come by is the number and location of failed wells, either dry holes or short-lived wells, in shale plays.

Many shale formations have large geographic extents (Figure 4). On the very shaky assumption that oil and/or gas are more-or-less uniformly distributed in the formations, a great “black gold rush” quickly developed to gain lease access to the mother lodes. It turned out that the assumption was wrong: data from thousands of wells in three shale gas plays show productive areas (“sweet spots”) surrounded by large, much less productive areas.18 Sweet spots are readily revealed by drilling patterns (Figure 5); in the oil patch they are like the occasional huge gold nuggets that dominated gold production figures in the California gold rush. Like the nuggets, sweet spots in shale deposits are found early, and the search progresses quickly to more and more marginal occurrences.19

Figure 5. Red line is the border of the Bakken shale.

A major problem with horizontal fracked oil wells is that they don’t last very long. Production declines very rapidly, by as much as 90% in the first year. A typical depletion curve for wells in the Bakken Formation (Figure 6) indicates that a well with initial production of 1,000 bopd is down to 200 bopd at the end of less than two years, and down to 100 bopd in 5 years. Even worse, 1,000 bopd initial production is uncommon; average wells come in at something like half that or less, still with high decline rates.20 Good data from the Eagle Ford shale indicate a 1,000 bopd well is typically doing 100 bopd or less after one year.21

Figure 6. Typical Bakken well production decline curve.

High decline rates in effect put production from tight formations on a drilling treadmill to maintain or increase reserves: new wells must be drilled continuously to counterbalance declining older wells with wells early in their production history.
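To make the depletion arithmetic concrete, here is a minimal sketch of a hyperbolic (Arps-type) decline curve, with parameters chosen to roughly match the Bakken figures quoted above (1,000 bopd initially, ~200 bopd near two years, ~100 bopd at five years); the parameter values are illustrative, not fitted to actual well data:

# Arps hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b).
# With b = 1 (the harmonic case) this reduces to q(t) = qi / (1 + Di*t).
qi = 1000.0  # initial production, bopd (illustrative)
Di = 2.0     # initial decline rate per year (illustrative)

def rate(t_years):
    """Production rate (bopd) after t_years of harmonic decline."""
    return qi / (1.0 + Di * t_years)

for t in (1, 2, 5):
    print(f"year {t}: {rate(t):.0f} bopd")
# year 1: 333 bopd; year 2: 200 bopd; year 5: 91 bopd;
# close to the curve described in the text.

The steep front end of this curve is the drilling treadmill: sustaining field output requires a continuous stream of new wells in their first, briefly prolific, year.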

We do have precedents for the likely future of horizontally drilled, fracked oil wells. The first significant oil reservoir to be developed with hydraulically fractured horizontal wells was the Austin Chalk, which crosses southern Texas, Louisiana, and Mississippi. This is a tight limey mudstone that produces oil from complex natural fracture systems. In the early days of vertical drilling, production enhancement consisted of injecting acid solutions to increase near-fracture permeability. In 1990 horizontal drilling and fracking began rapidly to supplant vertical wells. Initial production of wells was high, reaching 840 barrels of oil plus gas per day, but declined by about 90% over 6 months to a year and then leveled off to a decline rate averaging about 35% per year.

The Austin Chalk enjoyed the same promotional hype as shale plays today, but it is now for the most part dead, even at high oil prices. The reason is that the high rates of production could not be maintained by continuous drilling of new wells, because drillers ran out of promising places to drill the formation.22 A map of the wells drilled and now mostly abandoned “is amazing: one long horizontal well drilled right next to another….a solid slash of black oil well symbols” extending over more than 5 million acres.23 This eventually will be the fate of the Bakken, Eagle Ford, and other shale oil plays, and it happens much more quickly than with conventional sources, and at much lower cumulative production. Shale oil is not the panacea hyped by snake oil purveyors, but it does give us a little breathing room. To “Rockman,” “The real question is will we make good use of this temporary stay of execution.”

Sources and Notes

1Clifford Krauss and Eric Lipton, Inching Toward Energy Independence In America, New York Times, March 22, 2012

2Alan Kohler, The Death of Peak Oil, Crikey, 29 February 2012

3”Tstreet”, The Oil Drum, Drumbeat comments, March 23, 2012. Contributors to The Oil Drum’s posts are generally identified by nicknames, such as Tstreet.

4Mella McEwan, Speakers Detail Potential Permian Basin Still Holds, mywesttexas.com, April 5, 2012

5Dave Cohen, An unconventional play in the Bakken, ASPO-USA / Energy Bulletin, April 16, 2008

6”Heading Out”, A Review of the Citigroup Prediction on US Energy, The Oil Drum, April 1, 2012

7”Gail the Actuary”, The Myth That the US Will Soon Become an Oil Exporter, The Oil Drum, April 20, 2012. Data from the Energy Information Administration

8Flow rates of oil through reservoir rocks are controlled dominantly by permeability, which is a measure of the interconnectedness of pore spaces in which the oil resides. The lower the permeability, the more difficult it is for fluids to migrate through the rock.
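Permeability enters production estimates through Darcy’s law. A minimal sketch (one-dimensional form; all numerical values are illustrative assumptions, not data from this post) shows why flow rates from shale are so much lower than from conventional rock:

# Darcy's law (1-D): Q = k * A * dP / (mu * L)
# Q: volumetric flow rate, k: permeability, A: cross-sectional area,
# dP: pressure drop over flow length L, mu: fluid viscosity.
def darcy_flow(k_m2, area_m2, dp_pa, mu_pa_s, length_m):
    return k_m2 * area_m2 * dp_pa / (mu_pa_s * length_m)

MILLIDARCY = 9.87e-16  # permeability unit, in square meters

# Identical geometry, pressure drop, and fluid; only permeability differs:
conventional = darcy_flow(100 * MILLIDARCY, 1.0, 1e7, 1e-3, 100.0)  # ~100 mD rock
shale = darcy_flow(1e-4 * MILLIDARCY, 1.0, 1e7, 1e-3, 100.0)        # ~100 nanodarcies
print(f"Flow ratio, conventional vs. shale: {conventional / shale:,.0f}x")  # -> 1,000,000x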

9The Bakken shale comprises thin (6- to 15-foot-thick) sandstones, from which most of the oil is extracted, sandwiched between thicker shales

10The Monterey play shown in Figure 4 is said to have more than 4 times the estimated technically recoverable oil of the Bakken and Eagle Ford (U.S. Energy Information Administration, Review of Emerging Resources: U.S. Shale Gas and Shale Oil Plays, July 2011), but so far development results are mixed. It is currently being promoted as a “vast reservoir, containing billions of barrels of oil” (John Cox, Monterey Shale Brightens Kern’s Oil Prospects, Bakersfield Californian, June 9, 2012)

11While the US has shown a small increase in crude oil production, up from the pre-hurricane rate of 5.4 mbpd in 2004 to 5.7 mbpd in 2011, a net increase of 0.3 mbpd, this is virtually a rounding error in the context of the multimillion barrel per day declines that we have seen in Global Net Exports of oil (westexas, The Oil Drum, Drumbeat comments, March 22, 2012; see also, westexas, The Oil Drum, Drumbeat comments, August 14, 2011)

12Interstate Oil and Gas Compact Commission, Marginal Well Report, 2009. For general information on U.S. stripper wells: “Heading Out”, Tech Talk – American Stripper Well Production, The Oil Drum, May 22, 2011

13Oil prices that only recently approached $104 per barrel have been in decline in May and June 2012, and in late June fell below $80 per barrel, threatening new development of shale oil. A. E. Berman considers shale oil development to be commercial only at prices above $80/bbl (Individual statements in support of ASPO-USA’s letter to Energy Secretary Steven Chu, Energy Bulletin, October 26, 2011)

14Drilling a horizontal well starts with a vertical well drilled down close to the target zone. The well is then deviated at an angle to intersect the pay zone. The trick is to keep the well within the pay zone for distances of up to two miles and more. For example, the pay zone in the Middle Bakken is only a 6- to 15-foot-thick layer of sandstones, siltstones, and carbonates sandwiched by shale. Keeping the drill in so thin a zone for a distance of two miles at a depth of about 11,500 feet takes a lot of skill. Fracking is carried out by pumping water, sand, and a witch’s brew of chemicals, some toxic, some carcinogenic, into the horizontal lateral at pressures high enough to overcome the tensile strength of the rock, thus fracturing it and promoting release of the gas. The entrained sand, called proppant, is supposed to hold the fractures open as the fluid is allowed to drain away—otherwise the pressure of overlying rock would collapse them. The long holes cannot be fracked in one step, so the well is sequentially fracked using plugs to isolate sections for treatment. This takes a lot of water—3 to 5 million gallons per well—and shale gas wells use more than 4 million pounds of proppant per well. A 4-million-gallon fracking operation uses from 80 to 330 tons of chemicals. Simulations of the process on the internet always show fracturing limited completely to the pay zone, a convenience not experienced in real fracking, during which fractures may extend in unwanted directions for unwanted distances, causing unintended problems (Earthworks, Hydraulic Fracturing 101, undated, http://www.earthworksaction.org/issues/detail/hydraulic_fracturing_101)
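As a rough consistency check on those quantities (a sketch assuming U.S. short tons and treating the fluid as plain water), the chemical additives work out to a fraction of a percent to a few percent of the injected fluid by mass:

# Chemical additives as a fraction of fracking-fluid mass,
# from the per-well quantities quoted in the note above.
GALLON_KG = 3.785        # mass of a gallon of water, kg
SHORT_TON_KG = 907.2

fluid_kg = 4_000_000 * GALLON_KG   # a 4-million-gallon fracking operation
for tons in (80, 330):
    chem_kg = tons * SHORT_TON_KG
    print(f"{tons} tons of chemicals: {chem_kg / fluid_kg:.2%} of fluid mass")
# 80 tons  -> ~0.48%
# 330 tons -> ~1.98%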

15Arthur E. Berman, After the Gold Rush: A Perspective on Future U.S. Natural Gas Supply and Price, The Oil Drum, February 8, 2012. The shale deposits have long been known as source rocks for much more concentrated oil and gas deposits. Oil and gas seeping from the source rocks accumulated in traps capable of holding much larger quantities of oil and gas. The small quantities retained in the source rocks were not of commercial interest so long as much larger sources could be exploited.

16Tom Whipple, The Peak Oil Crisis: Parsing the Bakken, Falls Church News-Press, March 21, 2012. Drilling and completing a horizontal well in the Middle Bakken formation, the main pay zone, costs $5 million or more

17”Rockman”, The Oil Drum, Drumbeat comments, February 8, 2012

18Ian Urbina, Insiders Sound An Alarm Amid A Natural Gas Rush, New York Times, 25 June 2011. A.E. Berman estimates that core productive areas of individual shale deposits that have potential for commercial production occupy as little as 10 to 20% of deposits (ASPO-USA, Geologist Berman: Shale Gas Reserves ‘Substantially Overstated’, Energy Bulletin, July 19, 2010)

19Derik Andreoli, The Bakken Boom – A Modern-Day Gold Rush, The Oil Drum, December 12, 2011. As gold production declined, new technology was invoked in the form of hydraulic mining. This did little to improve production and a great deal to harm the environment. This is being repeated in spades with modern open-pit gold mining. The oil patch equivalent is a countryside forever marred by thousands of football-field-sized well pads, pipelines, roads, and fracking sand pits, and the wreckage of Canada’s boreal forests by open-pit mining of tar sands, not to speak of the less visible pollution of air and water

20Advertisements currently running on The Oil Drum’s Drumbeat offer Bakken wells for sale. Potential buyers should be interested in knowing the initial production rate and how long a well has been producing

21”Rockman”, The Oil Drum, Drumbeat comments, October 31, 2011. Note that some industry and investment firms quote substantially higher initial production figures for the Bakken, but they do so in “barrels of oil equivalent,” which includes natural gas much of which is burned off (flared) at the well head and has a much lower value than oil; at current prices the gas has no commercial value in North Dakota

22”Tstreet”, The Oil Drum, Drumbeat comments, August 14, 2011

23“Rockman”, The Oil Drum, Drumbeat comments, August 14, 2011 


Lures to Energy Complacency Part I

July 1, 2012

This is Part I of a 5-part blog that addresses the various pressures brought to bear on American society to believe that we have an abundance of oil and gas, and therefore no reason to heed alarmists who warn of impending shortages. “Energy is the key which unlocks all other natural resources.”1 It is essential that we not delude ourselves about its availability so that we can prepare ourselves for the future.

If snake oil2 could be refined into gasoline and diesel fuel, the nation would have no foreign oil dependency problem. It is always plentiful and cheap. Snake oil is the fuel and balm that lures us to curse those we imagine are responsible for high gasoline prices, to grasp at “vast new” fossil energy sources, to ignore peak oil, and to banish fears of climate change, all the while assuring us that technology will solve any impediment to our god-given right to enjoy life as we always have in the past.

Part I. The Price of Gasoline: It’s the President’s Fault

There’s a lot of anger right now about the cost of filling the tanks of our cars, which in my neighborhood was recently knocking on $4.50 per gallon. Snake oil purveyors like to put the face of President Obama on the gas pump, assuring us that we have only to replace him next November and the problem will vanish. The message is also cast in words:3 Republicans sought to keep the pressure on President Obama over high gas prices Saturday (April 7, 2012) with a radio speech claiming his “lack of leadership” is creating an “energy crisis.” “Americans are paying the price for [the President’s] failed policies, finding fewer jobs, higher gas prices, and less opportunity,” said Oklahoma Gov. Mary Fallin in the weekly GOP address.

Figure 1 suggests, however, that it might be better if we shut up and count our blessings (depending on one’s point of view), because Americans pay far less for gasoline than our European friends and most of the rest of the world. The difference lies largely in taxes paid on the gasoline, not in the base cost of gasoline, which is universally highly subsidized.4

Figure 1. Gasoline Prices, U.S. and Europe.

Figure 1 conveys another important fact: the price of gasoline in France, Italy, Belgium, the Netherlands, and the UK fluctuates in a manner virtually identical to that of the same grade of gasoline in the United States, just at different price levels. If the U.S. had any control over the price of gasoline, it must also be controlling the price in Europe, which hasn’t been suggested even by snake oil purveyors.

Since gasoline is refined from oil, the price of oil has basic sway over the price of gasoline, exclusive of taxation and subsidies. If the U.S. had any control over the price of oil, one might expect to see a correlation between domestic oil production and the price of gasoline in the U.S., a correlation in which the price of gasoline decreased as production increased. Figure 2 shows that there is no such correlation.5 The amount of oil produced by the U.S. is too small to affect global oil prices, and therefore the price of gasoline, and too small to allow us to independently meet our desires.

Figure 2. U.S. oil production vs. gasoline price.
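The “no correlation” claim is easy to test for yourself. Here is a minimal sketch of how one would compute the correlation between monthly production and pump price; the numbers below are made-up placeholders, not the actual EIA series:

# Pearson correlation between U.S. oil production and gasoline price.
# Replace the placeholder lists with real monthly EIA data.
import statistics

production = [5.5, 5.6, 5.4, 5.7, 5.8, 5.6]       # million bopd (placeholder)
gas_price = [3.60, 3.85, 3.40, 3.95, 3.70, 3.55]  # $/gallon (placeholder)

r = statistics.correlation(production, gas_price)  # Pearson r; Python 3.10+
print(f"Pearson correlation: {r:.2f}")
# If domestic production controlled pump prices, r would be strongly
# negative; the actual data behind Figure 2 show no such relationship.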

Sources and Notes

1Eugene Ayres and C.A. Scarlott, Energy Sources – the Wealth of the World (McGraw-Hill, 1952)

2The term “Snake oil”, folk etymology has it, originated as a corruption of Seneca Oil. Seneca Indians supposedly had been observed to use crude oil from surface seeps as a liniment. Whatever its origin, it has become a generic name for panaceas or miraculous remedies whose ingredients, unknown to the buyer, are mostly inert or ineffective. That is the sense in which I use it (Wikipedia, http://en.wikipedia.org/wiki/Snake_oil; see also Rosie Mestel, Snake Oil Salesmen Weren’t Always Considered Slimy, Los Angeles Times, July 1, 2002)

3David Jackson, GOP Faults Obama For ‘Energy Crisis,’ USA Today, April 7, 2012

4Jeff Bingaman, Oil Prices, Gas Prices and Domestic Production, U.S. Senate Committee on Energy & Natural Resources, Floor Speech, March 7, 2012. Senator Bingaman is one of the very few people in the US Congress with a broad understanding of energy resources

5David Roberts, The Only Solution to High Gas Prices – With Charts, Grist, March 13, 2012

Public Lands Development of Solar and Wind Energy—A Ruinous Policy

March 6, 2012

Spurred by concerns over dependence on foreign energy sources and looming global climate-change problems, development of renewable energy on public lands in the western U.S. began in earnest in 2005. Prior interest was limited,1 but the 2005 Energy Policy Act declared that before 2015 the Secretary of the Interior should seek to have approved solar, wind, and geothermal projects on public lands with a generation capacity of at least 10,000 megawatts, strengthening interest and prompting a shift from private to public lands. Development was boosted by qualification of projects for stimulus funding,2 Department of Energy loan guarantees, and a Section 1603 cash grant program (allowed to lapse in March 2012) administered by the Department of the Treasury. With these financial incentives, applications for projects on public lands quickly ballooned, reaching more than 170 in California alone according to the California State Director of the BLM. Many were speculative, and few, if any, were selected on the basis of environmental suitability.

Support for replacing fossil energy with renewable sources is all but universal in the environmental community; the question is not whether but how to replace oil, coal, and gas. The rush to public lands has resulted in something of a rift between grass-roots conservation groups, passionate in their defense of southwestern deserts,3 and several large national environmental organizations willing, however reluctantly, to cut deals of non-opposition to destructive public-lands developments, out of concern for impending climate disaster and to gain some advantage for endangered species impacted by particular projects.4

In the midst of the chaos, Interior Secretary Salazar initiated in 2009 a fast-track policy to expedite approval of a number of proposals already under review, while assuring protection of lands of high natural value and full public participation in project approval. Neither happened.5

All of the projects on the books for fast-tracking are utility-scale developments, mainly solar and wind projects, and long-distance transmission lines. Environmental Impact Statements for fast-tracked projects summarily dismissed an alternative of distributed solar development in already disturbed lands in and near urban areas—e.g., development of brownfields6 in urban areas, the 13 million acres of non-urban brownfields at 480,000 sites cited by the EPA, and use of rooftop photovoltaics (PV), which have huge potential. The rationale for rejecting distributed energy alternatives, fully explained in the Ivanpah Solar Energy Generating System Environmental Impact Statement (EIS),7 is based on a supposed inability to ramp up production and installation of PV to substitute for the planned capacities of the utility-scale solar thermal plants, individually and collectively. While admitting a “dramatic” reduction in the cost of PV between 2007 and 2009, the EIS asserted that the existing California program would have to be made much more aggressive, and be better funded, to compete with the utility-scale projects being proposed. The rationale is essentially a “can’t do even if we try” statement, concluding that “These considerations indicate that implementation of distributed solar technology at the scale needed (400 MW for the specific project under study) is remote and speculative, and would likely be technically and economically infeasible.”

The various reasons for rejecting distributed generation in the 2009 Ivanpah EIS are dated and no longer valid, but the conclusion is apparently still the governing Department of the Interior (DOI) policy. PV costs continue to plummet, falling 58% between 2006 and 2012, courtesy of China,8 while the price of grid electricity has risen an average of 2% per year over the same period, bringing distributed PV ever closer to parity with grid prices, exclusive of fast-track projects. The amount of installed distributed PV in California has nearly doubled over that cited in the EIS, providing more power than planned for the Ivanpah project, which has yet to produce one watt. New California programs are making distributed generation more accessible and cheaper.9 Still more promising approaches, emulating successful European programs, give strong support for changing our policies.10
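Those two trends point toward a crossover. A minimal sketch of how one might project the parity year (the starting prices are illustrative assumptions; the rates are the 58% six-year PV cost drop and the 2% annual grid increase quoted above; this is an arithmetic illustration, not a forecast):

# Project grid parity from the trends cited in the text.
pv_decline = 1 - (1 - 0.58) ** (1 / 6)  # 58% drop over 2006-2012 -> ~13.5%/yr
pv_cost = 0.25    # $/kWh, illustrative starting PV cost
grid_cost = 0.15  # $/kWh, illustrative starting grid price

years = 0
while pv_cost > grid_cost:
    pv_cost *= 1 - pv_decline
    grid_cost *= 1.02   # grid electricity rising ~2% per year
    years += 1
print(f"Parity in about {years} years")  # -> ~4 years with these assumptions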

Opportunity for rapid expansion of a robust distributed PV generation system in California is clear from the huge existing rooftop and brownfield potential, which far exceeds the capacities of all utility-scale projects, approved and under review, combined.11 The fast-track program is predicated on an urgent need for increased renewable energy, that is, on just the kind of aggressive development program that rapid expansion of distributed generation would require. What is missing is real consideration of an alternative to utility-scale generation. An unstated problem driving utility-scale power plant development on public lands is that distributed energy generation gives smaller roles to the middlemen (utility companies).12

Urgent Utility-Scale Power Plants or Urgent Distributed Generation?

The climate change imperative is very real and of critical importance. CO2 in the atmosphere takes a long time to decay, so the additions humans have made and are making are cumulative, now adding up to dangerous proportions. The current global rate of increase of greenhouse emissions is 3.1% annually. This means that whatever we do to reduce sources of CO2 emissions will not prevent rising global temperatures any time soon. Constructing low-carbon power infrastructure, whether utility-scale or distributed, involves fossil fuel emissions. This introduces a “carbon debt” that must be paid before production of carbon-free energy reaches break-even and a net reduction in emissions begins. The lag times to achieve meaningful reduction of greenhouse gases are substantial—we will not likely head off damaging warming trends in the better part of this century. If, however, this problem is not addressed aggressively and immediately, environmental degradation to which no species is immune will be widespread.13
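The carbon-debt idea reduces to simple bookkeeping. Here is a minimal sketch of the break-even calculation for a single new low-carbon plant; every number below is an illustrative assumption, not a figure from Myhrvold and Caldeira:

# Carbon-debt payback: construction emissions are "repaid" by the
# emissions avoided each year relative to the displaced fossil power.
construction_CO2_t = 300_000  # tonnes CO2 emitted during construction (illustrative)
annual_output_MWh = 400_000   # yearly generation (illustrative)
fossil_intensity = 0.5        # tonnes CO2/MWh displaced (illustrative)
plant_intensity = 0.02        # tonnes CO2/MWh of the new plant (illustrative)

avoided_per_year = annual_output_MWh * (fossil_intensity - plant_intensity)
payback_years = construction_CO2_t / avoided_per_year
print(f"Carbon debt repaid after {payback_years:.1f} years")  # -> ~1.6 years

The broader point of the paragraph above is that during a rapid, fleet-wide build-out such debts accumulate across many plants at once, which is one reason meaningful emission reductions lag construction.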

Allowing utility-scale power plant construction on undisturbed public lands on condition that other high-quality lands are given “permanent” protection is a net loss of intact ecosystems whichever way it is viewed. We should be working to protect all such lands.

Major damages to desert ecosystems caused by construction of utility-scale solar and wind power facilities and transmission lines are immediate, and they establish conditions that cause wider environmental degradation far into the future.14 Given the high potential for distributed solar generating capacity, a different federal strategy can avoid those damages. Instead of parceling out valuable public lands for destructive development, a major role for the federal government exists in solarizing government-owned facilities and seriously upgrading their energy efficiency, and, not least, in adopting policies that truly protect our most valuable resources—intact ecosystems. A vigorous federal role in reducing energy consumption will quicken the pace of climate protection.15

Endnotes

1Between 1984 and 1990, nine Solar Energy Generating Systems (SEGS) parabolic mirror thermal power plants were built in the Mojave Desert on private lands, together constituting what was then the world’s largest solar installation. A small 10 MW experimental power-tower solar plant was built in 1982 and ceased operations in 1988; the plant was converted to a molten salt process in 1996 and was decommissioned in 1999. The total installed power of the nine still-functioning SEGS plants is 354 MW, operating at ~21% capacity factor. Nighttime energy is produced by burning natural gas. Small wind projects, totaling about 254 MW, were built on public lands in the 1980s and 1990s. At least some of these have been decommissioned.

2Authorized by the American Recovery and Reinvestment Act of 2009. Efforts are currently underway in the U.S. Congress to cut or eliminate Department of Energy loan guarantees for solar projects (David Roberts, Conservatives Want To End Support for America’s Fastest Growing Industry, Grist, 18 October 2011)

3For example, Desert Survivors, Basin and Range Watch, Desert Protective Council, and Solar Done Right. These grass-roots groups have superior direct knowledge of the egregious impacts of projects under construction and are well-informed of the benefits of distributed generation in avoiding those impacts

4Some actions opposing particular projects have been taken by national environmental organizations: the Sierra Club filed a lawsuit in December 2010 against the Calico Project, which was rejected by the California Supreme Court in April 2011. Another unsuccessful lawsuit was filed in January 2011 by Advocates for the West on behalf of five groups, including The Center for Biological Diversity

5Heading off use of undisturbed desert lands for solar and wind development was supposed to be accomplished by a 6-state study by the Bureau of Land Management and the Department of Energy (Draft Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States). The favored plan designated Solar Energy Zones (SEZs), found to have “few impediments to utility-scale solar power plants,” where development would be prioritized. This plan did not rein in the numerous projects on lands of high value already on the fast track (although many are still not under construction), and BLM declared in a supplement to the DPEIS that it would continue to accept applications outside the SEZs. Full public participation in the review process was stymied by issuance of many large EIS documents with short, overlapping review periods.

6So-called brownfields—lands that are contaminated by previous industrial uses, closed landfills, or other underutilized lands in urban areas—are providing distributed solar power development opportunities. In 2005 the Government Accountability Office estimated 450,000 to 1 million brownfields in the U.S. (U.S. Government Accountability Office, Brownfields Redevelopment: Stakeholders Report That EPA’s Program Helps to Redevelop Sites, but Additional Measures Could Complement Agency Efforts, GAO-05-94, Washington, D.C., December 2, 2004). The EPA identified more than 3 million acres of brownfields in urban areas and 13 million acres of non-urban brownfields at 480,000 sites with potential for renewable energy development (Penelope McDaniel, Re-Powering America’s Land: Renewable Energy on Contaminated Land and Mining Sites, U.S. EPA OSWER Center for Program Analysis, December 10, 2008)

7Final Staff Assessment and Draft Environmental Impact Statement, Ivanpah Solar Electric Generating System (07-AFC-5), p. 3-93, 94; p. 4-2, 4-62 to 4-66

8David Roberts, Solar ‘Scandal’ Upshot: China Is Dominating Global Solar Market, For Better Or Worse, Grist, 6 April 2012

9The economics of full rooftop photovoltaic (PV) development are being greatly eased by fast-growing innovative financing that makes it easy for property owners to tap into clean distributed energy without federal subsidy. Rapidly plunging prices of PV panels are promoting distributed power generation, as well as causing some fast-track developers to apply for conversion of approved utility-scale solar thermal plants to PV. A national study of commercial and residential rooftop availability for grid-connected solar PV found ample potential to supplant a major portion of current and anticipated electricity consumption in the U.S. (J. Paidipati et al., Rooftop Photovoltaics Market Penetration Scenarios, National Renewable Energy Laboratory, Subcontract Report NREL/SR-581-42306, 2008). A review of this study presented to the DOE by the Energy Foundation (March 1, 2005) states: “Rooftop space is not a constraining factor for solar development. Residential and commercial rooftop space in the U.S. could accommodate up to 710,000 MW of solar electric power (if all rooftops were fully utilized, taking into account proper orientation of buildings, shading from trees, HVAC equipment and other solar access factors). For comparison, total electricity-generating capacity in the U.S. is about 950,000 MW”

10John Farrell, Rooftop Revolution: Changing Everything with Cost-Effective Local Solar, Institute for Local Self-Reliance, March 2012. Significant advantages of full-scale adoption of distributed generation include local concentration of the jobs and economic benefits of PV development, and avoidance of the costs—infrastructure and transmission power losses—of long-distance transmission of power generated by remote utility-scale power plants. Indeed, avoidance of transmission costs comes close to canceling the economies of scale enjoyed by utility-scale generation (John Farrell, Distributed Generation Hits Sweet Spot in Cost v. Transmission, Institute for Local Energy Self-Reliance, February 27, 2012). Not least among these advantages is avoidance of the severe damages caused by construction of transmission lines (Howard G. Wilshire, Jane E. Nielson, and Richard W. Hazlett, The American West at Risk: Science, Myths, and Politics of Land Abuse and Recovery (New York, Oxford University Press, 2008), Ch. 5)

11California’s potential for rooftop PV development alone is estimated to be 81,000 MW (J. Paidipati et al., 2008). The potential for distributed power, including brownfield and other underutilized urban lands, is much greater. Total capacity of the utility-scale solar and wind power facilities approved and planned for fast-tracking through 2012 is 14,000 MW. Only five of 37 projects are under construction, and none has produced any power to date. Note further that the stated project capacities are nameplate capacities, which represent the maximum energy production possible under the most favorable conditions (achievable for approximately 3-4 hours per day under cloudless conditions). Actual production is defined by a capacity factor, which takes into account nighttime, cloud cover, and downtime for maintenance; capacity factors range between ~20% and 30% of nameplate capacity
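To see what the nameplate-versus-capacity-factor distinction means in practice, here is a small sketch using the 14,000 MW fast-track total and the capacity-factor range quoted above:

# Average output and annual energy implied by nameplate capacity.
HOURS_PER_YEAR = 8760
nameplate_MW = 14_000  # fast-track utility-scale total cited above

for cf in (0.20, 0.30):
    avg_MW = nameplate_MW * cf
    annual_GWh = avg_MW * HOURS_PER_YEAR / 1000
    print(f"capacity factor {cf:.0%}: average {avg_MW:,.0f} MW, "
          f"about {annual_GWh:,.0f} GWh per year")
# At a 20-30% capacity factor, 14,000 MW of nameplate capacity delivers
# only 2,800-4,200 MW on average.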

12It is noteworthy that local power generation projects funded by utility companies are increasing in number after initial opposition

13The reasons for urgency in dealing with the climate problem are well explained by N. P. Myhrvold and K. Caldeira, Greenhouse Gases, Climate Change and Transition From Coal To Low-Carbon Electricity, Environmental Research Letters, 7 (2012) 014019. According to these authors, achieving substantial reductions in temperatures relative to the coal-based system will take the better part of a century, and will depend on rapid and massive deployment of some mix of conservation, wind, solar, and nuclear, and possibly carbon capture and storage. (A very readable précis of this article is David Roberts, Myhrvold Finds We Need Clean Energy Yesterday (And No Natural Gas) To Avoid Being Cooked, Grist, 28 February 2012)

14Wilshire et al., Chs. 5, 12

15Despite ceding our leadership in greenhouse gas emissions to China, the U.S. remains the leader, by far, in per capita emissions, reflecting our huge consumption of all forms of energy (Union of Concerned Scientists, Each Country’s Share of CO2 Emissions, 20 August 2010; data for 2008)

Anatomy of Japan’s Nuclear Crisis

April 12, 2011

The massive 9.0 magnitude earthquake that hit the northeast coast of Japan March 11, 2011 and the awesome tsunami it generated created a nuclear crisis for Japan and nuclear power worldwide. The release of dangerous radioactivity1 from damaged reactors at Fukushima Dai-ichi (Fukushima-I) nuclear power plant is of particular concern. In the following I try to explain what may have happened to and in the reactors, and examine some potential outcomes.

Figure 1. Fukushima Dai-ichi plant, pre-earthquake. Tokyo Electric Power Co.

Fukushima-I, one of the world’s largest nuclear power plants (installed capacity 4,696 megawatts), consists of 6 reactors (Fig. 1) located close to the ocean for convenient access to cooling water. In Fig. 1 the reactor buildings are the box-like structures; No. 4 is the closest building in the photograph, with Nos. 5-6 in order at the far end.

As our previous blog explained, the northeastern coast of Japan is a subduction zone, where the Pacific plate is being shoved under the North American plate, occasionally giving rise to major earthquakes.2 The earthquake epicenter was about 109 miles off-shore, northeast of Fukushima.3 It caused many minutes of severe earth-shaking4 in a coastal belt that extended southward to Tokyo. In the first week alone, the area also experienced three aftershocks of magnitude 7 or greater, and 49 of magnitude 6 or greater, adding to the potential for additional shaking and tsunami damage.5

The earthquake automatically shut down the three operating Fukushima-I reactors (Nos. 1-3) by inserting control rods to stop the nuclear fission chain reaction (http://www.answers.com/topic/chain-reaction). The remaining 3 reactors (Nos. 4-6) had been shut down before the earthquake for maintenance. Whether shut down or not, continuing to cool the reactors is essential, because radioactive materials in the fuel rods, produced prior to shutdown, continue to decay and produce heat. The temperature of the fuel will continue to rise unless it is cooled.6 Waste spent-fuel7 rods stored in on-site pools also require constant cooling.
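How much heat is at stake can be estimated with a standard rule of thumb, the Way-Wigner approximation for fission-product decay heat. A minimal sketch (the one-year operating period is an illustrative assumption):

# Way-Wigner approximation for decay heat after shutdown:
#   P/P0 = 0.0622 * (t**-0.2 - (t0 + t)**-0.2)
# t = seconds since shutdown, t0 = seconds of prior operation,
# P0 = full thermal power before shutdown.
def decay_heat_fraction(t_s, t0_s):
    return 0.0622 * (t_s ** -0.2 - (t0_s + t_s) ** -0.2)

t0 = 365 * 24 * 3600  # assume one year of operation (illustrative)
for label, t in [("1 minute", 60), ("1 hour", 3600),
                 ("1 day", 86_400), ("1 week", 604_800)]:
    print(f"{label} after shutdown: "
          f"{decay_heat_fraction(t, t0):.1%} of full thermal power")
# ~2.5% at 1 minute, ~1.0% at 1 hour, ~0.4% at 1 day, ~0.2% at 1 week;
# a percent of a multi-gigawatt thermal core is still tens of megawatts.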

But the shock of the earthquake disconnected the plant from the regional electrical grid, preventing it from receiving AC power. Damage from the huge tsunami following the initial shock further disrupted connections to the national power grid, causing “a station blackout”—loss of external power to the entire plant. The tsunami also flooded the plant’s backup power generators, compromising its cooling systems. Backup battery power lasted only about 8 hours, inadequate for a weeks-long loss of principal power sources. When cooling water stopped flowing, fuel rods in the shut-down reactors and in the spent fuel ponds began overheating.

Figure 2. Cut-away view of Fukushima-I reactor type. Blue color represents cooling water. Wikimedia Commons

A diagram of the Fukushima-I type of reactor and its housing (Fig. 2) shows the various components involved in this disaster: the reactor vessel contains the active fuel rods and the nuclear reactions; the primary containment is a free-standing steel container (the drywell, orange line); the secondary containment is a concrete shield wall surrounding the reactor vessel; and the spent fuel pool holds used fuel rods. The wetwell, or torus, is probably steel-walled and may be part of the primary containment; the reactor building is what you see from the outside. Heat from the reactor generates the steam that drives electricity-producing turbines, housed nearby in turbine halls.

The fuel rods consist of ceramic pellets of materials made of easily split (fissionable) atoms. Fuel rods in Fukushima-I reactors 1, 2, and 4-6 contained uranium, but rods in reactor 3 contained a mixed oxide (MOX) of uranium plus plutonium. The rods were surrounded by a sheath (or cladding) of a stable zirconium (metal) alloy. So long as cooling water flowed through the reactor, the sheath did not overheat and so contained the fuels and fission products.

When the Fukushima-I reactors lost power, cooling water stopped flowing, so water in the reactor vessels heated to boiling and turned to steam. The water levels fell, exposing at least the upper parts of the hot fuel rods. The zirconium alloy sheath then chemically reacted with the steam to form zirconium oxide and hydrogen, causing the sheath to break down. This reaction is highly exothermic, raising the fuel temperature even more rapidly. In a positive feedback loop, greater heating speeds the deterioration of the zirconium alloy sheaths,8 releasing hydrogen and radioactive fission products.
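The stoichiometry of that reaction indicates how much hydrogen the cladding can yield. A back-of-the-envelope sketch (standard atomic masses; the cladding inventory is an illustrative assumption, not a figure from this post):

# Zirconium-steam reaction: Zr + 2 H2O -> ZrO2 + 2 H2
M_ZR = 91.22  # molar mass of zirconium, g/mol
M_H2 = 2.016  # molar mass of hydrogen gas, g/mol

h2_per_kg_zr = 2 * M_H2 / M_ZR  # kg of H2 per kg of Zr fully oxidized
print(f"{h2_per_kg_zr * 1000:.0f} g of hydrogen per kg of zirconium")  # ~44 g

cladding_kg = 20_000  # illustrative zirconium-alloy inventory for one core
print(f"Oxidizing {cladding_kg:,} kg of cladding would yield "
      f"{h2_per_kg_zr * cladding_kg:,.0f} kg of H2")  # ~884 kg
# Even partial oxidation yields hundreds of kilograms of an explosive gas.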

Hydrogen and steam were building up in the reactor vessels, threatening to reach pressures that could rupture them, so the plant operators chose to vent some of these gases to the atmosphere. Hydrogen is highly explosive, so the plant was designed to carry hydrogen and steam through pipes and vent them at some distance from the plant. Instead hydrogen accumulated within the reactor buildings and eventually exploded. Hydrogen explosions caused severe damage to the unit 1 reactor building on March 12, to unit 3 on March 14, and to units 2 and 4 on March 15.9

There are two potential sources for the hydrogen: fuel rods in the reactor vessels and spent fuel rods in cooling pools. Hydrogen coming from the reactor vessels would mean either that a breach had opened in the primary containment, releasing hydrogen, steam, and radioactive fission products into the reactor buildings, or that earthquake shaking had disrupted the buildings’ internal venting systems. In contrast, the spent fuel pools, also dependent on the failed cooling systems, are outside the primary containment (Fig. 2). If their cooling water levels dropped sufficiently to expose the rods, they could emit hydrogen, steam, and radioactive fission products directly into the reactor buildings. A buildup of hydrogen in the buildings needed only a spark to detonate the flammable gas mixture.

Various experts have suggested ways that the primary containment might have been breached, including potential weaknesses in piping, decay of organic seals, and disturbance of the steel containment vessel’s seal (Fig. 2).10 Other suggestions include the possibility of fractures in, or melting through, containment structures. The processes of sheath deterioration, hydrogen production, and release of radioactive fission products from fuel rods in the reactor vessels can eventually let fuel pellets spill into the bottom of the reactor vessel without melting.11 The sheath reactions are exothermic and may produce sufficient heat to melt the sheath. Whether melted or disaggregated, radioactive fuel that reaches the bottom of a reactor vessel is no longer subject to the fission-limiting effects of the control rods and is in danger of restarting heat-producing fission reactions.

Locating spent fuel pools inside the reactor building is a particular weakness of the Fukushima-I reactor design. The internal pools are vulnerable to earthquake disruption, with the potential for spilling cooling water and exposing the stored used-fuel rods. Once exposed, the used fuel rods would generate steam which, if hot enough, would interact with the zirconium sheath, releasing hydrogen. This process might have contributed all or some of the hydrogen that exploded in the reactor buildings. Even if the pools were not principal contributors, the explosions so disrupted the reactor buildings that radioactive fission products from the spent fuel pools are now being emitted directly into the atmosphere (Figs. 3a and 3b).12

Figure 3a. Satellite photo of reactor buildings 1-4.   Green circles identify steam plumes from reactor buildings. DigitalGlobe

Use of mixed uranium/plutonium (MOX) fuel in Reactor 3 poses a special problem with its spent fuel storage because of the high toxicity of plutonium. Small amounts of plutonium have apparently been released and contaminate nearby soil.13

Discovery of high levels of radioactivity in areas as far as 36 miles from the plant site has caused officials to enlarge evacuation zones, increasing public concern.14 The reactors remain unstable, and cooling systems have not been restored. The initial desperate efforts to cool the reactors and spent fuel pools by dumping seawater on the plants, in air drops and from fire engine pumps, have been replaced by somewhat improved cooling with imported fresh water. The large amounts of salt deposited as the seawater turned to steam remain a serious problem, clogging the cooling systems of some reactors.

This poorly controlled flooding of the reactors has resulted in radioactive waters running off to the ocean, with daily reports of increasing contamination of seawater, and seepage into groundwater.15

Figure 3b. Oblique aerial view of reactor buildings 1-4, right to left. Steam plumes are from buildings 2 and 3, as in Fig. 3a. The severe damages to reactor buildings 3 and 4 allow direct release of radioactivity from spent fuel ponds to the atmosphere

These unresolved problems and the continuing releases of radioactivity to the environment led officials on April 12 to boost the crisis rating from 5 to 7, the highest rating on an international scale of nuclear accidents.

To stabilize the reactor sites now will require removal (and disposal) of huge amounts of contaminated water. Previous disposal plans never included the construction of decontamination plants and storage facilities, so current highly contaminated waters are being dumped in the ocean to make room for even more highly contaminated waters in the limited storage now available.16

In addition, the continued reports of short-lived iodine-131 contamination in and beyond Fukushima-I carry worrisome implications. The spent fuel in pools at the time of the earthquake would not contain much iodine-131 (half-life 8 days), so its appearance suggests it is coming either from a breached primary containment or from fission reactions somewhere else in the plants, which are generating iodine-131 (and other radionuclides). If derived from a breached containment, the amounts of iodine-131 should be small and should decline significantly from the date of reactor shutdown.
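The decay arithmetic behind that inference is simple exponential decay; the same formula also underlies the centuries-long hazard windows discussed in this post. A minimal sketch, using the half-lives quoted here:

# Radioactive decay: A(t) = A0 * 2**(-t / half_life)
def remaining_fraction(t, half_life):
    return 2 ** (-t / half_life)

# Iodine-131 (half-life 8 days): a month after shutdown, only a few
# percent of the original activity should remain.
print(f"I-131 after 30 days: {remaining_fraction(30, 8):.1%}")  # ~7.4%

# Cesium-137 (half-life ~30 years): the "several hundred years" hazard
# window corresponds to roughly 10 half-lives.
print(f"Cs-137 after 300 years: {remaining_fraction(300, 30):.2%}")  # ~0.10%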

Another possibility is that fission reactions are taking place in accumulations of fuel pellets that fell to the floors of reactor vessels or of spent fuel pools after catastrophic deterioration of fuel rod sheaths, or in pools of melted fuel rods.17 In the worst cases, melts could burn through the reactor vessels into a water-filled torus (see Fig. 2) and either be cooled by the water (good) or cause violent steam explosions (bad).

If melts from fuel rods were to enter a dry torus, they could melt through the bases of the reactor buildings and migrate to groundwater beneath the reactors. This scenario could result in cooling of the melts by groundwater or violent steam explosions, resembling the phreatic volcanic eruptions that occur when hot rising magma (natural rock melts) encounters groundwater. The violent explosion scenarios would worsen the Fukushima disaster by orders of magnitude.18

We are all hoping for a less extreme outcome: successful stabilization of the reactors that re-establishes controlled cooling to prevent overheating and further fuel deterioration in the reactors and spent fuel pools. Using seawater for emergency cooling of four of the six reactors has rendered them useless, so “stabilization” is merely a disaster-prevention step. It cannot be viewed as a solution to the eventual requirement to disassemble the damaged plants and their contaminated environs, and to dispose of them in a manner that protects the environment and future populations for at least several hundred years. That time frame allows for the decay of cesium-137 and strontium-90 to a minimal hazard level, and assumes no significant amount of plutonium-239 in the mix.

The eventual cleanup also will include removal and safe disposal of a very large amount of soil and rock in the unsaturated zone above the water table, which likely is heavily contaminated. Estimates of 30 years and $12 billion to scrap the damaged plants, based on very limited experience in Japan, are likely conservative.19

Acknowledgements


I have profited greatly from technical advice from Vernon Brechin, nuclear watchdog par excellence, Jane Nielson, geologist, and Ernest Goitein, former nuclear engineer.

Endnotes

1Limited information from news reports mostly identifies iodine-131 (half-life 8 days) and cesium-137 (half-life about 30 years). Many other short-lived radioisotopes likely are being produced but decay quickly; the short-lived isotopes remain problems only so long as they are being actively released. Strontium-90 (half-life about 29 years) and very small quantities (so far) of plutonium-239 (half-life 24,100 years) also are reported. Even though strontium-90 and cesium-137 together dominate the reactors’ long-lived fission products, strontium-90 is generally not being reported from Fukushima-I. One rule of thumb estimates that longer-lived radioisotopes remain hazardous for 5 times their half-lives; others use a factor of 10. Thus cesium-137 and strontium-90, the most abundant longer-lived isotopes being released, will remain serious problems for 150 to 300 years. Plutonium-239 and its decay products will be with us beyond the foreseeable future (H.G. Wilshire, J.E. Nielson, and R.W. Hazlett, The American West at Risk: Science, Myths, and Politics of Land Abuse and Recovery (New York, Oxford University Press, 2008), Chapters 7, 10)

2Jane Nielson, Nature Bats Last, blog https://theamericanwestatrisk.wordpress.com/

3Details about location of the Sendai earthquake epicenter: U.S. Geological Survey, Magnitude 9.0 – Near The East Coast Of Honshu, Japan, 2011 March 11 05:46:23 UTC. http://earthquake.usgs.gov/earthquakes/recenteqsww/Quakes/usc0001xgp.php)

4Unconfirmed reports give as much as 5 minutes of severe earth shaking (David Biello, Anatomy of a Nuclear Crisis, Yale Environment 360, 21 March 2011), but a filmed record of liquefaction during the Sendai earthquake in a landfill-based Tokyo park began after liquefaction started and lasted for 3 minutes, 8 seconds. The time elapsed before and after filming is not known

5262 aftershocks of magnitude 5 or greater occurred within the first week, 49 of magnitude 6 or greater, and 3 of magnitude 7 or greater (National Aeronautics and Space Administration, Earth Observatory, http://www.nasa.gov/topics/earth/features/japanquake/quake-intensity.html). A 7.1 magnitude aftershock that cut power to northern Japan occurred 07 April 2011, a reminder that the story as yet has no end.

6Euan Mearns, Fukushima Dai-ichi Status and Slow Burning Issues, The Oil Drum, 25 March 2011, http://www.theoildrum.com/node/7706#more; Biello, Anatomy of a Nuclear Crisis

7Unfortunately, the industry term ‘spent fuel’ is misleading. The fuel starts out having quite low levels of radioactivity. The longer the fuel spends in an operating reactor, the more highly radioactive fission products build up in it. Fuel that has been in the reactor for two years may be twice as radioactive as fuel that’s only been in the reactor for one year. So as the fuel becomes increasingly “spent,” its radioactivity increases. Once removed from the reactor, the radioactivity decreases rapidly along an exponential curve, so that its level may be considerably lower a year later, when the next load of partly irradiated fuel is removed from the reactor. A month after removal of the most recent fuel load, its radioactivity may still be ten times greater than that of fuel that has been aging in the spent fuel pool for a full year (Vernon Brechin, written communication, April 2011)


8Arjun Makhijani, Post-Tsunami Situation at the Fukushima Daiichi Nuclear Power Plant in Japan: Facts, Analysis, and Some Potential Outcomes, Institute for Energy and Environmental Research, 14 March 2011; Euan Mearns, Fukushima Dai-ichi Status and Potential Outcomes, The Oil Drum, 17 March 2011, http://www.theoildrum.com/node/7675#more

9Jenna Fisher, The Christian Science Monitor, TEPCO To Decommission Fukushima Reactors: Japan Nuclear Timeline, 30 March 2011. http://www.csmonitor.com/World/Asia-Pacific/2011/0315/TEPCO-to-decommission-Fukushima-reactors-Japan-nuclear-timeline; Fukushima 1 Nuclear Accidents, Wikipedia. http://en.wikipedia.org/wiki/Fukushima_I_nuclear_accidents


10Dave Lochbaum, Possible Cause of Reactor Building Explosions, Union of Concerned Scientists, 18 March 2011; Euan Mearns, Fukushima Dai-ichi Status and Prognosis, The Oil Drum, 31 March 2011, http://www.theoildrum.com/node/7722#more

11A description of the fuel rods disintegration in the reactor core as “catastrophic disintegration of the cladding structural integrity” rather than melting is given in Euan Mearns, Fukushima Dai-ichi Status and Potential Outcomes


12In testimony before Congress, Gregory B. Jaczko, Chairman, U.S. Nuclear Regulatory Commission, stated the Commission’s belief that there had been a hydrogen explosion in Reactor 4 [March 15] due to uncovering of fuel rods in the spent fuel pool. This destroyed the secondary containment. In the Commission’s opinion, the spent fuel pool was dry (http://www.nrc.gov/about-nrc/organization/commission/comm-gregory-jaczko/0317nrc-transcript-jaczko.pdf). Subsequently, TEPCO claimed that the spent fuel pools have been filled with water, but high radiation levels prevent access for verification. The vulnerability of spent fuel storage facilities is well known: Makhijani, Post-Tsunami Situation at the Fukushima Daiichi Nuclear Power Plant; National Research Council, Safety and Security of Commercial Spent Fuel Storage: Public Report, National Academies Press, 2006; Keith Bradsher and Hiroko Tabuchi, Danger of Spent Fuel Outweighs Reactor Threat, New York Times, 17 March 2011; Robert Alvarez, Safeguarding Spent Fuel Pools in the United States, Institute for Policy Studies, 21 March 2011

13Justin McCurry and Suzanne Goldenberg, Fukushima Soil Contains Plutonium Traces, According to Japanese Officials, The Guardian, 29 March 2011. The reported distinction between plutonium from atmospheric weapons testing and that originating in the MOX fuels of the power plant is apparently based on ratios of Pu-238/Pu-239, low in bomb tests, higher in MOX fuels (Vernon Brechin, written communication, 30 March 2011)

14Aerial surveys of radiation around Fukushima-I revealed hot spots as far as 36 miles from the plant with radiation levels that exceed international standards for immediate evacuation (Jim Smith, A Long Shadow Over Fukushima, Nature, 472 (7), 5 April 2011). Very high levels of cesium-137 will require evacuation for a very long period of time.

15Contamination of groundwater about 50 feet below the Fukushima-I reactors was reported on March 31, with values for iodine-131 at 10,000 times the legal limit. Reports on contaminant concentrations from TEPCO, the facility owner, have been severely criticized; the company claims the iodine-131 value was checked and is correct, but is not sure of the values for other contaminants. Since the plant is so close to the ocean, groundwater in the surficial unconfined aquifer probably flows directly to the ocean. It is also likely that contaminants in the unsaturated zone above the water table will migrate to the water table, and then to the ocean, over time. The rates of migration are not known (see Wilshire, H.G., Nielson, J.E., and Hazlett, R.W., The American West at Risk: Science, Myths, and Politics of Land Abuse (New York, Oxford University Press, 2008), Chapters 7, 13).

16World Nuclear News, Tepco’s Plans for Water Issues, 01 April 2011; Mari Yamaguchi and Yuri Kageyama, Search for Radiation Leak Turns Desperate in Japan, Associated Press, 04 April 2011

17Accidental restarting of nuclear fission chain reactions long after reactor shutdown may be causing formation of very short-lived chlorine-38 (half-life 37 minutes) by neutron absorption in stable chlorine-37 from seawater pumped into the reactors (F. Dalnoki-Veress, with an introduction by Arjun Makhijani, What Caused the High Chlorine-38 Radioactivity in the Fukushima Daiichi Reactor #1?, Asia-Pacific Journal, 30 March 2011, http://www.ieer.org/)
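
The 37-minute half-life is what makes chlorine-38 a tell-tale (a back-of-envelope check using only the half-life quoted above): six hours after any neutron exposure,

\[ \frac{A(6\ \mathrm{h})}{A_0} = \left(\frac{1}{2}\right)^{360/37} \approx \left(\frac{1}{2}\right)^{9.7} \approx 0.001, \]

so essentially none survives overnight, and detecting it weeks after shutdown implies a recent neutron source, i.e., renewed fission.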

18Justin Elliott, Japan’s Nuclear Danger Explained, Salon, 18 March 2011

19Shigeru Sato, Yuji Okada and Tsuyoshi Inajima, Tepco’s Damaged Reactors May Take 30 Years, $12 Billion to Scrap, Financial News, 29 March 2011, http://hotstocksforyou.com/2011/03/tepcos-damaged-reactors-may-take-30-years-12-billion-to-scrap/; see also, Yamaguchi and Kageyama, Search for Radiation Leak, 04 April 2011

Nature Bats Last …

March 16, 2011

Here’s an updated list of the world’s largest earthquakes (there are 16 because I added Sendai to the USGS top-15 list). The size is given by the moment magnitude (MMagnitude), which measures the energy released by the main shock.

Many of us elders can recall the no. 2 quake in 1964, which struck Anchorage and wiped out the major port of Valdez. It was the first really big quake to hit a part of the highly developed world. The magnitude originally given that quake was 8.9; the magnitude scales have been extensively revised since then. And note that the Sendai quake was revised to MMagnitude 9.0, now tied for 4th place with the 1952 Kamchatka quake.
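
For scale, a standard seismological relation (not specific to these events) ties radiated energy E, in joules, to magnitude:

\[ \log_{10} E \approx 1.5\,M + 4.8, \]

so each whole magnitude unit is roughly a 32-fold jump in energy, and the half-unit gap between the 1960 Chile quake (9.5) and Sendai (9.0) corresponds to about \(10^{1.5 \times 0.5} \approx 5.6\) times the energy.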

Rank Date Location MMagnitude
1. 1960 Chile 9.5
2. 1964 Prince William Sound, AK 9.2
3. 2004 W of N Sumatra Undersea 9.1
4. 1952 Kamchatka 9.0
4. 2011 Sendai, Japan 9.0
5. 2010 Maule, Chile Offshore 8.8
6. 1906 Coast of Ecuador Offshore 8.8
7. 1965 Rat Islands, AK 8.7
8. 2005 N Sumatra, Indonesia 8.6
9. 1950 Assam / Tibet 8.6
10. 1957 Andreanof Islands, AK 8.6
11. 2007 S Sumatra, Indonesia 8.5
12. 1938 Banda Sea, Indonesia 8.5
13. 1923 Kamchatka 8.5
14. 1922 Chile-Argentina Border 8.5
15. 1963 Kurile Islands 8.5

If you get out an atlas and examine the sites of these quakes, you’ll see that all but the Assam/Tibet event are Pacific Rim locations.1 The Rat and Andreanof Islands are part of the Aleutian Islands.

Figure 1. Map of World Plate Boundaries

All are located above a subduction zone. The major ones are named on Figure 1, and are marked with jagged teeth on the map, top of Figure 2.2 At subduction zones, ocean crustal rocks (oceanic plate) are being shoved beneath an overriding mass of continental crustal rocks (continental plate), or beneath oceanic crustal rocks of a different tectonic plate (also oceanic plate), as shown in Figure 2.

Continental rocks also can be forced underneath a continental plate — for example, India — formerly attached to Antarctica — currently is at the end of a continental mass being forced against (and locally under) the huge continental mass of Eurasia.

The cramming of a mass of rocks under and against another mass of rocks is the most dramatic geologic example of the irresistible force meeting an immovable object. The 1950s song on that theme said “somethin’s gotta give” — and when that “somethin’” gives, we experience an earthquake. The bigger the opposing forces, the bigger the potential earthquake.
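
Seismologists quantify “the bigger the opposing forces, the bigger the potential earthquake” with the seismic moment (a standard definition, not specific to this discussion):

\[ M_0 = \mu A D, \]

where \(\mu\) is the rigidity of the rocks, \(A\) the fault area that ruptures, and \(D\) the average slip. The moment magnitudes in the table above follow from \(M_0\) (in newton-meters) as \(M_w = \tfrac{2}{3}\log_{10} M_0 - 6.07\); subduction zones produce the giants because they offer by far the largest rupture areas.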

Figure 2. Plate Tectonics Process

Above the world’s subduction zones that involve at least one colliding ocean plate, lines of explosive volcanoes either rise directly from the ocean (Aleutians) or erupt through a nearby continent (Andes, Cascades) or island chain (Japan). That volcanic activity is caused when the subducting plate heats up enough to melt, as shown in the lower part of Figure 2.

Not all volcanoes are formed by subduction, however. Hawaii’s volcanoes, for example, do not lie above a subduction zone. They generally are not as explosive and have largely different average compositions than subduction zone volcanoes.

Plates form along volcanic ridges, generally under the sea, but also in continental rifts, such as the Rift Valley of East Africa. These volcanic rift zones are another type of plate boundary.

For California residents: southern California, central California, and the lower part of northern California (south of Cape Mendocino) DO NOT lie above a subduction zone. Earthquakes in the coastal area are related to movement along the San Andreas Fault, where rocks of the Pacific tectonic plate are being pushed along (and against) the western edge of the westward-moving North American tectonic plate. This is the third type of plate boundary.

I won’t discuss any additional general details here, but many web sites provide clear and complete information. One of my favorites is the ETE site.

North of Cape Mendocino, there are several subduction zones, including the one that generates the Cascade Range volcanoes. Geologic studies of coastal Washington State have shown a record of numerous prehistoric inundations, believed due to earthquakes along that subduction zone, where the Juan de Fuca plate, a remnant of the largely subducted Farallon plate, is being forced beneath the North American tectonic plate.

So something like Sendai could happen along the coasts of northernmost California, Oregon, and Washington, and also along other subduction zones to the north and west, where Pacific plate rocks are being shoved under the western end of the North American plate (Figure 1), creating volcanoes in southern Canada and southwestern Alaska, site of the world’s second largest earthquake, in 1964.

Pacific-under-North America subduction zones also lie under the Aleutian Islands (1957, 1965), Kamchatka (1923, 1952), the Kurile Islands (1963), and, yes, northern Japan. The subduction under Japan is very complex; see more detail at this more professionally oriented website.

The double whammy of the monster Sendai earthquake and tsunami is educating us to the vulnerabilities of a highly populated and developed subduction-zone coastline. President Obama’s statement that recovery would be easier for Japan than for Haiti, due to Japan’s high level of development, missed the fact that modern development separates populations from food-production zones, that modern services depend on extensive, complex systems to collect and deliver energy and water resources, and that coordinating food and resource deliveries depends on long communication and travel lifelines.

The Sendai events also are validating those who opposed building nuclear plants along the San Andreas fault, and on the coasts of far northern California, Oregon, Washington State, and southern Canada, above the Juan de Fuca subduction zone. Alaska and the Aleutian Islands are also bad spots for siting nuclear reactors.

Of course, big offshore earthquakes generate tsunamis, and the Sendai quake’s first devastating tsunami is the main cause of the total devastation we’re now seeing on the news. ALL of coastal California is an earthquake hazard zone, and also vulnerable to Pacific subduction-zone tsunamis, as we can see from the damage to harbors at Crescent City and Santa Cruz, CA. It is even more vulnerable to tsunamis from the Juan de Fuca subduction zone, and to tsunamis that might be generated from the Aleutian subduction zone.

Both government and members of the public MUST scrutinize building proposals for extensive energy and water-delivery systems that lie in or would cross the many hazard-prone areas, with the images from northeastern Japan in mind. They illustrate like nothing else what Paul Hawken and co-authors meant when they wrote: “Nature bats last, and owns the stadium.”

1. When this website comes up, scroll down to see the map of plate tectonic boundaries.

2. On this website, scroll to the item “Super Earths Will Have Plate Tectonics.”

Bakken: Cold Facts on Shale Oil

February 21, 2011

Our blogs on the Bakken Formation of Montana and North Dakota, and other “tight” or “bound” petroleum-bearing units, have addressed the fantasy that such shale oil and gas sources will provide the United States (and in some versions, the WORLD) an unending energy bonanza. Those posts have drawn unfavorable comments that tend to contain put-downs, some couched in a rather personally antagonistic tone; this tends to happen when the commenting parties lack the data to back up their claims. The comments generally miss the mark, however.

We do NOT say that the U.S. has no extensive oil fields able to keep producing petroleum for U.S. consumption, nor do we deny that some of the known fields are currently economic. We do say that the aggregate production will not make up for the steep production declines at the majority of the oil fields that once fueled the U.S. economy and military. As a result, as long as the U.S. economy remains tied to petroleum, we will continue to rely on imports from Canada and beyond.

Industry and government data supporting our position are exhibited and discussed on many websites. We cite a few of them in what follows.

World Energy Consumption (horizontal scale) in billions of barrels, Compared to National Incomes (vertical scale) in billions of dollars

So what keeps Bakken and other shale oil sources from being our salvation? One part of the answer is that the U.S. is one of the world’s leading petroleum consumers. In our book we wrote (p 313): “By 2000 Americans represented less than 5% of the world’s population but consumed about one-third of its annual energy supplies.” To meet our demand requires finding new oil at a rate many times higher than the current rate of new U.S. petroleum discoveries. This “new finds” deficit (in terms of the estimated resource available in the newly discovered fields) has been developing since 1930, and grows larger with every passing year.
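
Simple arithmetic on those two figures shows how lopsided the consumption is:

\[ \frac{1/3}{0.05} \approx 6.7, \]

that is, by 2000 the average American was consuming energy at roughly six to seven times the world per-capita rate.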

Another part of the answer involves the difference between “shale oil” petroleum resources, like Bakken, and the bonanza “conventional” oil resources of 100 years ago. In those early days, petroleum gushed out of relatively few holes drilled into very large underground pools, driven by dissolved gases. Many fewer holes were needed to extract most of the recoverable oil from those large pools of yesteryear than are now required to extract oil from Bakken.

Some of the shale oil is in small dispersed “pools,” but much of what’s there is attached (“bound”) to the rock and must be coaxed out of the ground using varied techniques such as “fracking.” The dispersed pools and the need for injecting “fracking” fluids mean that the Bakken developers will drill many, many holes and use a lot of energy to get the petroleum out and process it into fuel. As a result, the NET ENERGY[1] for producing this oil will be far lower per drill hole than it was even 40 years ago, when U.S. petroleum production was at its highest point.
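
The net-energy arithmetic is worth spelling out (the EROEI values here are assumed for illustration, not measured Bakken figures):

\[ \mathrm{EROEI} = \frac{E_{\mathrm{returned}}}{E_{\mathrm{invested}}}, \qquad \text{net fraction delivered} = 1 - \frac{1}{\mathrm{EROEI}}. \]

A gusher-era well with an EROEI of, say, 100 delivers 99% of its gross energy to society; a tight-oil well at an assumed EROEI of 5 delivers only 80%, and every further drop eats disproportionately into the net.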

There is no currently known domestic oil bonanza that will keep supplying current U.S. petro-thirst far into the future, and certainly no supply that will support unending consumption growth. There is equally no petroleum bonanza beyond U.S. borders that can supply unending growth in worldwide consumption.

As Kurt Cobb wrote in a Resource Insights article[2]: “In the United States alone the new process could mean 2 million barrels a day by 2015 from … fields once thought too difficult to develop ….” (But note that the U.S. consumes more than 20 million barrels of petroleum a day.) And Cobb continues: “[I]f … the projections are correct, then oil flows from tight oil in the United States will represent about 2 percent of world production in 2015. And if the more pessimistic estimates of the U.S. Energy Information Administration come closer to actual U.S. tight oil production in 2015, [it] will represent about 0.5 percent of world production. Neither amount is enough to move the price of oil ….”
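
Cobb’s percentage is easy to check (assuming world liquid-fuels production of roughly 88 million barrels per day, the approximate figure for this period):

\[ \frac{2\ \mathrm{Mbbl/d}}{88\ \mathrm{Mbbl/d}} \approx 2.3\%. \]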

But Cobb also notes, “There is reason to doubt the claims … for tight oil supplies … beyond the fact that the companies making them are often publicly traded and therefore have incentive to manipulate their stock prices … The original shale gas promoters believed that natural gas would be uniformly available from the giant shale basins found in the United States. They were wrong. Only a few sweet spots have been profitable. As humans have done throughout the age of oil, tight oil developers will target the sweet spots first since they are the cheapest and easiest to exploit. Then, they’ll move on to areas that are progressively harder and thus more expensive to exploit. Over time tight oil won’t become easier to get; it’ll become harder to get just like shale gas.”

Gail Tverberg’s blog[3] has addressed the contention that advanced “new” techniques will add vast amounts to U.S. Geological Survey estimates of the Bakken resource, contrary to Bill Bergseid’s assertion in his recent comment on our Bakken blogs.

Says Gail: “… this is not really a new drilling technique … hydraulic fracturing was first used in the United States for oil and gas wells in 1947. It was first used commercially in 1949. Directional drilling, including horizontal drilling, is almost as old, but … not widely used until downhole motors and semicontinuous surveying became possible. The techniques have gradually been refined …. A major reason we are using these techniques is because much of the easy-to-extract oil has already been extracted. Horizontal drilling and hydraulic fracturing are more expensive, but can be used to get out oil that would be inaccessible otherwise. The hope is that oil prices will be high enough to make these techniques profitable.” At present, natural gas prices do not provide much profit.

Gail also agrees with Kurt Cobb about the potential for disappointing results: “There are several reasons why the hoped for [2 million barrels per day] might not be realized … [o]ne is … inadequate infrastructure [that could] prove to be a roadblock to meeting ambitious production goals … currently oil is being transported to market by rail and truck, and drilling companies have erected camps for workers. … What tends to happen when there isn’t adequate transportation for the oil is the selling price of the oil tends to be depressed, relative to other types …”

But then high oil prices “tend to send the economy into recession, so world prices may not rise as much as hoped–they may oscillate instead, rising, then putting the economy into recession and falling again …”

Too much optimism before drilling, such as now being spread on the web, also can be a trap — Gail again: “It is natural for those who are trying to get others to invest in these ventures to base their assumptions on an optimistic view of the future. If experience with shale gas in Texas is any clue, once realities start setting in, the level of drilling may decline, and overall production, after an initial run-up, may decline.”

U.S. Crude Oil Production since 1985

But here’s the bottom line — that is, the thin to very thin blue line at the bottom of the chart at left (Gail’s Figure 4). She explains: “If we look at a graph of countrywide US oil production, it has been decreasing prior to an uptick in 2009 and 2010. Bakken oil production (in ND +MT) is shown near the bottom of Figure 4. It appears as a thin blue line that was a bit thicker back in the late 1980s, became thinner for many years, and now is a bit thicker (reaching an average of about 370,000 barrels a day in 2010). Getting that line, or that line plus some other areas that are only starting up, to increase by 2 million barrels a day, to 2,370,000 per day by 2015, would be a tall order.”

At the same time, “US crude oil production has been headed downward for a long time–actually since 1970, not just since 1985 [as] shown on … Figure 4. If overall production is to … increase by 2 million barrels a day by 2015, it will be necessary to overcome these [other] declines, as well … What happens is that each year, more and more oil fields and oil wells within oil fields become non-economic. These are closed. Also, what is extracted is an oil-water mix, and the proportion of oil tends to fall over time. This means that if a given volume of oil-water mix is processed from a well, each year the well will yield less oil and more water.”[4]
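
Gail’s water-cut point is easy to model. Here is a minimal sketch (the fluid volume, starting water cut, and rate of increase are assumed for illustration, not field data):

```python
# Illustrative model of a well whose processed oil-water volume stays
# constant while the water cut creeps up year over year.
total_fluid_bbl_per_day = 1000   # assumed constant volume of oil-water mix
water_cut = 0.60                 # assumed starting water fraction
annual_rise = 0.05               # assumed yearly increase in water cut

for year in range(1, 6):
    oil_bbl_per_day = total_fluid_bbl_per_day * (1 - water_cut)
    print(f"Year {year}: water cut {water_cut:.0%}, "
          f"oil {oil_bbl_per_day:.0f} bbl/day")
    water_cut = min(water_cut + annual_rise, 1.0)
```

Even with the same volume of fluid lifted and processed each year, the oil delivered falls steadily, which is one reason wells become non-economic and are closed.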

References

[1] Net energy is also called EROI or EROEI (energy returned on energy invested).

[2] Kurt Cobb, The Week of the Game Changer in Oil, or Was It?, Resource Insights, 13 February 2011 (archived at http://resourceinsights.blogspot.com/2011/02/week-of-game-changer-in-oil-or-was-it.html)

[3] Gail Tverberg (gailtheactuary), Is “Shale Oil” the Answer to “Peak Oil”?, Our Finite World, 14 February 2011 (http://ourfiniteworld.com/author/gailtheactuary/).

[4] gailtheactuary also contributes substantial commentary on The Oil Drum website (www.theoildrum.com).

Solar Power Plants, Water, and Climate

January 22, 2011

This blog is a critique of environmental impact assessments for 17 solar power plant projects in the southwestern U.S. Thirteen of the projects are on the Department of the Interior’s fast track renewable energy developments list for public lands.1 CEQA (for projects in California) and NEPA environmental impact assessments were fast-tracked to meet the December 31, 2010 deadline for securing stimulus funding for these expensive projects.2 Data sources and annotated background information on the projects can be downloaded from our website’s Resources page as a pdf (see Endnote 2).

Whether enough water will be available for power plant projects in the arid Southwest is a subject of controversy. The southwestern U.S.’s surface waters are already over-allocated, and the varying degrees of groundwater overdraft in many basins have not curtailed approval of further groundwater allocations for solar power plants. All of the solar projects must use water for construction in the short term and for operations over the life of the plants: air-cooled photovoltaic and heat-engine technologies use the least, and solar thermal technologies use the most. Table 1 lists estimated water use in these categories, plus total use for the construction phases, which vary in duration.

Table 1. Solar Power Plant Summary of Plant Type and Projected Water Use

_________________________________________________________________________

Amargosa Farm Road. Parabolic trough, 464 MW capacity. Dry cooled, auxiliary equipment wet cooled. Operational** water use 400 acre-feet per year (afy). Construction, 39 month duration; water use, 1,950 af.

*Blythe Solar Project. Parabolic trough, 1000 MW capacity. Dry cooled, auxiliary equipment wet cooled. Operational water use 600 afy. Construction, 69 months duration; water use, 5,890 af.

*Genesis Solar Project. Parabolic trough, 250 MW capacity. Dry cooled, auxiliary equipment wet cooled (no water use estimate given). Operational water use 218 afy total. Construction, 39 months duration; water use, 2,440 af.

*Palen Solar Project. Parabolic trough, 500 MW capacity. Dry cooled, with auxiliary equipment wet cooled. Operational water use 300 afy. Construction, 39 months duration; water use, 1,500 af.

*Ridgecrest Solar Project. Parabolic trough, 250 MW capacity. Dry cooled, with auxiliary equipment wet cooled. Operational water use 150 afy. Construction, 28 months duration; water use, 1,470 af.

*Ivanpah Solar Project. Power tower, 400 MW capacity. Dry cooled, with auxiliary boiler operated during transient cloudy days or at night, water use not specified. Operational water use 100 afy. Construction, 72 months [based on assumed 6 work days/week]; water use, 2,255 af.

Rice Solar Project. Power tower, 150 MW capacity. Dry cooled. Operational water use 150 afy. Construction, 30 months duration; water use, 780 af.

*Sonoran Solar Project. Parabolic trough, 375 MW capacity. Wet cooled, operational water use 3,000 afy (assumes 25% energy production from gas co-firing); dry cooled alternative, 150 afy (same co-firing assumption). Construction, 39 months duration; no water use estimate.

Abengoa Solar Project.  Parabolic trough, 250 MW capacity. Wet cooled. Operational water use 2,160 afy. Construction, no estimates available.

Beacon Solar Project. Parabolic trough, 250 MW capacity. Wet cooled. Operational water use 1,388 afy. Construction, 5 years duration; water use, 3,765 af.

Nevada One Solar Project. Parabolic trough, 64 MW capacity. Wet cooled. Operational water use ~400 afy. Construction, no duration or water use figures available.

*Crescent Dunes Solar Project. Power tower, 110 MW capacity. Hybrid wet/dry cooled. Operational water use 600 afy. Construction, 30 months duration; water use, 725 af.

*Imperial Valley Solar Project. Heat engine, 750 MW capacity. No generation cooling. Operational water use 33 afy. Construction, 39 months duration; water use, 166 af.

*Calico Solar Project. Heat engine, 850 MW capacity. No generation cooling. Operational water use 20 afy. Construction, 52 months duration; water use, 600 af. Rights to this land have been sold; it may be used for a PV installation.

*Desert Sunlight Solar Farm. Thin film PV, 550 MW capacity. No generation cooling. Operational water use 29 to 1,460 afy. Construction, 26 months duration; water use, 1,400 af.

*Lucerne Valley Solar Project. Thin film PV, 45 MW capacity. No generation cooling. Operational water use 0.07 to 0.1 afy. Construction, 270 days duration; water use, 10 af.

*Silver State Solar, N & S.  Thin film PV, 327 MW capacity. No generation cooling. Operational water use 21 afy. Construction, 4 years duration; water use, 600 af.

__________________________________________________________________________

* Fast-track project

** Operational water use figures are given in acre-feet per year (afy) for the life of the project; construction uses are given in total acre-feet (af) estimated to be used for the period of construction only.

Estimates of operational water consumption range from 100 to 600 afy for dry cooled solar thermal projects, from 400 to 3,000 afy for wet cooled solar thermal, and from 0.07 to 33 afy for heat engine and photovoltaic arrays (although one inexplicably ranges from 29 to 1,460 afy). Water use and project size are only slightly correlated for each type of plant.
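
Those ranges can be reproduced directly from the figures in Table 1. A minimal sketch (values transcribed from the table above, with the anomalous Desert Sunlight range excluded, as noted):

```python
# Operational water use (acre-feet per year), transcribed from Table 1
# and grouped by cooling approach.
operational_afy = {
    "dry cooled solar thermal": [400, 600, 218, 300, 150, 100, 150],
    "wet or hybrid cooled solar thermal": [3000, 2160, 1388, 400, 600],
    # Desert Sunlight (29 to 1,460 afy) excluded as anomalous:
    "heat engine / photovoltaic": [33, 20, 0.07, 21],
}

for category, values in operational_afy.items():
    print(f"{category}: {min(values)} to {max(values)} afy")
```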

Water use estimates for construction vary from 10 to 3,765 af for duration periods of 9 months to 6 years. Water use estimates for comparable projects vary so widely that many must be little more than guesswork, heavily influenced by project proposers. The public is not likely to see accurate figures until the projects have been in service over a substantial period. Wet cooling strategies clearly are far more water-consuming than dry cooled designs, but substantial amounts of groundwater likely will be consumed by both over the prolonged construction periods.

Wet cooling is preferred by project developers because it costs less to install and is more efficient than dry-cooling, but its use in water-scarce arid regions is discouraged both by agency and public pressure. Potentially significant operational problems might force greater reliance on wet cooling after these expensive power plants have been built, however.

The disadvantages of dry cooling include higher capital costs (6 to 10 times the cost of wet cooling),3 higher auxiliary operating requirements (high energy use to run fans and pumps),4 fan noise, and lower plant performance, especially on hot days, when peak power is most in demand. Lower plant performance translates directly to higher electricity costs.

Model studies show about 5% lower performance for dry cooled parabolic trough plants, and under 2% for power tower plants. During hot periods, however, the performance penalties more than triple: 17.6% for parabolic trough plants and 6.3% for power tower plants. Lowered generation of electricity can add significantly to the cost of the electricity produced.4 Efficiency penalties might be even greater: a technical study of hybrid air-cooled power plants, of the type used with geothermal sources and parabolic-trough solar thermal, found 37% lower output on hot days with air cooling than with wet (evaporative) cooling.5
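
The cost implication follows directly (simple arithmetic, holding other costs fixed): if hot-day output drops 17.6%, the cost per kilowatt-hour of the remaining output during those hours rises by a factor of

\[ \frac{1}{1 - 0.176} \approx 1.21, \]

about 21%, precisely when demand peaks; the 37% hybrid-plant reduction implies a factor of \(1/(1 - 0.37) \approx 1.59\).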

A critical concern not assessed in any of the environmental documents I have reviewed is the potential impact of climate change on the operation of these solar facilities. The environmental assessments focus solely on the climate effects of greenhouse gas releases during plant construction and operation. Climate warming is already happening, as has been abundantly demonstrated in the scientific literature, and the predicted effects include extended drought in the southwestern U.S.6 Considering both the existing temperature and precipitation trends and the potential for abrupt climate change,7 it would be wise to assess the potential problems affecting the solar thermal power plants now being considered for installation. Prolonged hot periods are likely to bring pressure from plant operators to shift to wet cooling, with a risk of depleting aquifers. Operating permits already allow night-time make-up of reduced solar insolation from transient cloudiness, but it is not clear that a shift to wet cooling could compensate for full days of hot weather. If permits do not cap permissible levels of water use, there may be trouble ahead.

Endnotes

1. U.S. Department of the Interior, Bureau of Land Management, Fast-Track Renewable Energy Projects, January 6, 2011. http://www.blm.gov/wo/st/en/prog/energy/renewable_energy/fast-track_renewable.html

2. Howard Wilshire, Fast-Tracking Solar Energy in the Desert, 2010 www.theamericanwestatrisk.com, click on Resources

3. EPRI, Palo Alto, CA, and California Energy Commission, Comparison of Alternate Cooling Technologies for California Power Plants: Economic, Environmental, and Other Tradeoffs, 2002. The initial capital costs of dry cooling systems exceed the costs of wet cooling systems by 6 to 10 times, and the fan power required for cooling is 4 to 6 times higher; such penalties would substantially increase the costs of solar electricity.

4. U.S. Department of Energy, Concentrating Solar Power Commercial Application Study: Reducing Water Consumption of Concentrating Solar Power Electricity Generation, U.S. Department of Energy, Report to Congress [2008] http://www.nrel.gov/csp/pdfs/csp_water_study.pdf; U.S. Department of Energy, Estimating Freshwater Needs to Meet Future Thermoelectric Generation Requirements, 2008 Update, DOE/NETL-400/2008/1/339, 2008. http://www.netl.doe.gov/technologies/coalpower/ewr/pubs/2008_Water_Needs_Analysis-Final_10-2-2008.pdf

5. Written communication from John Rosenblum, Rosenblum Environmental Engineering,  November 30, 2010; Greg Mines, Evaluation of Hybrid Air-Cooled Flash/Binary Power Cycle, Idaho National Laboratory, October 2005

6. U.S. Global Change Research Program, Climate Change Impacts in the United States, A State of Knowledge Report from the U.S. Global Change Research Program, 2009; Richard Seager and G.A. Vecchi, Greenhouse Warming and the 21st Century Hydroclimate of Southwestern North America, Proceedings of the National Academy of Sciences, vol. 107, no. 50, 2010; Seth Shulman, Dust Bowl 2: Drought Detective Predicts Drier Future For American Southwest, Grist, 12 August 2010

7. U.S. Geological Survey, Abrupt Climate Change, Final Report, Synthesis and Assessment Product 3.4, U.S. Climate Change Science Program and the Subcommittee on Global Change Research, 2008; G.T. Narisma and others, Abrupt Changes in Rainfall During the Twentieth Century, Geophysical Research Letters, vol. 34, L06710, doi:10.1029/2006GL028628, 2007