The Quest



Table of Contents

Title Page

Copyright Page

Introduction


PART ONE - The New World of Oil

Chapter 1 - RUSSIA RETURNS

Chapter 2 - THE CASPIAN DERBY

Chapter 3 - ACROSS THE CASPIAN

Chapter 4 - “SUPERMAJORS”

Chapter 5 - THE PETRO-STATE

Chapter 6 - AGGREGATE DISRUPTION

Chapter 7 - WAR IN IRAQ

Chapter 8 - THE DEMAND SHOCK

Chapter 9 - CHINA’S RISE

Chapter 10 - CHINA IN THE FAST LANE


PART TWO - Securing the Supply

Chapter 11 - IS THE WORLD RUNNING OUT OF OIL?

Chapter 12 - UNCONVENTIONAL

Chapter 13 - THE SECURITY OF ENERGY

Chapter 14 - SHIFTING SANDS IN THE PERSIAN GULF

Chapter 15 - GAS ON WATER

Chapter 16 - THE NATURAL GAS REVOLUTION


PART THREE - The Electric Age

Chapter 17 - ALTERNATING CURRENTS

Chapter 18 - THE NUCLEAR CYCLE

Chapter 19 - BREAKING THE BARGAIN

Chapter 20 - FUEL CHOICE


PART FOUR - Climate and Carbon

Chapter 21 - GLACIAL CHANGE

Chapter 22 - THE AGE OF DISCOVERY

Chapter 23 - THE ROAD TO RIO

Chapter 24 - MAKING A MARKET

Chapter 25 - ON THE GLOBAL AGENDA

Chapter 26 - IN SEARCH OF CONSENSUS


PART FIVE - New Energies

Chapter 27 - REBIRTH OF RENEWABLES

Chapter 28 - SCIENCE EXPERIMENT

Chapter 29 - ALCHEMY OF SHINING LIGHT

Chapter 30 - MYSTERY OF WIND

Chapter 31 - THE FIFTH FUEL—EFFICIENCY

Chapter 32 - CLOSING THE CONSERVATION GAP


PART SIX - Road to the Future

Chapter 33 - CARBOHYDRATE MAN

Chapter 34 - INTERNAL FIRE

Chapter 35 - THE GREAT ELECTRIC CAR EXPERIMENT


CONCLUSION: “A GREAT REVOLUTION”

Acknowledgements

CREDITS

NOTES

BIBLIOGRAPHY

INDEX

ALSO BY DANIEL YERGIN


ALSO BY DANIEL YERGIN

The Prize


Shattered Peace


Coauthored by Daniel Yergin


The Commanding Heights


Russia 2010


Global Insecurity


Energy Future


THE PENGUIN PRESS

Published by the Penguin Group

Penguin Group (USA) Inc., 375 Hudson Street, New York, New York 10014, U.S.A. • Penguin Group (Canada), 90 Eglinton Avenue East, Suite 700, Toronto, Ontario, Canada M4P 2Y3 (a division of Pearson Penguin Canada Inc.) • Penguin Books Ltd, 80 Strand, London WC2R 0RL, England • Penguin Ireland, 25 St. Stephen’s Green, Dublin 2, Ireland (a division of Penguin Books Ltd) • Penguin Books Australia Ltd, 250 Camberwell Road, Camberwell, Victoria 3124, Australia (a division of Pearson Australia Group Pty Ltd) • Penguin Books India Pvt Ltd, 11 Community Centre, Panchsheel Park, New Delhi–110 017, India • Penguin Group (NZ), 67 Apollo Drive, Rosedale, Auckland 0632, New Zealand (a division of Pearson New Zealand Ltd) • Penguin Books (South Africa) (Pty) Ltd, 24 Sturdee Avenue, Rosebank, Johannesburg 2196, South Africa


Penguin Books Ltd, Registered Offices:

80 Strand, London WC2R 0RL, England


First published in 2011 by The Penguin Press,


a member of Penguin Group (USA) Inc.



Copyright © Daniel Yergin, 2011


All rights reserved


Photograph credits appear on pages 722–23.


LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA


Yergin, Daniel.

The quest : energy, security, and the remaking of the modern world / by Daniel Yergin.

p. cm.

Includes bibliographical references and index.

ISBN: 978-1-101-56370-0

1. Power resources—Political aspects. 2. Money—Political aspects. 3. Globalization. I. Title.

HD9502.A2Y47 2011

333.79—dc22

2011013100


MAPS BY VIRGINIA MASON

GRAPHICS BY SEAN MCNAUGHTON






Without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright owner and the above publisher of this book.


The scanning, uploading, and distribution of this book via the Internet or via any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions and do not participate in or encourage electronic piracy of copyrightable materials. Your support of the author’s rights is appreciated.


While the author has made every effort to provide accurate telephone numbers and Internet addresses at the time of publication, neither the publisher nor the author assumes any responsibility for errors, or for changes that occur after publication. Further, the publisher does not have any control over and does not assume any responsibility for author or third-party Web sites or their content.

http://us.penguingroup.com


INTRODUCTION

They happened at the same time, halfway around the globe from each other. They both shook the world.



On March 11, 2011, at 2:46 in the afternoon Japan time, 17 miles below the seabed, the pressure between two vast tectonic plates created a massive, violent upward force that set off one of the most powerful earthquakes ever recorded. In addition to causing widespread damage to buildings and infrastructure in the region north of Tokyo, the quake knocked out the power supply, including that to the Fukushima Daiichi nuclear complex. Fifty-five minutes later, a huge tsunami unleashed by the quake swept over the coast, drowning thousands and thousands of people. At the Fukushima Daiichi complex, located at the very edge of the ocean, the massive tsunami surged above the seawall and flooded the power station, including its backup diesel generators, depriving the hot nuclear reactors of the cooling water required to keep them under control. In the days that followed, explosions damaged the plants, radiation was released, and severe meltdowns of nuclear fuel rods occurred.

The result was the worst nuclear accident since the explosion at the Chernobyl nuclear plant in Soviet Ukraine a quarter century earlier. The Fukushima accident, compounded by damage to other electric generating plants in the area, led to power shortages, forcing rolling blackouts that demonstrated the vulnerability of modern society to a sudden shortage of energy supply. The effects were not limited to one country. The loss of industrial production in Japan disrupted global supply chains, halting automobile and electronics production in North America and Europe, and hitting the global economy. The accident at Fukushima threw a great question mark over the “global nuclear renaissance,” which many had thought essential to help meet the power needs of a growing world economy.

On the other side of the world, a very different kind of crisis was unfolding. It had been triggered a few months earlier not by the clash of tectonic plates, but by a young fruit seller in the Tunisian town of Sidi Bouzid. Frustrated by constant harassment by the town’s police and by the indifference of local officials, he doused himself with paint thinner and set himself aflame in protest in front of the city hall. His story and the ensuing demonstrations, transmitted by mobile phone, Internet, and satellite, whipped across Tunisia, the rest of North Africa, and the Middle East. In the face of swelling protests, the regime in Tunisia collapsed. And then, as protesters filled Tahrir Square in Cairo, so did the government in Egypt. Demonstrations against authoritarian governments spread across the entire region. In Libya, the protests turned into a civil war that drew in NATO.

The global oil price shot up in response not only to the loss of petroleum exports from Libya, but also to the disruption of the geostrategic balance that had underpinned the Middle East for decades. Anxiety mounted as to what the unrest might mean for the Persian Gulf, which supplies 40 percent of the oil sold into world markets, and for its customers around the globe.

These two very different but concurrent sets of events, oceans away from each other, delivered shocks to global markets. The renewed uncertainty and insecurity about energy, and the anticipation of deeper crisis, underscored a fundamental reality—how important energy is to the world.

This book tries to explain that importance. It is the story of the quest for the energy on which we so completely rely, for the position and rewards that accrue from energy, and for the security it affords. It is about how the modern energy world developed, about how concerns about climate and carbon are changing it, and about how different the energy world may be tomorrow.

Three fundamental questions shape this narrative: Will enough energy be available to meet the needs of a growing world and at what cost and with what technologies? How can the security of the energy system on which the world depends be protected? What will be the impact of environmental concerns, including climate change, on the future of energy—and how will energy development affect the environment?

As to the first, the fear of running out of energy has troubled people for a long time. One of the nineteenth century’s greatest scientists, William Thomson—better known as Lord Kelvin—warned in 1881, in his presidential address to the British Association for the Advancement of Science in Edinburgh, that Britain’s energy base was precarious and that disaster was impending. His fear was not about oil, but about coal, which had generated the “Age of Steam,” fueled Britain’s industrial preeminence, and made the words of “Rule, Britannia!” a reality in world power. Kelvin somberly warned that Britain’s days of greatness might be numbered because “the subterranean coal-stores of the world” were “becoming exhausted surely, and not slowly” and the day was drawing close when “so little of it is left.” The only hope he could offer was “that windmills or wind-motors in some form will again be in the ascendant.”

But in the years after Kelvin’s warning, the resource base of all hydrocarbons—coal, oil, and natural gas—continued to expand enormously.

Three quarters of a century after Kelvin’s address, the end of the “Fossil Fuel Age” was predicted by another formidable figure: Admiral Hyman Rickover, the “father of the nuclear navy” and, as much as any single person, the father of the nuclear power industry, once described by President Jimmy Carter as “the greatest engineer of all time.”

“Today, coal, oil and natural gas supply 93 percent of the world’s energy,” Rickover declared in 1957. That was, he said, a “startling reversal” from just a century earlier, in 1850, when “fossil fuels supplied 5 percent of the world’s energy, and men and animals 94 percent.” This harnessing of energy was what made possible a standard of living far higher than that of the mid-nineteenth century. But Rickover’s central point was that fossil fuels would run out sometime after 2000—and most likely before 2050.

“Can we feel certain that when economically recoverable fossil fuels are gone science will have learned how to maintain a high standard of living on renewable energy sources?” the admiral asked. He was doubtful. He did not think that renewables—wind, sunlight, biomass—could ever get much above 15 percent of total energy. Nuclear power, though still experimental, might well replace coal in power plants. But, said Rickover, atomic-powered cars just were not in the cards. “It will be wise to face up to the possibility of the ultimate disappearance of automobiles,” he said. He put all of this in a strategic context: “High-energy consumption has always been a prerequisite of political power,” and he feared the perils that would come were that to change.

The resource endowment of the earth has turned out to be nowhere near as bleak as Rickover thought. Oil production today is five times greater than it was in 1957. Moreover, renewables have established a much more secure foundation than Rickover imagined. Yet we still live in what Rickover called the Fossil Fuel Age. Today, oil, coal, and natural gas provide over 80 percent of the world’s energy. Supplies may be much more abundant today than was ever imagined, but the challenge of assuring energy’s availability for the future is so much greater today than in Kelvin’s time, or even Rickover’s, owing to the simple arithmetic of scale. Will resources be adequate not only to fuel today’s $65 trillion global economy but also to fuel what might be a $130 trillion economy in just two decades? To put it simply, will the oil resources be sufficient to go from a world of almost a billion automobiles to a world of more than two billion cars?
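A rough check of that arithmetic (the growth rate here is an illustrative assumption, not the author’s figure): doubling in two decades corresponds to global growth of about 3.5 percent a year, since

\[
\$65\ \text{trillion} \times (1.035)^{20} \approx \$65\ \text{trillion} \times 2.0 \approx \$130\ \text{trillion}.
\]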

The very fact that this question is asked reflects something new—the “globalization of energy demand.” Billions of people are becoming part of the global economy; and as they do so, their incomes and their use of energy go up. Currently, oil use in the developed world averages 14 barrels per person per year. In the developing world, it is only 3 barrels per person. How will the world cope when billions of people go from 3 barrels to 6 barrels per person?
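To give a sense of the scale behind that question (the population figure is an assumption for illustration; roughly five billion people lived in the developing world when this was written), the increment alone would be

\[
5 \times 10^{9}\ \text{people} \times (6 - 3)\ \frac{\text{barrels}}{\text{person-year}} = 1.5 \times 10^{10}\ \frac{\text{barrels}}{\text{year}} \approx \frac{1.5 \times 10^{10}}{365} \approx 41\ \text{million barrels per day},
\]

on the order of half of the world’s entire oil production at the time this book was published.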

The second theme of this book, security, arises from risk and vulnerability: the threat of interruption and crisis. Since World War II, many crises have disrupted energy supplies, usually unexpectedly.

Where will the next crisis come from? It could arise from what has been called the “bad new world” of cyber vulnerability. The complex systems that produce and deliver energy are among the most critical of all the “critical infrastructures,” and that makes their digital controls tempting targets for cyberattacks. Shutting down the electric power system could do more than cause blackouts; it could immobilize society. When it comes to the security of energy supplies, the analysis always seems to return to the Persian Gulf region, which holds 60 percent of conventional oil reserves. Iran’s nuclear program could upset the balance of power in that region. Terrorist networks have targeted its vast energy infrastructure to try to bring down existing governments and to drive up the price of oil and, in so doing, “bankrupt” the West. The region also confronts the turmoil arising from the dissatisfaction of a huge bulge of young people for whom education and employment opportunities are lacking and whose expectations are far from being met.

There are many other kinds of risks and dangers. It is an imperative to anticipate them, prepare for them, and ensure the resilience to respond—so as not to have to conclude after the fact, in the stark words of a Japanese government report on the Fukushima Daiichi disaster, that “consistent preparation” was “insufficient.”

In terms of the environment, the third theme, enormous strides have been made to address traditional pollution concerns. But when people in earlier decades focused on pollutants coming out of the tailpipe, they were thinking about smog, not about CO2 and global warming. Environmental consciousness has expanded massively since the first Earth Day in 1970. In this century climate change has become a dominant political issue and central to the future of energy. This shift has turned greenhouse gases into a potent rationale for rolling back the supremacy of hydrocarbons and for expanding the role of renewables.

Yet most forecasts show that the much larger energy needs of two decades from now are currently on track to be met largely as they are today—75 to 80 percent from oil, gas, and coal, although used more efficiently. Will that pattern persist? Or will the world shift toward what Lord Kelvin thought was needed and Admiral Rickover doubted was possible—a new age of energy, a radically different mix that relies much more heavily on renewables and alternatives—wind, solar, and biofuels, among others—perhaps even on sources that we cannot identify today? What kind of energy mix will meet the world’s energy needs without crisis and confrontation?

Whatever the answers, innovation will be critical. Perhaps not surprisingly, the emphasis on innovation across the energy spectrum is greater than ever before. That increases the likelihood of seeing the benefits from what General Georges Doriot, the founder of modern venture-capital investing, called “applied science” being successfully applied to energy.

The lead times may be long owing to the scale and complexity of the vast system that supplies energy, but if this is to be an era of energy transition, then the $6 trillion global energy market is “contestable.” That is, it is up for grabs among the incumbents—the oil, gas, and coal companies that supply the bulk of today’s energy—and the new entrants—such as wind, solar, and biofuels—that want to capture a growing share of those dollars. A transition on this scale, if it does happen, has great significance for emissions, for the wider economy, for geopolitics, and for the position of nations.



The first section of this book describes the new, more complex world of oil that has emerged in the decades since the Gulf War. The essential drama of oil—the struggle for access, the battle for control, the geopolitics that shape it—will continue to be a decisive factor for our changing world. China, which two decades ago hardly figured in the global energy equation, is central to this new world. This is true not only because it is the manufacturing “workshop of the world,” but also because of the “build-out of China”—the massive national construction project that is accommodating the 20 million people who are moving each year from rural areas into cities.

Part II centers on energy security and the future of supply. Will the world “run out” of oil? If not, where will it come from? The new supply will include natural gas, with its growing importance for the global economy. The rapid expansion of liquefied natural gas is creating another global energy market. Shale gas, the biggest energy innovation since the start of the new century, has turned what was an imminent shortage in the United States into what may be a hundred-year supply and may do the same elsewhere in the world. It is dramatically changing the competitive positions for everything from nuclear energy to wind power. It has also stoked, in a remarkably short time, a new environmental debate.

Part III is about the age of electricity. Ever since Thomas Edison fired up his power station in Lower Manhattan, the world has become progressively more electrified. In the developed world, electricity is taken for granted and yet the world cannot operate without it. For developing countries, shortages of electricity take their toll on people’s lives and on economic growth.

Today, a host of new devices and gadgets that did not exist three decades ago—from personal computers and DVD players to smart phones and tablets—all require increasing supplies of electricity—what might be called “gadgiwatts.” Meeting future needs for electricity means facing challenging and sometimes wrenching decisions about the choice of fuel that will be required to keep the lights on and the power flowing.

Part IV tells the little-known story of how climate change, a subject of interest to a handful of scientists, became one of the dominating questions for the future. The study of climate began in the Alps in the 1770s out of sheer curiosity. In the nineteenth century, a few scientists began to think systematically about climate, but not because they were worried about global warming. Rather, they feared the return of an ice age. Only in the late 1950s and 1960s did a few researchers begin to calculate rising levels of carbon in the atmosphere and calibrate what that might mean for rising temperatures. The risk, they concluded, was not global cooling but global warming. But it was only in the twenty-first century that climate change as an issue started to have major effects on decisions by political leaders, CEOs, and investors—and even became a subject to be ruled upon by the U.S. Supreme Court.

Part V describes the new energies—the “rebirth of renewables”—and the evolution of technology. The history of the renewable industries is one of innovation, entrepreneurial daring, political battles, controversy, disappointment and despair, recovery and luck. They have become large global industries in themselves, but they are also reaching a testing point to demonstrate whether they can attain large-scale commerciality.

There is one key energy source that most people do not think of as an energy source. Sometimes it is called conservation; sometimes efficiency. It is hard to conceptualize and hard to mobilize and yet it can make the biggest contribution of all to the energy balance in the years immediately ahead.

The themes converge in Part VI on transportation and the automobile. It had seemed absolutely clear that the race for the mass-market automobile was decided almost exactly a century ago, with an overwhelming victory by the internal-combustion engine. But the return of the electric car—in this case fueled not only by its battery but also by government policies—is restarting the race. Will all-out electrification win this time? If the electric car proves itself competitive, or at least competitive in some circumstances, that outcome will reshape the energy world. But the electric car is not the only competitor. The race is also on to develop biofuels—to “grow” oil, rather than drill for it. All this sets up a very big question: Can the electric car or biofuels depose petroleum from its position as king of the realm of transportation?

We can be sure that, in the years ahead, new “surprises” will upset whatever is the current consensus, change perspectives, redirect both policy and investment, and affect international relations. These surprises may be shocks of one kind or another—from political upheavals, wars or terrorism, or abrupt changes in the economy. Or they could be the result of accidents or of nature’s fury. Or they could be the consequence of unanticipated technological breakthroughs that open up new opportunities.

But of one thing we can be pretty certain: The world’s appetite for energy in the years ahead will grow enormously. The absolute numbers are staggering. Whatever the mix in the years ahead, energy and its challenges will be defining for our future.




PROLOGUE

Iraqi troops and tanks had been massing ominously for several days on the border with Kuwait. But Saddam Hussein, Iraq’s dictator, assured various Middle Eastern leaders that they need not worry, that his intentions were peaceful, and that matters would get settled. “Nothing will happen,” he said to Jordan’s king. He told Egypt’s president that he had no intention of invading Kuwait. To the U.S. ambassador, summoned on short notice, he raged that Kuwait, along with the United Arab Emirates, was waging “economic warfare” against Iraq. They were producing too much oil and, thus, driving down the price of oil, said Hussein—the results for Iraq, he added, were unbearable, and Iraq would have to “respond.” The U.S. ambassador, citing Iraqi troop movements, asked “the simple question—what are your intentions?” Hussein said that he was pursuing a diplomatic resolution. The ambassador replied that the United States would “never excuse settlement of disputes by other than peaceful means.” At the end of the meeting, Saddam told the ambassador that she should go on vacation and not to worry.1

However, a week later, in the early morning hours of August 2, 1990, Iraqi forces moved across the border and proceeded, with great brutality, to seize control of Kuwait. The result would be the first crisis of the post–Cold War world. It would also open a new era for world oil supplies.

Iraq proffered many rationales for the invasion. Whatever the justifications, the objective was clear: Saddam Hussein intended to annex Kuwait and remove it from the map. An Iraq that subsumed Kuwait would rival Saudi Arabia as an oil power, with far-reaching impact for the rest of the world.



“NOT SO FAST”

On the morning of August 2, Washington, D.C., time, President George H. W. Bush met with his National Security Council in the Cabinet Room at the White House. The mood was grim. The peace and stability so many around the world had hoped for was now suddenly and unexpectedly threatened. Just nine months earlier, the Berlin Wall had fallen, signaling the end of the Cold War. The key nations still had their hands full trying to peacefully wind down that four-and-a-half-decade confrontation.

With the annexation of Kuwait, Iraq would be in a position to assert its sway over the Persian Gulf, which at the time held two thirds of the world’s oil reserves. Saddam already had the fourth-largest army, in number of soldiers, in the world. Now Iraq would also be an oil superpower. Saddam would use the combined oil reserves, and the revenues that would flow from them, to acquire formidable arsenals, including nuclear and chemical weapons; and, with this new strength, Iraq could project its influence and power far beyond the Persian Gulf. In short, with this invasion and annexation, Iraq could rewrite the calculations of world politics. Allowing that to happen would run counter to four decades of U.S. policy, going back to President Harry Truman, aimed at maintaining the security of the Persian Gulf.

The discussion in the Cabinet Room on August 2, perhaps reflecting the initial shock, was unformed and unfocused. Much of it seemed to turn toward various forms of economic sanctions, almost as though adjusting to a new reality. Or at least it seemed that way to some in the room, including President Bush himself, who was “appalled,” as he put it, at the “huge gap between those who saw what was happening as the major crisis of our time and those who treated it as the crisis du jour.”

“We will have to get used to a Kuwait-less world,” said one adviser, acknowledging what seemed to be a fait accompli.

Bush raised his hand.

“Not so fast,” he said.2



DESERT STORM

Thereafter unfolded an extraordinary enterprise in coalition building—with some 36 nations signing on, in the form of either troops or money, under the auspices of the United Nations. The coalition included Saudi Arabia, whose largest oil field was only 250 miles from its border with Kuwait and whose ruler, King Fahd, told Bush that Saddam was “conceited and crazy” and that “he is following Hitler in creating world problems.” It also included the Soviet Union, whose president, Mikhail Gorbachev, said something that would have been unthinkable only a couple of years earlier—that the Soviet Union would stand “shoulder to shoulder” with the United States in the crisis.3

Over the six months that followed, a coalition force steadily and methodically assembled in northern Saudi Arabia until it numbered almost a million strong. In the predawn hours of January 17, 1991, Operation Desert Storm commenced its first phase, with aerial bombardment of Iraqi military targets. On January 23, the Iraqis opened the valves on Kuwait’s Sea Island Oil Terminal, releasing upwards of six million barrels of oil into the Persian Gulf, the largest oil spill in history, in an effort to foil what they expected to be an offensive from the sea by U.S. Marines. A month later, on February 24, coalition forces swept north from Saudi Arabia into Kuwait and Iraq, throwing back the Iraqi army; within days they had liberated Kuwait City. The invasion from the sea turned out to be a feint. The actual ground war took no more than a hundred hours, and it ended with Iraqi forces in full retreat.

But if Hussein could not have Kuwait, he would try to destroy it. Hussein’s soldiers left Kuwait burning. Almost eight hundred oil wells were set aflame, with temperatures as high as three thousand degrees, creating a hellish mixture of fire and darkness and choking smoke and gross environmental damage. As much as six million barrels of oil a day were going up in flames—much more than Kuwait’s normal daily production and considerably more than Japan’s daily oil imports. The scale of this inferno was so much bigger than anything that even the most experienced oil-well fire-fighting firms had ever seen, and a host of new techniques had to be quickly developed. The last of the fires was put out in November 1991.

In the aftermath of the war, Saddam was boxed in; it seemed only a matter of time before the Iraqi dictator, weakened and humiliated, would be toppled by internal opponents.



A NEW AGE OF GLOBALIZATION

The outcome of the First Gulf War was a landmark for what was expected to be a more peaceful era—what, for a time, was called a new world order. The Soviet Union was no longer an adversary of the West. At the end of 1991, the Soviet Union disintegrated altogether. The talk was now of a new “unipolar world” in which the United States would be not only the “indispensable nation” but also the world’s only superpower.

A new age of globalization followed: economies became more integrated and nations, more interconnected. “Privatization” and “deregulation,” which had begun in the 1970s and gained momentum in the 1980s, became the watchwords around the world. Governments were progressively giving up the “commanding heights”—that is, control of the strategic sectors of their economies. Nations instead put increasing confidence in markets, private initiative, and global capital flows.

In 1991 India began the first phase of reforms that would unshackle its economy and eventually turn it into a high-growth nation and an increasingly important part of the global economy.

In the energy sectors of countries, as in so many other sectors, traditional government ministries were turned into state-owned companies, which in turn were partly or entirely privatized. Now many of these ministries-turned-companies worried as much about what pension funds and other shareholders thought as about the plans of government civil servants.

International barriers of all kinds came down. With the Iron Curtain gone, Europe was no longer divided between East and West. The European Community turned into a much more integrated European Union and established the principle of the euro as its currency. A series of major initiatives—notably, the North American Free Trade Agreement—promoted freer trade. Overall, global trade grew faster than the global economy itself. Developing nations morphed into emerging markets and became the fastest-growing countries. Their rising incomes meant growing demand for oil.

Technology also drove globalization—in particular, the rapid development of information technology, the rise of the Internet, and the dramatic fall in the costs of international communications. This was changing the way firms operated, and it was connecting people in ways that had been inconceivable just a decade earlier. The “global village,” a speculative concept in the 1960s, was now quickly becoming a reality. The oil and gas industry was caught up in these revolutions. Geopolitical change and greater confidence in markets opened new areas to investment and exploration. The industry expanded its capacity to find and produce resources in more challenging environments. It seemed now that an age of inexpensive oil and natural gas would extend much further into the future. That would be good news for energy supply but not such good news for higher-priced alternatives.



THE FADING OF RENEWABLES?

The energy crises of the 1970s had combined with rising environmental consciousness to give birth to a range of new energy options, known first as “alternative energy” and then, more lastingly, as “renewables.” They covered a wide range—wind, solar, biomass, geothermal, and others. What gave them a common definition was that they were based neither on fossil fuels nor on nuclear power.

They had emerged out of the tumult of the 1970s with a great deal of enthusiasm—“rays of hope” in a famous formulation. But over the 1980s, the hopes had been dulled by the realities of falling costs of conventional energy, their own challenging economics, technological immaturity, and disappointment in deployment. With moderate prices and the apparent restoration of energy stability in the early 1990s, the prospects for renewable energy became even more challenging.

Yet environmental consciousness was becoming more pervasive. Most environmental issues were, traditionally, local or regional. But there was growing attention to a new kind of environmental issue, a global issue: climate change and global warming. Attention was initially confined to a relatively small segment of people. That would change in due course, with profound implications for the energy industry—conventional, renewable, and alternatives.

In other ways, the combination of energy policies launched in the 1970s and the dynamics of the marketplace had worked. In the face of much skepticism, energy efficiency—conservation—had turned out to be a much more vigorous contributor to the energy mix than most had anticipated.



A STABLE MIDDLE EAST

Mideast politics, which so often bedeviled security of supply, was no longer a threat. In the decade that followed the Gulf crisis, it seemed that the Middle East was more stable and that oil crises and disruptions were things of the past. No longer was there a Soviet Union to meddle in regional politics, and the outcome of the Gulf crisis and the weight of the United States in world affairs looked like an almost sure guarantee of stability.

The Palestine Liberation Organization realized that it had driven itself into a dead end by supporting Saddam in the Gulf crisis and, in the process, alienating many of the Arab countries that were its financial benefactors. It quickly reoriented itself, and swift progress followed in the Israeli-Palestinian peace process. In Washington, D.C., in September 1993, Yasser Arafat, chairman of the PLO, and Israel’s prime minister, Yitzhak Rabin, signed the Oslo Accords, which laid out the route to a two-state solution to that long conflict. And then, standing in front of President Clinton with the White House as a backdrop, they did what would have seemed inconceivable three years earlier—shook hands. The following year, they shared the Nobel Peace Prize along with Israel’s foreign minister, Shimon Peres. All this was a positive and powerful indicator of the world that seemed to be ahead. It might not have happened had Saddam not gone to war.

As for Saddam Hussein himself, he no longer seemed to be going anywhere.



CONTAINMENT

In 1991 the coalition’s forces had stopped 90 miles short of Baghdad. The coalition had come together under the authority of the United Nations to eject Saddam from Kuwait; it had no mandate to remove Saddam and change the regime. Nor was there any desire to engage in the potentially bloody urban warfare that would be required for a final push. As it was, the television images of the destruction of the Iraqi army, and the backlash those images were engendering, were in themselves a further reason to call things to a halt—what has been dubbed the “CNN effect.” Beyond all that, it was widely assumed that aggrieved elements of the Iraqi military would do what was expected—launch a coup—and that Saddam’s days were numbered. But such was his ruthlessness and iron control that, contrary to expectations, he held tightly to power after the war.

Yet Saddam’s position was much reduced. For Iraq was now hemmed in by a program of inspections, military force, and sanctions that amounted to what has been called “classic containment,” evoking the policy that had checked Soviet expansion during the Cold War. In addition, some efforts were mounted over the next few years to support Saddam’s opponents in toppling him, but they all ended in failure. Under the administration of Bill Clinton, the containment policy became more explicit. It also became conjoined with what now was described as “dual containment”—of Iran along with Iraq.

In principle, U.N. weapons inspectors could range freely around Iraq, looking for the elements that could go into weapons of mass destruction—colloquially known as WMD. In practice, obstructions were constantly put in the inspectors’ way. There was only one moment of surprising cooperation: In 1995 the head of Iraq’s unconventional weapons program, who happened to be Saddam’s son-in-law, defected to Jordan. The regime panicked, fearing what he might tell. Trying to preempt any revelations, Baghdad suddenly released half a million documents (which had been hidden in a chicken coop) that detailed production of a variety of biological weapons. But after Saddam lured his son-in-law back to Iraq (in order to have him killed), obstruction once again returned as the norm.4

Still, the days of Saddam’s capacity to try to control world oil had passed. His continuing impact on oil came mainly in the form of his ability to manipulate prices at the margins. In the first few years after the Gulf War, with exports not permitted, petroleum output fell precipitously. In 1995 the United Nations established the Oil-for-Food Programme, which allowed Iraq to sell a defined amount of oil. Half of the revenues went for essentials, like medicine and food. Before Saddam seized power, Iraq had been an exporter of food to Europe and even shipped dates to the United States. But, under Saddam, agriculture had suffered, and oil exports provided the funding to import the food the country now required. The other half went to reparations and to fund the U.N. inspections. Thereafter Iraqi production recovered to something over two million barrels per day, with significant output smuggled into Jordan, Syria, and Iran. In addition, Saddam’s regime benefitted from billions of dollars of secret kickbacks from those who had been granted contracts to sell Iraqi oil, ranging from mysterious Russian middlemen to a Texas oil tycoon to officials from countries seen as friendly to Iraq.5

But the program always seemed at risk. Would Saddam continue to cooperate with the U.N. program this time? Or would he break off cooperation, reducing or cutting off altogether Iraqi exports—thus abruptly sending the price up? The uncertainty created considerable price volatility.

By the end of the 1990s, the U.S. policy of containment was clearly fraying. Sentiment was growing in the Middle East and Europe that the sanctions were hurting not Saddam and his clique and the Republican Guard that kept them in power, but the general Iraqi population. In 1998 Saddam permanently expelled the U.N. weapons inspectors. A 1998 U.S. National Intelligence Estimate concluded that Saddam’s ambitions for weapons of mass destruction were unchecked.6

Yet Saddam had been contained, and it appeared that he would never again be able to renew his bid to control the Persian Gulf. Next door in Iran, in 1997, Mohammad Khatami, regarded as a reformer and a relative moderate, was elected president, and there seemed a possibility to reduce the mutual hostility that had so dominated relations between Washington and Tehran. With all these changes, Middle East petroleum now appeared much more secure—and that meant that the world’s oil supply was more secure. Given this stability, it was thought that the price would hover around $20 or so a barrel. For American motorists, that meant relatively low gasoline prices, which they assumed were part of the natural order.



NEW HORIZONS AND THE “QUIET REVOLUTION”

At the same time, technology was increasing the security of oil supplies in a different way—by expanding the range of the drill bit and increasing recoverable reserves. The petroleum industry was going through a period of innovation, capitalizing on the advances in communications, computers, and information technology to find resources and develop them, whether on land or farther and farther out into the sea.

So often, over the history of the oil industry, it is said that technology has gone about as far as it can and that the “end of the road” for the oil industry is in sight. And then, new innovations dramatically expand capabilities. This pattern would be repeated again and again.

The first advance came from the rapid gains in microprocessing, which made possible the analysis of vastly more data, enabling geophysicists to sharpen their interpretation of underground structures and thus improve exploration success. Enhanced computing power meant that the seismic mapping of underground structures—the strata, the faults, the cap rocks, the traps—could now be done in three dimensions, rather than two. This 3-D seismic mapping, though far from infallible, enabled explorationists to much improve their understanding of the geology deep underground.

The second advance was the advent of horizontal drilling. Instead of the traditional vertical well that went straight down, wells could now be drilled vertically for the first few thousand feet and then driven at an angle or even sideways with drilling progress tightly controlled and measured every few feet with very sophisticated tools. This meant that much more of the reservoir could be accessed, thus increasing production.

The third breakthrough was the development of software and computer visualization that was becoming standard throughout the construction and engineering industries. Applied to the oil industry, this CAD/CAM (computer-aided design, computer-aided manufacturing) technology enabled a billion-dollar offshore production platform to be designed down to the tiniest detail on a computer screen, and its resilience and efficiency tested in multiple ways, even before welding began on the first piece of steel.

As the 1990s progressed, the spread of information and communications technology and the extraordinary fall in communication costs meant that geoscientists could work as virtual teams in different parts of the world. Experience and learning from a field in one part of the world could instantly be shared with those trying to solve similar problems in analogous fields in other parts of the world. As a result, the CEO of one company said at the time with only some exaggeration, scientists and engineers “would go up the learning curve only once.”

These and other technological advances meant that companies could do things that had only recently been unattainable—whether in terms of identifying new prospects, tackling fields that could not be developed before, taking on much more complex projects, recovering more oil, or opening up entirely new production provinces.

Altogether, technology widened the horizons of world oil, bringing on large amounts of new supplies that supported economic growth and expanded mobility around the world. Billions of barrels of oil that could not have been accessed or produced a decade earlier were now within reach. All that proved to be “just in time” technological progress. For the world appeared to be on a fast track in terms of economic growth—and, thus, in its need for more oil.



The world was also changing fast in terms of geopolitics. Countries that had been closed or restrictive toward investment by international companies were now opening up, inviting the companies to bring their skills and technology along with their money. The seemingly immutable structure of global confrontation had suddenly buckled.

In particular, changes were unfolding in the successor states to the Soviet Union—Russia and the newly independent countries around the Caspian Sea—that would integrate the region with global markets. It was as if the twentieth century’s end was being reconnected to the century’s beginning. The effect would be to broaden the foundations of the world petroleum supply. As an article in Foreign Affairs put it in 1993, “Oil is truly a global business for the first time since the barricades went up with the Bolshevik Revolution.”7

This observation had particular significance for Russia, the country that had been the home of the Bolshevik Revolution and that now rivaled Saudi Arabia in its capacity to produce oil.


PART ONE

The New World of Oil


1

RUSSIA RETURNS

On the night of December 25, 1991, Soviet president Mikhail Gorbachev went on national television to make a startling announcement—one that would have been almost unimaginable even a year or two earlier: “I hereby discontinue my activities at the post of the President of the Union of Soviet Socialist Republics.” And, he added, the Soviet Union would shortly cease to exist.

“We have a lot of everything—land, oil and gas and other natural resources—and there was talent and intellect in abundance,” he continued. “However, we were living much worse than people in the industrialized countries were living and we were increasingly lagging behind them.” He had tried to implement reforms but he had run out of time. A few months earlier, diehard communists had tried to stage a coup but failed. The coup had, however, set in motion the final disintegration. “The old system fell apart even before the new system began to work,” he said.

“Of course,” he added, “there were mistakes made that could have been avoided, and many of the things that we did could have been done better.” But he would not give up hope. “Some day our common efforts will bear fruit and our nations will live in a prosperous, democratic society.” He concluded simply, “I wish everyone all the best.”1

With that, he faded out into the ether and uncertainty of the night.

His whole speech had taken just twelve minutes. That was it. After seven decades, communism was finished in the land in which it had been born.

Six days later, on December 31, the USSR, the Union of Soviet Socialist Republics, formally ceased to exist. Mikhail Gorbachev, the last president of the Soviet Union, handed over the “football”—the suitcase with the codes to activate the Soviet nuclear arsenal—to Boris Yeltsin, the first president of the Russian Federation. There was no ringing of bells, no honking of horns, to mark this great transition. Just a stunned and muted—and disbelieving—response. The Soviet Union, a global superpower, was gone. The successors would be fifteen states, ranging in size from the huge Russian Federation to tiny Estonia. Russia was, by far, the first among equals: it was the legatee of the old Soviet Union; it inherited not only the nuclear codes, but the ministries and the debts of the USSR. What had been the closed Soviet Union was now, to one degree or another, open to the world. That, among other things, would redraw the map of world oil.

Among the tens of millions who had watched Gorbachev’s television farewell on December 25 was Valery Graifer. To Graifer, the collapse of the Soviet Union was nothing less than “a catastrophe, a real catastrophe.” For half a decade, he had been at the very center of the Soviet oil and gas industry. He had led the giant West Siberia operation, the last great industrial achievement of the Soviet system. Graifer had been sent there in the mid-1980s, when production had begun faltering, to restore output and push it higher. Under him, West Siberia had reached 8 million barrels per day—almost rivaling Saudi Arabia’s total output. The scale of the enterprise was enormous: some 450,000 people ultimately reported up to him. And yet West Siberia was part of an even bigger Soviet industry. “It was one big oil family throughout all the republics of the Soviet Union,” he later said. “If anyone had told me that this family was about to collapse, I would have laughed.” But the shock of the collapse wore off, and within a year he had launched a technology company to serve whatever would be the new oil industry of independent Russia. “We had a tough time,” he said. “But I saw that life goes on.”2



“THINGS ARE BAD WITH BREAD”

One of the lasting ironies of the Soviet Union was that while the communist system was almost synonymous with force-paced industrialization, its economy in its final decades was so heavily dependent on vast natural resources—oil and gas in particular.

The economic system that Joseph Stalin had imposed on the Soviet Union was grounded in central planning, five-year plans, and self-sufficiency—what Stalin called “socialism in one country.” The USSR was largely shut off from the world economy. It was only in the 1960s that the Soviet Union reemerged on the world market as a significant exporter of oil and then, in the 1970s, of natural gas. “Crude oil along with other natural resources were,” as one Russian oil leader later said, “nearly the single existing link of the Soviet Union to the world” for “earning the hard currency so desperately needed by this largely isolated country.”3

By the end of the 1960s, the Soviet economy was showing signs of decay and incapacity to maintain economic growth. But, as a significant oil exporter, it received a huge windfall from the 1973 October War and the Arab oil embargo: the quadrupling of oil prices. The economy further benefitted at the end of the 1970s, when oil prices doubled in response to the Iranian Revolution. This surge in oil revenues helped keep the enfeebled Soviet economy going for another decade, enabling the country to finance its superpower military status and meet other urgent needs.



At the top of the list of these needs were the food imports required, given the country’s endemic agricultural crisis, to avert acute shortages, even famine, and social instability. Sometimes the threat of food shortages was so imminent that Soviet premier Alexei Kosygin would call the head of oil and gas production and tell him, “Things are bad with bread. Give me three million tons [of oil] over the plan.”

Economist Yegor Gaidar, acting Russian prime minister in 1992, summed up the impact of these oil price increases: “The hard currency from oil exports stopped the growing food supply crisis, increased the import of equipment and consumer goods, ensured a financial basis for the arms race and the achievement of nuclear parity with the United States and permitted the realization of such risky foreign policy actions as the war in Afghanistan.”4

The increase in prices also allowed the Soviet Union to go on without reforming its economy or altering its foreign policy. Trapped by its own inertia, the Soviet leadership failed to give serious consideration to the thought that oil prices might fall someday, let alone prepare for such an eventuality.



“DEAR JOHN—HELP!”

Mikhail Gorbachev came to power in 1985 determined to modernize both the economy and the political system without overturning either. “We knew what kind of country we had,” he would say. “It was the most militarized, the most centralized, the most rigidly disciplined; it was stuffed with nuclear weapons and other weapons.”

An issue that infuriated him when he came into office—women’s pantyhose—symbolized to him what was so wrong. “We were planning to create a commission headed by the secretary of the Central Committee . . . to solve the problem of women’s pantyhose,” he said. “Imagine a country that flies into space, launches Sputniks, creates such a defense system, and it can’t resolve the problem of women’s pantyhose. There’s no toothpaste, no soap powder, not the basic necessities of life. It was incredible and humiliating to work in such a government.”

But Gorbachev had very bad luck in timing. In 1986, one year after his ascension, oversupply and reduced demand on the world petroleum market triggered a huge collapse in the oil price. This drastically reduced the hard currency earnings that the country needed to pay for imports.

Even though the Soviet oil industry—which was now centered in West Siberia—continued to push up output, it was not enough to bail out the sinking economy. At the same time, Gorbachev was relaxing the grasp of communist repression on the society.5

While the collapse in oil prices was the “final blow,” as Yegor Gaidar has written, the failure was of the system itself. “The collapse of the Soviet system,” he said, “had been preordained by the fundamental characteristics of the Soviet economic and political system,” which “did not permit the country to adapt to the challenges of world development in the late twentieth century.” High oil prices, he added, were not a dependable foundation for preserving the last empire.



By the end of the 1980s and the beginning of the 1990s, the word “crisis” in government and party documents was being replaced by “acute crisis,” and then by “catastrophe.” Food shortages were severe. At one point, the city of St. Petersburg nearly ran out of dairy products for children.

In November 1991, Gorbachev asked one of his aides to send British prime minister John Major, at that time head of the G7 group of industrial nations, a three-word message—“Dear John, Help!”6

It was just a month later that Gorbachev went on television to announce the dissolution of the Soviet Union.



A NEW RUSSIA: “NO ONE’S AT THE CONTROLS”

From January 1, 1992, Russia was an independent state, a huge one, traversing eleven time zones. The centrally planned socialist economy of the Soviet Union, where virtually every action in the entire economy was the result of bureaucratic decisions, had disintegrated, leaving economic chaos and uncertainty. There was no rule of commercial law, no basis for contracts, no established channels or rules for trade. Barter became the order of the day, not just for newly emerging traders and merchants out on the streets or working out of their apartments, but also for factories, which traded goods and output back and forth as though they were currency. It was also a free-for-all, a mad scramble, as most of the commercial assets of the state and of the narod—the Soviet people—were now in play. It was a frightening time for the populace and a time of great hardship: their pensions and salaries, if paid at all, lost their value; and the low, but guaranteed, level of economic security on which they counted was disappearing before their eyes.

It was also frightening for the young reformers who came to power under Russian president Boris Yeltsin. “A nuclear superpower was in anarchy,” said Gaidar, who was Yeltsin’s first finance minister. “We had no money, no gold, and no grain to last through the next harvest, and there was no way to generate a solution. It was like travelling in a jet and you go into the cockpit and you discover that there’s no one at the controls.” The reformers couldn’t even get into government computers because the passwords had been lost during the collapse.

There were two urgent needs in those days. One was to stabilize the economy, renew the flow of goods and services, keep people fed and warm, and establish foundations for trade and a market economy. The other was to figure out what to do with all the factories and enterprises and resources—the means of production that the government owned—and somehow move them into some other form of ownership—private ownership, which was more productive and appropriate to a market economy. Since the state owned most everything, it meant that all the assets of the Soviet Union were up for grabs.

And they were being grabbed. As President Yeltsin put it, the economic assets of the state were being privatized “wildly, spontaneously, and often on a criminal basis.” He and his team of reformers were determined to regain control, to break up whatever remained from the command-and-control economy, and to replace it with a new economic system based upon private property. The objectives of privatization were not only economic; they also wanted to forestall any return to the communist past by removing assets from state control as quickly as possible. To make matters even more difficult, this economic upheaval took place against a backdrop of political turmoil: a standoff between the Yeltsin administration and the Supreme Soviet, or parliament, including a violent “siege” of the parliament building; the first Chechnya war; and a 1996 presidential election that, until late in the campaign, seemed likely to end with a victory by resurgent communists.

The Soviet system had left many valuable legacies—a huge network of large industrial enterprises (though stranded in the 1960s in terms of technology); a vast military machine; and an extraordinary reservoir of scientific, mathematical, and technical talent, although disconnected from a commercial economy. The highly capable oil industry was burdened with an aging infrastructure. Below ground lay all the enormous riches in the form of petroleum and other raw materials that Gorbachev had cited in his farewell address.7



RECONSTRUCTING THE OIL INDUSTRY

These natural resources—particularly oil and natural gas—were as critical to the new Russian state as they had been to the former Soviet Union. By the middle 1990s, oil export revenues accounted for as much as two thirds of the Russian government’s hard currency earnings. What happened to these revenues “dominated Russian politics and economic policy throughout the 1990s and into the 2000s.” Yet the oil sector was swept up in the same anarchy as the rest of the economy. Workers, who were not being paid, went on strike, shutting down the oil fields. Production and supply across the country were disrupted. Oil was being commandeered or stolen and sold for hard currency in the West. No one even knew who really owned the oil. Individual production organizations in various parts of West Siberia and elsewhere were busily declaring themselves independent and trying to go into business for themselves. The industry was suddenly being run by “nearly 2000 uncoordinated associations, enterprises and organizations belonging to the former Soviet industry ministry.” Amid such disruption and starved for investment, Russian oil output started to slip, and then collapse. In little more than half a decade, Russian production plummeted by almost 50 percent—an astonishing loss of more than 5 million barrels a day.

Privatization here, too, would be the answer. But how to do it? The oil industry was structured to meet the needs of a centrally planned system. It was organized horizontally, with different ministries—oil, refining and petrochemicals, and foreign trade—each controlling its segments of the industry. The resources industry was as important to the new state as to the old and had to be handled differently from the other privatizations.

One person with clearly thought-through ideas about what to do was Vagit Alekperov. Born in Baku, he had worked in the offshore Azerbaijani oil industry until transferring at age twenty-nine to the new heartland of Soviet oil, West Siberia. There he came to the attention of Valery Graifer, then leading West Siberia to its maximum performance. Recognizing Alekperov’s capabilities, Graifer promoted him to run one of the most important frontier regions in West Siberia. In 1990, Alekperov leapfrogged to Moscow, where he became deputy oil minister.

On trips to the West, Alekperov visited a number of petroleum companies. He saw a dramatically different way of operating an oil business. “It was a revelation,” he said. “Here was a type of organization that was flexible and capable, a company that was tackling all the issues at the same time—exploration, production, and engineering—and everybody pursuing the common goal, and not each branch operating separately.” He came back to Moscow convinced that the typical organization found in the rest of the world—vertically integrated companies with exploration and production, refining and marketing all in one company—was the way to organize a modern oil industry. Prior to the collapse of the Soviet Union, his efforts to promote a vertically integrated state-owned oil company were rebuffed. Opponents accused him of “destroying the oil sector.” He tried again after Russia became an independent state. For to stay with the existing setup, he said, would result in chaos.8

In November 1992, President Yeltsin adopted this approach in Decree 1403 on privatization in the oil industry. The new law provided for three vertically integrated oil companies—Lukoil, Yukos, and Surgut. Each would combine upstream oil production areas with refining and marketing systems. They would become some of the largest companies in the world. The state would retain substantial ownership during a three-year transition period, while the new companies tried to assert control over now semi-independent individual production groups and refineries; quell rebellious subsidiaries; and capture control over oil sales, oil exports, and the hard currency that came from these transactions. The controlling shares for other companies in the oil industry were also parked for three years in what was to be a temporary state company, Rosneft, buying time for decisions about their future.

This restructuring would have been hard to do under any circumstances. It was very hard to do in the early and mid-1990s, when the state was very weak and law and order was in short supply. There was violence at every level, as Russian mafiyas—gangs, scarily tattooed veterans of prison camps, and petty criminals—ran protection rackets, stole crude oil and refined products, and sought to steal assets from local distribution terminals. As the gangs battled for control, a contract, all too often, referred not to a legal agreement but to a hired killing. In the oil towns, the competing gangs tried to take over whole swaths of the local economy—from the outdoor markets to the hotels and even the train stations. The incentives were clear: oil was wealth, and getting control of some part of the business was the way to quickly amass wealth on a scale that could not even have been dreamed about in Soviet days, just a few years earlier.9

But eventually the state reasserted its police powers, and the newly established oil companies built up their own security forces, often with experienced veterans of the KGB, and the bloody tide of violence and gang wars began to recede.



LUKOIL AND SURGUT

Meanwhile, following on Yeltsin’s privatization decree, the Russian oil majors were beginning to take shape.

The most visible was Lukoil. Vagit Alekperov, equipped with a clear vision of an integrated oil company, set about building it as quickly as possible. The first thing was to pull together a host of disparate oil production organizations and refineries that had heretofore had no connection. He barnstormed around the country trying to persuade the managements of each organization to join this unfamiliar new entity called Lukoil. In order for Lukoil to come into existence, every single entity had to sign on. “The hardest thing was to convince the managers to unite their interests,” said Alekperov. “There was chaos in the country, and we all had to survive, we had to pay wages, and keep the entities together. Without uniting, we would not be able to survive.” They heard the message, all signed on, and Lukoil became a real company.

Alekperov recognized the heavy burdens that the new Russian companies carried—what he called their “Soviet legacy” of “aged equipment along with obsolete manpower and production management systems.” Lukoil had to target “the best international practices.” From the beginning, Alekperov put in place international standards and used international law firms, accountants, and bankers. In 1995 the chief financial officer of the American oil company ARCO came across an article about Lukoil in the Economist magazine. He found it intriguing enough that he followed up, and ARCO subsequently bought a share of Lukoil. From the early days, Lukoil also pursued an international strategy, first in the other new nations of the former Soviet Union and then in other parts of the world.

If Lukoil was the most international of the new Russian majors, Surgut was the most decidedly Russian. Its CEO, Vladimir Bogdanov, was called the “hermit oil man” by some. Born in a tiny Siberian village, he had made his name as a driller in Tyumen, and the enterprise he managed there became the basis of what emerged as Surgutneftegaz, better known by its short name, Surgut. He never moved to Moscow, instead keeping Surgut’s headquarters in the city of Surgut. As he once explained, he liked to walk to work.10

Both Lukoil and Surgut were run by people who would have qualified as “oil generals” under the Soviet system.



YUKOS: THE SALE OF THE CENTURY

Very different was a company called Yukos. It was one of the first oil companies to be run by one of the new oligarchs who had emerged not from the oil industry but out of the chaotic barter economy.

Mikhail Khodorkovsky had started off with orthodox Soviet ambitions: as a child, he announced that his objective was to rise to the highest levels of the Soviet industrial system and achieve the vaunted position of factory director. Later, while a student at the Mendeleev Institute for Chemistry, he jumped into business as a leader of the school’s Komsomol, the communist youth organization, turning it into a commercial organization. He then moved into trading in imported computers and software and then, in the late 1980s, set up a bank called Menatep, which would soon be regarded as serious enough to be entrusted with government accounts. It also provided finance to one of the new oil companies, Yukos.

Khodorkovsky soon concluded that oil was an even better business than banking. The timing was right. By 1995 the Russian government was desperately short of funds, and some of the new businessmen and the Yeltsin government came up with a solution that went by the name of “loans-for-shares.” Businessmen would loan the Russian government money, taking highly discounted shares in petroleum and other companies as collateral. When the government, as anticipated, defaulted on the loans, the shares would end up as the property of the lenders. They would thus control these new companies. The government meanwhile got the short-term funding it needed to keep afloat prior to the 1996 presidential election. It was certainly an unusual way to privatize assets, and loans-for-shares was immortalized as the “sale of the century.” Khodorkovsky lent the Russian government $309 million and won control of Yukos’s shares.11

Khodorkovsky set about task number one: gaining control of the flows of oil and money, which seemed to be going in all directions. He had never attended the Gubkin Institute or any of the other Soviet oil academies, and he had no particular attachment to the Soviet approach to field development. And so he turned to Western oil field service companies to come in and apply Western development techniques, rather than Soviet techniques, to the oil fields. This would lead to dramatic improvements in output. (It would also, in later years, come back to haunt him, during his confrontation with the Russian government, with charges that he had violated recognized and sound “Russian” oil field production practices.) As his wealth and influence magnified, so did his ambitions.

These companies—Lukoil, Surgut, and Yukos—were the three majors. They were not alone by any means. There remained the state company, Rosneft; six “mini-majors”; and a number of other companies, including those owned or sponsored by oil-rich regional governments.

One of the mini-majors was TNK. A consortium of owners, the AAR group, came together to buy the company in 1997. They would become among the country’s most prominent oligarchs. Three of them came from the Alfa Bank. Mikhail Fridman was a graduate of the Institute of Steel and Alloys. He had worked for a couple of years in a factory, but when it became possible to go into business in the late 1980s, he jumped in, starting a dizzying host of enterprises, ranging from a photo co-op to window washing. Despite the chaos and being told that his businesses could not succeed, Fridman later said, “we did have an internal conviction.” His partner German Khan, another graduate of the Institute of Steel and Alloys, ran what became the oil trading part of their new enterprise and would remain the most focused on the oil business itself. The money they made from trading commodities enabled them to set up the Alfa Bank. A third partner was Peter Aven, who had already established his reputation as an academic mathematician and had been minister of foreign trade in the early 1990s.

The other members of the consortium included Viktor Vekselberg, who trained in transportation engineering, and Len Blavatnik, who had emigrated to the United States at age twenty-one and worked his way through Harvard Business School after a stint as a computer programmer. Blavatnik made his first trip back to the Soviet Union in 1988. It was a different country. He returned in 1991—now it was Russia—and became serious about investing in a newly independent Russia, which led him to join up with the others in TNK. For its part, TNK controlled half the Samotlor oil field in western Siberia. It was a most desirable jewel—among the half dozen largest oil fields in the world.

There was another prominent company—Sibneft, as in Siberian Oil. This was the most classic of the loans-for-shares deals. Roman Abramovich, who had been trading everything from oil to children’s toys, teamed up with Boris Berezovsky and lent $100 million to the impoverished Russian government for half the company. When, as anticipated, the government failed to repay the loans, these oligarchs had control. Berezovsky went into political exile after falling out with President Vladimir Putin. Abramovich followed a different path. He took on the additional duties of governor of an impoverished region in the Russian Far East. Abramovich eventually sold Sibneft to the Russian gas giant Gazprom and moved to England, where he was said to be the second-richest person in the country, exceeded only by the Queen herself.12

Overall, by 1998, within six years of the collapse of the Soviet Union, the Russian oil industry had gone from a system run by a series of ministries and subordinated to central planning to a system of large vertically integrated companies, organized, at least in rough outline, similarly to the traditional companies in the West. During these years, they all operated largely autonomously from the state. Eventually the Russian Federation would have five large energy companies, each with oil reserves comparable to those of the largest Western majors.

The development of these companies was more than just a wholesale reconstruction of the Russian oil industry. It also brought visible changes in the larger cities. In Soviet times, those few lucky enough to own automobiles had to search out the rare and hard-to-find dingy service stations on the outskirts of the city. But now new, modern service stations were springing up at intersections and alongside the highways, bedecked with shiny corporate logos—Lukoil, Yukos, Surgut, TNK, and a number of others. The stations came equipped not only with high-octane gasoline of dependable quality, but also, in many cases, with things that people never expected to see, like convenience stores and, even more remarkable, automatic car washes. All of that would have been unimaginable in Soviet times.



OPENING UP

How did this new Russian oil industry look to the rest of the world? In 1992 the head of one of the world’s largest state-owned oil companies was asked what he thought about Russia and all the changes that were happening there. His answer was very simple. “When I think of Russia,” he said without a pause, “I think of it as a competitor.”

Others saw opportunity. For many decades after the 1917 Bolshevik Revolution, the Soviet Union had been closed off, an almost forbidden place, another world. The Soviet oil industry operated largely in isolation, with little of the flow of technology and equipment that was common in the rest of the world.

In the late Gorbachev years, at the end of the 1980s, the Soviet Union started to open the doors to joint ventures with Western companies. The objective was to bring in the technology it needed to improve the performance of the Soviet industry. Then came the collapse of the Soviet Union. This provided a vast new prospect to Western companies: the potential to participate in a region rich with hydrocarbons, perhaps comparable to the Middle East in the scale of resources, and world-class opportunities. They dispatched teams to research these opportunities.

Some concluded that, whatever the “Russian risk,” they simply could not afford not to be in Russia. “When you looked at the opportunity, you became enthusiastic,” recalled Archie Dunham, then CEO of the U.S. major Conoco. “It was just a huge opportunity.” But, as time went on, the Western companies learned how difficult it was to work in the Russian Federation. As Dunham added, “You had a rule of law problem, you had a tax problem, and you had a logistical problem.”

The uncertain political environment, the shifting cast of characters, the corruption, the security risks, the opaque and constantly changing rules, the uncertainty as to “who was who” and “who was behind whom”—all of these made others more reluctant. “We had opportunities all over the world,” said Lucio Noto, CEO of Mobil. “Once you sink a couple of billion dollars into the ground, you can’t move it.”13



When the Western companies looked across the panorama—at the operating conditions, the equipment, and the fields—they saw an industry that was suffering from decades of isolation and that lacked the most up-to-date equipment, advanced skills, and sufficient computing power. They recognized that Russian geoscientists were at the forefront of their disciplines, but that, in Russia, “theory” was quite separated from “practice.” They also saw the dire situation in the Russian oil fields and the desperate need for investment. The Westerners were convinced that they would be welcome because they brought technology, capital, expertise, and management skills. That is not how Russian oil people looked at it, however. They took great pride in what the Soviet industry had accomplished, they were confident in their own skills, and they enormously resented the implication that they were not up to world standards. The Russian industry, in their view, did not need outsiders telling them what to do. Nor did it need substantial direct foreign participation in order to transfer technology. If the Russians needed technology, they could buy it on the world market from service companies.

Neither the government nor the emerging Russian business and political classes saw any reason to give up control over any substantial resources to Western companies. They may not have agreed among themselves as to who would ultimately own those resources and control the wealth so generated, but the one thing on which they could all agree was that it should not be the foreigners.

The major Western companies could not operate on any scale (with one major exception) in the core; that is, the traditional areas of current large production, the “brown fields” of West Siberia. Rather, their opportunities lay in the areas where there was little development, where major technical challenges had to be overcome, and where the Western companies thus had a competitive advantage in terms of technology and the execution of complex projects.



THE PERIPHERIES

In partnership with Lukoil, Conoco took on a project in the northern Arctic region. Conoco brought to Russia the know-how it had learned in Alaska, where new technologies had been developed in order to minimize the footprint in Arctic regions. Even so, the Polar Lights project was constantly bedeviled by an endless profusion of new tax charges and new regulations. The local regional boss, a former snowmobile mechanic, was known to demand a payment every time a new permit came up. Finally, Conoco had to tell Moscow that it was going to pull out altogether if the “extra-contractual” demands did not cease.14

Both Exxon and Shell went to Sakhalin, the six-hundred-mile-long island off the coast of Russia’s far east, north of Japan, where there was some minor onshore production. While the technical challenges were immense there, so was the apparent potential, especially offshore. Though the region was almost totally devoid of the infrastructure that the planned megaprojects would need, it had other important advantages. Sakhalin was as far from Moscow as one could get and still be in Russia. It was also on the open sea, so that output could be exported directly to world markets.

Exxon became the operator for a project that also included the Russian state company Rosneft, Japanese companies, and India’s national oil company. Within ExxonMobil, some considered this the most complex project that the company had ever undertaken up to that time—working in a remote, undeveloped subarctic area, where icebergs are a chronic problem, winds are hurricane strength for several months a year, and temperatures can drop to −40° or even lower. The conditions were so difficult, in fact, that work could only be done for five months a year. In the middle of development, as new complexities emerged, the engineers concluded that they needed to go back and redesign the whole project. The project, initially scoped out in the early 1990s, took a decade before it produced “first oil” and a decade and a half before it reached full production—all this at a cost approaching $7 billion.15

Shell’s Sakhalin-2 also began in the early 1990s with the same environmental challenges. It would prove to be the largest combined oil and gas project in the world, not just a megaproject, but equivalent to five world-class megaprojects in scale and complexity. Shell faced the additional challenges of building two five-hundred-mile pipelines—one oil and one gas—that had to cross more than a thousand rivers and streams, through terrain frozen in the winter and soggy in the summer. To get the oil and gas to export facilities ended up costing more than $20 billion.



IN THE HEARTLAND

Only one Western company managed to gain a significant position in the heartland, West Siberia. Sidanco was a second-tier Russian major that had been bought by a group of oligarchs in one of the loans-for-shares deals in 1995. It had one jewel: partial ownership (along with TNK) of Samotlor, the largest oil field in West Siberia. BP bought 10 percent of Sidanco for $571 million in 1997. Some members of BP’s board thought it was a harebrained scheme; it was hard to make the case that Russia was a country with rule of law. But BP chief executive John Browne argued it was the only obvious way to get into West Siberia, and Russia was central to BP’s overall global strategy. Nonetheless, he added, “we should consider it an outright gamble. We could lose it all.”16

It soon appeared that Browne’s caveat was even more warranted than he might have anticipated. For strange things began to happen. Under the guise of a newly approved Russian bankruptcy law, subsidiaries of Sidanco kept disappearing in a series of bankruptcy proceedings in various out-of-the-way Siberian courts. It became apparent that these were manufactured bankruptcies. The “creditors” were proving very adept at taking advantage of provisions in Russia’s new bankruptcy law to take ownership of the subsidiaries. It looked as though Sidanco might end up a shell, and BP with little or nothing to show for its $571 million.

In due course it emerged that what was going on was a struggle between two groups of oligarchs who had jointly participated in the original loans-for-shares acquisition of Sidanco and then had a bitter falling-out. The AAR group believed that its partner, Interros, had tricked it into selling out at a greatly discounted price prior to the BP deal. And now AAR wanted back in. BP was really a bystander, but its prospects for protecting its position in Russia did not look at all good. Outside Russia was a different matter. AAR also owned TNK. At this point, TNK had very few financial resources of its own but needed considerable investment to maintain and develop its share of Samotlor. So it was turning to Western credit markets to finance its activities. But then Western credit lines, on which TNK depended, were one after another shutting down. TNK could certainly prevail within Russia, but BP held high cards and influence outside Russia. That was sufficient to force the parties to the negotiating table: the dissident oligarchs and their company TNK gained a major share of Sidanco. Yet BP had preserved its role as the only Western company to have found a way into a significant position in the heartland of Russian oil—in West Siberia.

By this time, politics in Russia had changed, and so had the position of the Russian government.



“A GREAT ECONOMIC POWER”

With the end of the Cold War, Vladimir Putin, who had been a KGB officer stationed in Dresden in East Germany, returned to his hometown of St. Petersburg and joined the city government. When the reformist mayor for whom he worked as deputy mayor was defeated, Putin was without a job. Then his country house burned down. He enrolled to do a doctorate at the St. Petersburg Mining Institute. His studies there would help shape his view of Russia’s future.

In 1999, Putin published an article in the institute’s journal on “Mineral Natural Resources” that argued that Russia’s oil and gas resources were key to economic recovery and to the “entry of Russia into the world economy” and for making Russia “a great economic power.” Given their central strategic importance, these resources had to be, ultimately, under the aegis, if not direct control, of the state.

By the time the article was in print, Putin himself was already in Moscow, rapidly ascending in a series of jobs—including head of the FSB, successor to the KGB, and then prime minister. On the last day of December 1999 Boris Yeltsin abruptly resigned and Vladimir Putin, without a job just three years earlier, became Russia’s acting president.

In July 2000, two months after his official election, Putin met in the Kremlin with some of the rich and powerful businessmen known by then as oligarchs. He very clearly laid down the new ground rules. They could retain their assets, but they were not to cross the line to try to become kingmakers or in other ways control political outcomes. Two of the oligarchs who did not listen closely were soon in exile.



TNK-BP “50/50”

Once its deal with TNK had been concluded, BP began looking at the possibility of a merger of interests. Given their recent struggle over Sidanco, there was wariness on both sides. After intense negotiations, the two groups agreed to combine their oil assets in Russia with 50/50 ownership of the new firm, TNK-BP. BP wanted 51 percent, but this was never going to be possible. As John Browne later said, “We could not have it.” On the other hand, it could not go ahead in a minority position of 49 percent. The result was equal ownership. President Putin gave his approval, though with a word of advice. “It’s up to you,” he said to Browne. But he added, “An equal split never works.” The deal went forward. At a ceremony in Lancaster House in London in 2003, Browne and Fridman signed documents for the new company, with Vladimir Putin and British Prime Minister Tony Blair standing behind them, overseeing the signatures. The new TNK-BP represented the largest direct foreign investment in Russia. At the same time it was a Russian company. The new combination modernized the oil fields and increased production rapidly. It also increased BP’s total reserves by a third, and it pushed BP ahead of Shell to be the second-largest oil company, after ExxonMobil. But a few years later, bearing out Putin’s adage, a fierce battle erupted over control and over exactly what 50/50 meant. Eventually, after much tension, the two sides came to a new compromise that modified the governance, shifting the balance toward the Russian partners while preserving BP’s position. Mikhail Fridman became the new CEO.17



YUKOS

By the time of Putin’s election in 2000, Mikhail Khodorkovsky of Yukos was already on his way to becoming the richest man in Russia. He had a reputation as an aggressive and ruthless businessman, but with the beginning of the new century he seemed to be remaking himself. He would compress three generations—ruthless robber baron, modernizing businessman, and philanthropist—into one. He brought in Western technology to transform Yukos into a far more efficient company. By importing Western-style corporate governance and listing his company on Western exchanges, he could greatly increase the valuation of Yukos and thus multiply his wealth several times over. Through his Open Russia Foundation, he became the biggest philanthropist in Russia, supporting civic and human rights organizations.

His spending on politics was also well known, indeed almost legendary in its extent, most notably in the money spent to ensure that deputies in the Duma voted exactly the way he wanted on tax legislation in May 2003. He seemed to be pursuing his own foreign policy. He negotiated directly with China on building a pipeline, bypassing the Kremlin on a matter of great strategic importance, one on which Putin had very different views. He was moving fast to acquire Sibneft, one of the other new Russian oil majors, which would make Yukos possibly the largest oil company in the world. And he was in talks with both Chevron and ExxonMobil about selling a controlling interest in Yukos. When Putin met with the CEO of one of the Western companies, he had many, many questions about how a deal would work and what it would mean. For it would have moved control over a substantial part of the country’s most important strategic asset, oil, out of Russia, which ran exactly counter to the principle that he had laid down in his 1999 article.

While moving on all these fronts at the same time, Khodorkovsky let it be widely known that he was prepared to spend money to move Russia toward being a parliamentary rather than a presidential democracy, with the implication that he intended to become prime minister. Selling part of Yukos would give him many billions of dollars that could go into that campaign.

And then there was what turned into a heated exchange with Putin at a meeting with the industrialists that was captured on video. “Corruption in the country is spreading,” said Khodorkovsky. To which an angry Putin reminded him that he had won control over huge oil reserves for very little money. “And the question is, how did you obtain them?” said Putin. He then added, “I’m returning the hockey puck to you.”18

Several months later, in July 2003, one of Khodorkovsky’s business partners was arrested, and then others. Some of his advisers, fearing that he was becoming increasingly unrealistic, warned him to proceed with care, but he seemed to disregard them. On a visit to Washington in September 2003, he said that he thought there was a 40 percent chance he would be arrested. But he gave the impression that he did not believe that the real odds were anywhere near that high.

In the autumn of 2003, Khodorkovsky embarked on what looked like a campaign swing, with speeches and interviews and public meetings in cities across Siberia. In the early morning of October 23, his plane was on the ground in Novosibirsk, where it had stopped for refueling. At 5 a.m. FSB agents burst in and arrested him. In the spring of 2005, after a lengthy trial, Khodorkovsky was convicted of tax fraud and sent to a distant and isolated Siberian prison camp. In 2011, a second trial for embezzlement extended his sentence. By then, the case had become an international cause, exemplified when, after the trial, Amnesty International selected him as a “prisoner of conscience.”

As for Yukos, it was no more. Dismantled, it ceased to exist as a company, its assets absorbed into Rosneft, which is now Russia’s largest oil company and, largely owned by the government, the national champion.



“STRATEGIC RESOURCES”

“Strategic resources” came to the fore in other ways as well. ExxonMobil’s Sakhalin-1 project had a Russian company as partner, Rosneft. But Shell’s Sakhalin-2 did not. Gazprom may have been the largest gas company in the world, but it had no presence in liquefied natural gas (LNG) and no capacity to market to Asia. Over several months in 2006, the Sakhalin-2 project was charged with a litany of environmental violations that carried a variety of penalties, some of them severe. At the end of December 2006, Shell and its Japanese partners accepted Gazprom as majority shareholder. The project thereafter continued on course and in 2009 began exporting LNG to Asia and even as far away as Spain.



OIL AND RUSSIA’S FUTURE

By the second decade of the twenty-first century, Russia was back as an oil producer. Its output was as high as it had been in the twilight of the Soviet Union, two decades earlier, but on very different terms. The oil industry was integrated technologically with the rest of the world; and it was no longer the province of a single all-encompassing ministry, but rather was operated by a variety of companies with many differences in leadership, culture, and approaches. When it was all added up, Russia was once again the largest producer of oil and the second largest exporter in the world.

Once, as Russian production and oil revenues were ramping up, Vladimir Putin was asked if Russia was an energy superpower. He replied that he did not like the phrase. “Superpower,” he said, was “the word we used during the Cold War,” and the Cold War was over. “I have never referred to Russia as an energy superpower. But we do have greater possibilities than almost any other country in the world. If you put together Russia’s energy potential in all areas, oil, gas, and nuclear, our country is unquestionably the leader.”

Certainly Russia’s energy resources—and its markets—put it in a position of preeminence; and with a new uncertainty about the Middle East, it took on a renewed salience as an energy supplier and in terms of energy security.

Oil and gas were also what powered its own economy. As Putin had written in his 1999 article, they had indeed been the engine of Russia’s recovery and growth—and the number one source of government revenues. High prices meant even more money flowing into the nation’s treasury. The country’s demographics made those revenues even more critical—in order to meet the pension needs of an aging population.

But the heavy reliance on oil and gas stirred a national debate about the country’s dependence on that one sector and about the need for “modernization,” which meant, in part, diversification away from hydrocarbons. But modernization was hard to achieve without broad-ranging reforms of the economy and of legal and governmental institutions, along with the nurturing of a culture of entrepreneurship. Some argued that high oil prices, by creating a cushion of wealth, made it easier to postpone reform. Whatever the progress on modernization, oil and gas would continue to be the country’s greatest source of wealth for some years to come, as well as an arena in its own right for advanced technology.

But the very importance of oil and gas highlighted a different kind of risk: would Russia be able to maintain its level of output or was another great decline in the offing? The latter would threaten the economy. Some argued that Russia would not be able to sustain production without big changes—a step up in new investment, a tax regime that encouraged investment, augmentation of technology, and, of critical importance, the development of the “next generation” of oil and gas fields. One of the major targets for that next generation was the offshore, particularly in the Arctic regions, off the northern coast of Russia.

Developing those frontier regions would be challenging and costly and even more complex than the Sakhalin projects. Once again, here was the potential for a significant role for international companies. These would be the projects for which Western partners would be sought, especially the large majors with their capabilities to execute projects on that scale. Yet undertaking them would require considerable confidence on both sides. For these would be very long-term relationships; the development time would be measured not in years but in decades, and their full impact would likely be felt nearer the middle of the twenty-first century than the beginning. But that was still in prospect.

For the Western companies—save for those long-range projects in places like the Arctic—there was not much more in the way of large opportunities beyond what had already been launched in the 1990s. As things had turned out, the early expectations about Russia had proved to be much larger than the reality.



When it came to oil and gas, however, there had been more opportunity to be found in the former Soviet Union than just in the Russian Federation. Much more. And it was to the rest of the region that attention had also turned in the late 1980s and early 1990s as the Soviet system was disintegrating.


2

THE CASPIAN DERBY

In the late 1980s and the beginning of the 1990s, as the Soviet Union started to come unhinged, the first Western oil men had begun to drift down toward the south, to the Caspian and into Central Asia, into what would after 1991 become the newly independent countries of Azerbaijan, Kazakhstan, and Turkmenistan.

Historically, the most important city on the Caspian coastline was Baku. A century earlier, Baku had been a hub of great commercial and entrepreneurial activity, with grand palaces, built by nineteenth-century oil tycoons, and one of the world’s great opera houses. But what these arriving oil men now found instead, amid the splintering of the Soviet Union, were the remnants of a once-vibrant industry and what seemed almost like a museum of the history of oil.

The interaction between these oil men and the newly emerging nations would help wrest these countries out of their isolated histories and connect them to the world economy. The results would redraw the map of world oil and bring into the global market an oil region that, by the second decade of the twenty-first century, would rival such established provinces as the North Sea, and would include the world’s third-largest producing oil field.

The development of the Caspian oil and natural gas resources was inextricably entangled with geopolitics and the ambitions of nations. It would also help define what the new world—the world after the Cold War—would look like and how it would operate.

At the center is the Caspian Sea itself, the world’s largest inland body of water, with 3,300 miles of coastline. Though not connected to any ocean, it is salty, and also subject to sudden, violent storms. Azerbaijan is on its western shore. To the west of Azerbaijan are Georgia and Armenia—the three together constituting the South Caucasus. On the northwest side of the Caspian, above Azerbaijan, are Russia and its turbulent North Caucasus region, including Chechnya. On the northeast side of the Caspian is Kazakhstan; and, on the southeast, Turkmenistan. On the southern shore is Iran, with ambitions to be a dominant regional power and with interests going back to the dynasties of the Persian shahs.



THE NEW GREAT GAME

The fierce vortex of competing interests in this region came to be known as the new “Great Game.” The term had originally been attributed to Arthur Conolly, a cavalry officer in the British army in India turned explorer and spy, whose unfortunate end in 1842—he was executed by the local ruler in the ancient Central Asian town of Bukhara—captured both the seriousness and futility of the game. But it was Rudyard Kipling who took up the phrase and made it famous in Kim, his novel about a British spy and adventurer, at the front line in the late nineteenth century in the contest with the Russian Empire.1

But this purported new round in the Great Game, at the end of the twentieth century, included not just Russia and Britain, the two main contenders from the first round in the nineteenth century, but many more—the United States, Turkey, Iran, and, later, China. And of course the newly independent countries themselves were players, intent on balancing among these various contending forces to establish and then preserve their independence.

Then there were the oil and gas companies, eager to add major new reserves and determined not to be left out. And hardly to be overlooked was the jostling of the wheeler-dealers, the operators, the finders, and the facilitators, all of them out for their cut. It was a grand tradition established in the first decades of the twentieth century by the greatest oil wheeler-dealer of them all, Calouste Gulbenkian, later immortalized as “Mr. Five Percent.”


[Map: Caspian Sea and the Caucasus: the “newly independent states.” The breakup of the Union reconnected a resource-rich region to world energy markets.]


Rather than the Great Game, others used the less dramatic shorthand of “pipeline politics” to convey the fact that the decisive clash was not that of weapons but of the routes by which oil and natural gas from the landlocked Caspian would get to the world’s markets. But to some, watching the collisions and the confusion among the players, hearing the cacophony of charges and countercharges and the bluster and banging of deal making, it was better described as the Caspian Derby. Whatever the name, the prize was the oil and natural gas—who would produce it, and who could succeed in getting it to market.



THE PLAYERS

The Soviet Union was gone. But Russian interests were not. The economies of Russia and the newly independent nations were highly integrated in everything from infrastructure to the movements of people. Russian military bases, as legatees of the Soviet military, were scattered throughout the region. What would be the nature of Russia’s relations with the newly independent states, many of which had been khanates in the centuries before their conquest by the Russian Empire but had never really existed as modern nation-states?

For the Russians, it was about power and position and restoring their country as a great power. They had hardly expected the Soviet Union to fall apart. Many Russians had come to regret this loss and regarded the dissolution of the Soviet Union as a nation (if not as a communist state) as a humiliation, as something that had been foisted upon them by malevolent forces from outside, specifically, in the view of some, the United States. Immediately after the breakup, they began to describe these newly established countries as belonging to a newly conceived region, the “Near Abroad,” over which they wanted to reassert control. That very name also conveyed a special status with special prerogatives for Russia—and all the more so because of the large numbers of ethnic Russians who lived in what were now independent countries. While there might now be formal boundaries, Russia and these new nations were bound together by history, education, economic and military links, the Russian language, and ideology and common culture—and a multitude of marriages. In Moscow’s view, they belonged very much in Russia’s sphere of influence and under its tutelage. Russians saw Western influence in the Near Abroad as an attempt to further undermine Russia and retard the restoration of its Great Power status.2

And there was the specific matter of oil. From the Bolshevik Revolution onward, the Caspian’s petroleum resources had been developed by the Soviet oil industry with Soviet technology and Soviet investment. The Soviets had begun to bring on a very large, if also very difficult, new field in the Republic of Kazakhstan, and the Soviet oil generals had been talking, before the breakup, about renewed focus on the Caspian as a production area.

Some Russians also believed, or at least half believed, that the United States had deliberately orchestrated the collapse of the Soviet Union for the specific purpose of getting its hands on Caspian oil. Once, in the mid-1990s, the Russian energy minister was innocently asked what he thought of the development of Caspian oil. He pounded his fist down on his conference table.

“Eto nasha neft,” he replied. “It’s our oil.”

For the United States and Britain, the consolidation of the newly independent nations was part of the unfinished business of the post–Cold War and what was required for a new, more peaceful world order. This was these nations’ opportunity to realize the Wilsonian dream of self-determination. An exclusive Russian sphere of influence would, in the American and British view, be dangerous and destabilizing. Moreover, there was the risk of Iran’s filling a vacuum, which, though not often stated, was very much on their minds.

The energy dimension also loomed large for Washington in the early 1990s. Saddam’s grab for Kuwait and the Gulf War, just concluded, had once again demonstrated the risks of the world’s overdependence on the Persian Gulf. If the Caspian could be reintegrated into the world energy industry, as it had been prior to World War I, if major new petroleum resources from the region could be brought to the world market, that would be a very large step in diversification of petroleum supplies, making a most significant contribution to global energy security. To be prevented was the flip side—these resources slipping back under exclusive Russian sway or, even worse, under Iranian influence.

Yet at the same time, building a new relationship with Russia was at the very top of the priorities of the Clinton administration, and so there was little desire to have that relationship damaged by competition for Caspian oil and a modern Great Game. In a speech called “A Farewell to Flashman” (Flashman being a fictional swashbuckling British military man in the nineteenth-century Great Game), U.S. Deputy Secretary of State Strobe Talbott sketched out the goal of stable economic and political development in a critical crossroads of the world, and warned against the alternative—that “the region could become a breeding ground of terrorism, a hotbed of religious and political extremism, and a battleground for outright war.” He added, “It has been fashionable to proclaim . . . a replay of the ‘Great Game’ in the Caucasus and Central Asia . . . fueled and lubricated by oil.” But, he said, “Our goal is to actively discourage that atavistic outcome.” The Great Game, he added firmly, belonged “on the shelves of historical fiction.” Yet it would be very challenging to modulate the clash of interests and ambitions in this strategic terrain.3

For Turkey, locked out of the region for centuries, the breakup of the Soviet Union was a way to expand its influence and importance and commerce across the Black Sea into the Caucasus and onto the Caspian Sea and beyond—and also to connect with the Turkic peoples of Central Asia. And, for the Islamic Republic of Iran, here was the opportunity to expand its political and religious influence north into the other countries on the Caspian Sea and into Central Asia and to seek to proselytize among Islamic peoples whose access to Islamic religion had been tightly constrained during Soviet times.

Azerbaijan was of particular importance to Iran. Over 7.5 million ethnic Azeris lived there, now with the opportunity to interact with the outside world, while an estimated 16 million Iranians, a quarter of Iran’s total population, were also ethnically Azeri. Though generally tightly policed by Iran’s ruling theocracy, many Iranian Azeris had direct family relations in Azerbaijan. So for the regime in Tehran, an independent Azerbaijan, as an example of a more tolerant, secular and potentially prosperous society and one connected to the West, was something to be feared as a threat to its own internal control.

China’s interests developed more slowly, but they became progressively more significant as the rapid growth of its economy made energy an increasingly important issue. The Central Asian states were “next door,” and they could be connected by pipelines, providing critical diversification. China increasingly made its impact felt, but less through politics and more through investment.

The newly independent states were hardly mere pawns. Their leaders were determined to solidify their power. Although there were considerable differences among them, at home that meant what were essentially one-party states with power consolidated in the hands of the president. In foreign policy, the strategic objectives of these nations were very clear: maintain and consolidate their independence and establish themselves as nations. Whatever the differences in their views of the Kremlin, they did not want to find themselves reabsorbed one way or the other by the new Russian Federation. On the other hand, they were in no position to disengage from Russia or stoke its ire. They needed Russia. The connections were so many and so strong, and the geography so obvious. Moreover, they had to be concerned about their own ethnic populations in Moscow and the other Russian cities, whose remittances would become important components of their new national GNPs.

For many of the countries, oil and natural gas were potentially critical, an enormous source of revenues and the major driver of recovery and economic growth. The development of oil could bring in companies from many countries and generate not only cash but also political interest and support. As the Azeri national security adviser put it, “Oil is our strategy, it is our defense, it is our independence.”4

If oil was the physical resource they needed for their survival as nation-states, they also required another kind of resource—wily diplomacy. For the game, always, required extraordinary skill in balancing in a difficult terrain. Azerbaijan, a secular Islamic state, was squeezed between Iran and Russia. Kazakhstan, with a huge territory but relatively small population, had to find its balance between Russia and an increasingly self-confident and rapidly growing China.

Yet in all the discussions about oil and geopolitics and great games, one could not lose sight of the more practical matters: that oil development took place not only on the stage of world politics but on the playing fields of the petroleum industry—on the computer screens of engineers and spreadsheets of financial analysts, in the fabrication yards where the rigs were built, and on the drilling sites and offshore platforms—where the key considerations were geology and geography, engineering, costs, investment, logistics, and the mastery of technological complexity. And the risk for the companies was large—not just political risk, but the inherent risk in trying to develop new resources that might be world class but also posed enormous engineering challenges.

The companies had to operate against extremes of expectations. For at one point, the Caspian was celebrated as a new El Dorado, a magical solution, another Persian Gulf, a region of huge riches in oil and gas resources eagerly waiting for the drill bit. At another time, it was a huge disappointment, a giant bust, one great dry hole beneath the wet seabed. So in terms of expectations, too, one had to stay sober and keep one’s balance.



“THE OIL KINGDOM”

In the late nineteenth century and early twentieth century, the Russian Empire, specifically the region around Baku on the Caspian Sea, had been one of the world’s major sources of oil. Indeed, at the very beginning of the twentieth century, it had overtaken western Pennsylvania to be the world’s number one source. Families with names like Nobel and Rothschild made fortunes there. Ludwig Nobel—brother of Alfred, the inventor of dynamite and endower of the Nobel Prizes—was known as the “Russian Rockefeller.” It was Ludwig Nobel who conceived and built the world’s first oil tanker, to transport petroleum on the stormy Caspian Sea. Shell had been founded on the basis of oil from Baku, audaciously brought to world oil markets by an extraordinary entrepreneur and onetime shell merchant named Marcus Samuel. They shared the stage with prominent local oil tycoons of great influence.

The ascendancy of Baku would be undermined by political instability, beginning with the abortive revolution of 1905, what Vladimir Lenin dubbed the “great rehearsal.” In the years immediately after, the region continued to be shaken by revolutionary activity. Among those most active was a onetime Orthodox seminarian from neighboring Georgia, Iosif Dzhugashvili, better known to the world as Joseph Stalin. As Stalin later said, he honed his skills as “a journeyman for the revolution” working as an agitator and organizer in the oil fields. What he did not add were his additional activities as a sometime bank robber and extortionist. It was thus with good reason that Stalin, recognizing the wealth that was to be extorted, anointed Baku as “the Oil Kingdom.”5

With the collapse of the Russian Empire at the outbreak of the Bolshevik Revolution during World War I, the region west of the Caspian Sea, including Baku, declared itself the independent Azerbaijan Democratic Republic. It established one of the first modern parliaments in the Islamic world. It was also the first Muslim country to grant women the right to vote (ahead of such countries as Britain and the United States). But Lenin declared that his new revolutionary state could not survive without Baku’s oil, and in 1920 the Bolsheviks conquered the republic, incorporating it into the new Soviet Union and nationalizing the oil fields.

That same year, however, Sir Henri Deterding, the head of Royal Dutch Shell, confidently declared, “The Bolsheviks will be cleared, not only out of the Caucasus, but out of the whole of Russia in about six months.” It soon became evident, however, that the Bolsheviks were not going anywhere soon, and that Western companies had no place in the new Soviet Union.

When, in June 1941, Hitler launched his invasion of the Soviet Union, Azerbaijan was one of his most important strategic objectives—he wanted to get his hands on an assured supply of oil to fuel his war machine. “Unless we get the Baku oil, the war is lost,” he told one of his generals. His forces got very close to Baku, but not close enough, owing to fierce resistance by the Soviets and the natural barriers imposed by the high mountains of the Caucasus. The failure was costly for Nazi Germany, for its severe shortage of oil crippled its military machine and was one of the reasons for its ultimate defeat.6

By the 1970s and 1980s, the Caspian had become an oil backwater of the Soviet Union, thought to be depleted or technologically too difficult; its once prominent role had been assumed by other producing regions, most notably West Siberia. In the late 1980s and early 1990s, however, as Soviet power crumbled and Azerbaijan, Kazakhstan, and Turkmenistan were moving toward, and then into, independence, the region’s potential—buttressed by advances in technology—once again loomed very large.



HISTORY ON DISPLAY

Baku and its environs stood at the historic center of what had been the Russian and then Soviet oil industry, and that entire history was on display for the wide-eyed Western oil men who were beginning to show up.

Some of it was at sea. A rickety network of wooden walkways and platforms, connected like a little city, extended out from the seafront in Baku. Farther offshore, 40 miles from the coastline, where the seabed became shallow again, was Oily Rocks, a great network of walkways and platforms, “a wooden and steel oil town on stilts, 15 miles long and a half mile wide,” with 125 miles of road and a number of multistory apartment buildings built on artificial rock islands. Once it had been regarded as one of the great achievements of Soviet engineering, a “legend in the open sea.” But now Oily Rocks was so dilapidated that parts of it were crumbling and falling into the sea, and some parts were considered so treacherous that they had been abandoned and closed off altogether.7

Onshore, in and around Baku, were innumerable antique “nodding donkeys,” still bobbing up and down, helping to pump up oil from wells that had been drilled in the late nineteenth and early twentieth centuries. Hiking into the wide, dry Kirmaky Valley just north of Baku would take one back even earlier in time. There one would step over pipelines and clamber up barren hills that were pockmarked with hundreds of pits that had been dug by hand in the eighteenth and nineteenth centuries. In those days, one or two men would be lowered into each of these narrow, dangerous pits, past walls reinforced with wood planks, 25 to 50 feet down to the claustrophobic bottom, where they would fill buckets with oil that would be hoisted out with primitive rope pulleys.

Down on the other side of the hill was the Balachanavaya Field, where a gusher had been drilled in 1871. That field was still crowded with old rigs, densely packed up against one another, some of them going back to the days of the Nobels and the Rothschilds. Altogether 5 billion barrels of oil had been extracted from the field, and it was still modestly producing away, while gas leaking from a nearby mountainside continued to burn in an “eternal flame.”

Thus, awaiting the arriving oil men in Azerbaijan was an industry deep into decline and decay, starved of investment, modern technology, and sheer attention. Yet what the oil men also saw, if not altogether clearly, was the opportunity—though tempered by many risks and uncertainties.



“ALL ROADS ARE THERE”

Azerbaijan was ground zero for the Caspian Derby. As a Russian energy minister put it, it was the “key” to the Caspian, for “all roads are there.” Every kind of issue was at play, and so many of them the result of geography. The most immediate problem was to the west, the newly independent state of Armenia, with which war had broken out over the disputed enclave Nagorno-Karabakh. Armenia, with some Russian support, was victorious; 800,000 ethnic Azeris, primarily from Nagorno-Karabakh, became refugees and “internally displaced peoples,” living in tent cities and corrugated tin huts and whatever else Azerbaijan could find for them. This displacement—equivalent to 10 percent of the Azeri population—added to the woes of what was already an impoverished country, with a broken-down infrastructure and teetering on economic collapse.

In the first years of the 1990s, various consortia of international oil companies pursued what has been described as “disruptive and complex negotiations” with successive Azeri governments, which had largely come to naught. The country itself seemed to be entrapped in endemic instability and insurgencies, and, as various clans struggled for power, headed toward civil war.8



“THE NATIVE SON”

During Soviet times, Heydar Aliyev had risen to the pinnacle of power in Azerbaijan, initially as a KGB general and then head of the local KGB, and then as first secretary of the Azeri Communist Party. He had subsequently moved to Moscow and into the ruling Politburo, becoming for a time one of the most powerful men in the Soviet Union. But after a fiery falling-out with Mikhail Gorbachev and a spectacular fall from power, he was expelled not only from the Politburo but also from Moscow, and denied even an apartment back in Baku. He returned to his boyhood home, Nakhichevan, an isolated corner of Azerbaijan, which, after the collapse of the Soviet Union, was cut off from the rest of the country by Armenia and was reachable only by occasional air flights from Baku. While in this internal exile, he discovered his new vocation and identity—no longer as a “Soviet man,” but, as he put it, as a “native son.” He bided his time.

With the political battle in Baku getting even hotter and the country teetering on civil war, he returned to the capital city and, in 1993, amid an attempted insurrection, took over as president. At age seventy, Aliyev was back in power. He brought stability. He also brought great skill to the job. “I’ve been in politics a long time, and I’ve seen it all from inside out as part of the core leadership of a world superpower,” he said not long after taking power. He was now an Azeri nationalist. He was also a proven master of tactics and a brilliant strategist. He would use Azerbaijan’s oil potential to turn the country into a real nation, and to enlist key nations in support of its integrity, and, in the process of doing all of this, ensure his own primacy and control. But he also knew the Russians and the mentality of Moscow as well as anyone, and he understood clearly how to deal with the Russians and how far he could safely tread out on his own path.9



“THE DEAL OF THE CENTURY”

In September 1994, Aliyev assembled a host of diplomats and oil executives in the Gulistan Palace banquet hall in Baku for the signing of what he proclaimed the “deal of the century.” The signatories included ten oil companies—representing six different nations—that belonged to what was now the Azerbaijan International Operating Company (AIOC) plus the State Oil Company of Azerbaijan Republic (SOCAR), the Azeri state company. BP and Amoco were the dominant Western companies, but also, and of great significance, in the deal was Lukoil, the Russian company. Later the Japanese trading company Itochu joined the AIOC, bringing the number of national flags to seven. Given the complexities and uncertainties, some mumbled that a better sobriquet than “deal of the century” would be “Mission Impossible.” After all, how was this going to get done? And how was landlocked Azerbaijan ever going to get its oil to the world market? Yet as the CEO of one of the Western companies put it, “the oil had to go somewhere.”10

Moreover, even with Aliyev in power, the political situation was far from stable. Baku was under nightly curfew, and, shortly after the signing of the “deal of the century,” two of Aliyev’s closest aides were assassinated, including his security chief, to be followed by a failed military coup.

The object of the “deal of the century” was the huge Azeri-Chirag-Gunashli field (ACG) in the Apsheron trend, seventy-five miles offshore. It had been discovered prior to the collapse of the Soviet Union, but it was a mostly undeveloped project, and a very challenging one. Much of it had proved well beyond the technological capabilities of the Soviet oil industry. However, during Soviet times, development had started in a shallower corner of the field, and if the platform could be successfully refurbished and upgraded to international standards, some early production would be possible. This would become known as Early Oil. It was desirable, because it would create an early income stream and, perhaps even more important, build confidence among the AIOC shareholders.



WHAT ROUTE FOR EARLY OIL?

But Early Oil was also highly contentious, for it would create a big and immediate problem. How to get the oil out? Once ashore, some of it could be shipped in railway tank cars, just as in the nineteenth century, but that was a limited and hardly satisfactory alternative.

The only obvious answer was a pipeline. And, with that answer, the Caspian Derby turned clamorous. By reversing directions, the oil could go north through the existing Russian pipeline system, which is of course exactly what the Russians wanted. But that would also have given Russia very considerable leverage over Azerbaijan’s economic and political fate, and the United States strenuously opposed it.

The other option for the Early Oil pipeline was to go west into Georgia and to the Black Sea, where tankers would pick up the oil and carry it through the Bosporus to the Mediterranean—a route that tracked what had been the main outlet for nineteenth-century Baku oil. But that would make Azerbaijan dependent on Georgia, which was wracked by separatist struggles and which had a very tense and uneasy relationship with Russia. This route would also be a great deal more expensive, entailing much more construction in difficult terrain. The AIOC was under great pressure to choose. The Azeris needed revenues; the companies needed clarity. But the United States and Russia were at loggerheads. Yet something needed to be done. One way or the other, Early Oil was coming.



THE TWO-TRACK STRATEGY: “OFFEND NO ONE”

In a nondescript conference room in central London, some senior AIOC staff and a small group of oil and regional experts debated the choices—“Early Oil Goes North” and “Early Oil Goes West”—and the likely backlash to each. It was recognized that “an unequivocal choice in either direction would be perilous from the standpoint of political risk.”

Finally, one of the participants who had sat quietly in the corner spoke up. Why choose? he asked. Why not do both? The more pipelines, the better. Even if the cost was higher, dual pipelines would provide more security. It would be a great insurance policy. That approach would also help assure speed and discourage foot dragging—since the AIOC could always threaten to go with the “other” option. So taken together, two routes made a lot of sense.11

Of course, one had to start somewhere. And that meant starting with the Russian route. After all, a pipeline was in place. The politics were right.

Heydar Aliyev saw it that way. On a dreary, cold February night in 1995, in his office in the hills above Baku, Aliyev gave his marching instructions both to Terence Adams, the head of the AIOC, and to the head of SOCAR. Nothing should be done that would “alienate” the Russians, said the president. It was too risky. A contract had to be signed with the Russians before anything else was done. “The geopolitical imperative could not have been made clearer for Baku oil diplomacy,” Adams later said. The president made one other thing very clear. Failure in any form would be a major disaster for Azerbaijan, and thus would certainly also be a disaster for AIOC and personally for all those involved. He looked hard at both men. At the same time, Aliyev emphasized that the relationship with the United States was also essential to his strategy. His message to the oil companies was challenging but clear: “Offend no one.”

Things were also changing with the United States. There had been a very sharp debate in Washington between those highly suspicious of Russia, who favored an “anything but Russia” pipeline policy, and those who believed that a collaborative approach with Moscow was required for the development of energy resources and transportation in the former Soviet Union. And, in the latter view, that development was necessary to meet the two objectives: helping to consolidate the nationhood of the newly independent states and enhancing energy security by bringing additional supplies to the world market. In due course, matters were generally—although never completely—resolved in favor of the more collaborative approach. In February 1996, the northern route won official approval.12

Agreement for the western Early Oil route soon followed. For its part, the Georgian route offered a counterbalance to the Russians. Getting this plan done drew upon the personal relationship between Aliyev and Georgian president Eduard Shevardnadze, whose career, like Aliyev’s, had tracked from the local communist security service to leader of the Georgian communist party to the pinnacle of Soviet power in the Kremlin as Mikhail Gorbachev’s foreign minister—and, thus, the opposite number of U.S. Secretary of State James Baker in negotiating the end of the Cold War. Now Shevardnadze, who had returned as president to Georgia after the breakup of the Soviet Union, was negotiating a pipeline whose transit fees would be important to keeping impoverished, independent Georgia afloat. Even more important was the geopolitical capital that Georgia gained from U.S., British, and Turkish engagement with which to balance against the Russian giant to the north.


[Map: PIPELINE POLITICS. The battles over pipeline routes for oil and gas became known as the Caspian Derby. Source: IHS CERA]


By 1999 both Early Oil export lines were operating. The western route tracked the old wooden pipeline built by the Nobels in the nineteenth century. The Russian northern line passed through Chechnya, where in that same year the second Chechen War would erupt between Russian forces and Islamic rebels. That conflict forced the shutdown of the Russian pipeline. This proved the insurance value of a second, western Early Oil line through Georgia.

That took care of Early Oil. Meanwhile, as the decade progressed, the technical challenges were being surmounted offshore of Azerbaijan, and it was clear that very substantial additional production would begin in the new century. The resources had been “proved up”: oil could actually be economically extracted in large volumes from beneath the Caspian waters.



WHAT ROUTE FOR THE MAIN PIPELINE?

Now that the resources were bankable, a main export pipeline capable of transporting much greater volumes had to be built. It was back to the same battles as over Early Oil. This time, however, there could be only one pipeline. Given the costs and scale, the difference could not be split between two lines. The Russians, of course, wanted the pipeline to go north and flow into their national pipeline system, which would give them some degree of control and leverage over the Caspian resources. Another option was to go through Georgia. But in both cases, the oil would have to be picked up by tankers that would carry it across the Black Sea and then sail through the Bosporus, the narrow strait that runs through the middle of Istanbul. And that was a big problem.

The Bosporus, which connects the Black Sea and the Mediterranean and is the demarcation between Europe and Asia, has loomed large throughout history. It was on its banks that, in the fourth century A.D., the Roman emperor Constantine established his new eastern capital—Constantinople—in order to better manage the far-flung Roman Empire. In more recent centuries, it was of great strategic importance for both the Russian and Soviet empires, as the only warm-water ports for their fleets were on the Black Sea, and their warships had to pass through the Bosporus to reach the world’s oceans.

But the Bosporus was becoming increasingly crowded with the growing fleet of oil tankers that would carry Russian and Caspian oil to the world’s markets. And the Bosporus was no isolated waterway; it ran right through the middle of Istanbul (as Constantinople had been officially renamed in 1930), a city of 11 million people. Turkey was apprehensive of a major tanker accident in what in effect was Istanbul’s living room. And with good reason. The 19-mile waterway has 12 turns. Its narrowest point is 739 yards, which requires a 45-degree turn. Another turn is 80 degrees, almost a right angle.13

There was still another option for the main outlet, and in dollars and cents, the cheapest of all. Go south and deliver oil to refineries in northern Iran, which would supply Tehran. And then swap an equivalent amount of oil from fields in the south of Iran for export via the Persian Gulf. Hence, it would not be necessary to build a pipeline through Iran. Such a swap was the least cost option in economic terms. But it was wholly unacceptable to the United States and other Western countries, and thus a complete nonstarter. It would not only have bolstered Iran, but would have given the nation the trigger finger over Azerbaijan’s future, which was hardly something that Heydar Aliyev wanted. Moreover, it would have completely undercut the whole quest for diversification and energy security by putting more oil into the Persian Gulf and increasing dependence on the Strait of Hormuz, when the whole point was to diversify away from it.

There was one more option—go west, skirting around Armenia into Georgia, and then turn left near the Georgian capital of Tbilisi and head south down through Turkey to its port of Ceyhan on the Mediterranean. This was the most logical route. The problems with the proposed BTC pipeline—Baku to Tbilisi to Ceyhan—were two: First, it would be one of the longest oil export pipelines in the world, and the engineering challenges over the tall peaks of the Caucasus were enormous. And, second, it was by far the most expensive route. It was very difficult to make the economics work.

As decision time approached, the arguments over the main pipeline became increasingly fierce. The Russians were out to scuttle the project. The Azeris clearly wanted it, as did the Turks. Both pressed BP to push it forward. For a time, it seemed that the United States was the most vociferous proponent of all for Baku-Tbilisi-Ceyhan. Its representatives took every opportunity to argue the case, sometimes with a force that surprised and even shocked other participants in the debate. For Washington, the thought that the main export pipeline could possibly go through Russia was unacceptable. The risk was too great.

Madeleine Albright, Bill Clinton’s secretary of state, privately summed up the matter at the time. One afternoon, sitting in a little room on the seventh floor of the State Department, she said, “We don’t want to wake up ten years from now and have all of us ask ourselves why in the world we made a mistake and didn’t build that pipeline.”



“NOW IS THE MOMENT”

For half a decade, an annual conference, the “Tale of Three Seas” (Caspian, Black, and Mediterranean), had been convening in Istanbul each June. It would start in the evening, as the sun went down, in a hillside garden overlooking the Bosporus, with a soothing outdoor concert by what was called the “Orchestra of the Three Seas.” Its music was meant to symbolize the healing of all the historic breaches that needed to be healed, for its members were drawn from the Caucasus and Central Asia and from a number of Arab countries, as well as Israel.

And then, the next day, all the harmonies would disappear as the raucous Caspian Derby began in earnest. Year after year, the conference sessions and the corridors were the scene of agitated arguments and increasingly vocal debate over pipeline routes—and, at least once, a shoving match among very senior people.

The conference dinner, on a warm summer night in June 2001, was held in the Esma Sultan Palace, with a sweeping view over the Bosporus. The speaker was John Browne, the chief executive of BP, now the dominant company among the shareholders of the AIOC. He stressed that the Bosporus simply could not take any more tanker traffic. “The risks of relying solely on this route would become too high. Another solution is necessary,” he said. And that solution was “a new export pipeline”—the Baku-Tbilisi-Ceyhan line.

The oil companies, he announced, were ready to begin the engineering, with the objective of beginning construction as soon as possible. As he made this declaration, almost as if on cue, on the dark historic waters behind him the shadowy silhouette of a large tanker glided by, illuminated only by its own lights. Its silent message seemed to be, How many more of these tankers could the Bosporus take? The pipeline had to be built.

Many obstacles had to be overcome. The first was to convince a sufficient number of the AIOC partners that the pipeline was commercial and get them to sign up for it. Another was the sheer enormity of negotiating so many incredibly complex multiparty agreements that were required to build and operate and finance the pipeline, involving countries, companies, localities, engineering firms, banks, and financing agencies, among other parties. Here the United States played a key role by facilitating an intergovernmental agreement, and myriad other agreements, which otherwise, in the words of one of the company negotiators, would have taken “years to arrange and negotiate.”14

Another continuing obstacle was the opposition of nongovernmental organizations (NGOs) on various environmental and political grounds. Would the pipeline be buried three feet underground, where it was accessible to repairs, or fifteen feet, where it would not be? (Three feet won out.) Much intense debate ensued as to whether the proposed route was a threat to the Borjomi springs, the source of Georgia’s most famous mineral water. One tense negotiating session with the president of Georgia went on until 3:00 a.m., and then had to be extended another hour when a functioning photocopier could not be found in the presidential palace. The route, in the end, was not changed, but the consortium ended up paying the Borjomi brand water company about $20 million to cover the potential “negative reputational impact” of the pipeline. As it turned out, the reputational impact was surprisingly positive; the head of the Borjomi water company is said to have later described the episode as the best global advertising the mineral water could have ever gotten, and, better yet, it was free advertising.15



“OUR MAJOR GOAL”: PETROLEUM AND THE NATION-STATE

The BTC pipeline has been described as “the first great engineering project of the twenty-first century.” The 1,099-mile-long pipeline had to cross some 1,500 rivers and water courses, high mountains, and several major earthquake fault zones, while meeting stringent environmental and social impact standards. Four years and $4 billion later, the pipeline was finished. The first barrels arrived at the Turkish oil port Ceyhan, on the Mediterranean coast, in the summer of 2006, where they were welcomed in a grand ceremony. It had been twelve years since the “deal of the century” had been signed.

As would be expected, an Aliyev was there at the very forefront among the dignitaries who proclaimed the importance of the day for the countries involved, the region, and the world’s energy markets. But it was not Heydar Aliyev; it was his son Ilham, the new president of Azerbaijan. Heydar Aliyev had not lived to see that day. For Aliyev, the KGB general and Soviet Politburo member who had gone on to become Azerbaijan’s premier “native son,” had passed away three years earlier at the Cleveland Clinic in the United States. But this day was the demonstration that his strategy had worked, that oil—and how he had played it—had given Azerbaijan a future that in 1994 had seemed almost unattainable. Petroleum had consolidated Azerbaijan as a nation and established its importance on the world stage. Or, as Ilham Aliyev had put it before taking over as president, “We need oil for our major goal.” Which was, he said, “to become a real country.”16

Azerbaijan is also strategically important because it is a secular, Muslim-majority state situated between Russia and Iran. Today Azerbaijan’s offshore ACG field—a $22 billion project—ranks as the third-largest producing oil field in the world. Petroleum flows ashore at the new $2.2 billion Sangachal Terminal, just south of Baku, then moves into a forest of pipes and a series of tanks where it is cleaned and prepared for transit. Then the oil, now fit for export, all converges into a single forty-two-inch, crisp white pipeline. That is it—the much-debated Baku-Tbilisi-Ceyhan pipeline. The pipeline extends flat out on the ground for fifty feet and then curves down into the earth and disappears from sight. It bends and twists its way, mostly underground, until it surfaces again, 1,768 kilometers—1,099 miles—later at Ceyhan, where more than a million barrels a day flow into the storage tanks that fleck the Mediterranean shore, waiting for the tankers that will pick up their cargoes and take them to world markets. After all the battles of the great game, all the clash and clamor of the Caspian Derby, all the maneuvering and diplomacy, all the negotiating and trading and deal making, it all comes down to science and engineering and construction—the platforms and oil complexes in the Caspian Sea, and the $4 billion underground steel tubular highway that has reconnected Baku to the global market. As it carries oil, that pipeline also seems to be carrying the cargo of history, connecting not only Baku and Ceyhan but also the beginning of the twenty-first century back to the beginning of the twentieth.

Subsequently, a second pipeline was built parallel to the BTC to carry gas from the offshore Caspian Shah Deniz field, one of the largest gas discoveries of recent decades, to Turkey. The pipeline, known as the South Caucasus Pipeline, was no less challenging technically, but politically a good deal easier. The hard work had been done by the oil line. The South Caucasus Pipeline further consolidated the Caspian with the global energy market.

But Azerbaijan was only part of the Caspian Derby. Another round was being played out across the Caspian Sea.


3

ACROSS THE CASPIAN

In the summer of 1985, spy satellites spinning high above the earth picked up something startling—a huge column of flames on the northeastern corner of the Caspian Sea, with plumes that stretched a hundred miles. It was an oil field disaster on a scale visible from space. A well being drilled—Well 37—in the newly opened oil field of Tengiz, in the Soviet Republic of Kazakhstan, had blown out, sending up a powerful gusher of oil, mixed with natural gas. It had caught fire, creating a flaming column that reached 700 feet or more into the air. The gas was laden with deadly hydrogen sulfide, which inhibited recovery efforts. The USSR Ministry of Oil had neither the capability nor the equipment to bring it under control. At one point the Ministry, desperate and at wit’s end, considered an “atomic explosion” to get the well under control.

That option was never implemented. “We managed to intercede in time,” said Nursultan Nazarbayev, then the republic’s premier.

Eventually American and Canadian experts were recruited to help. It took two months to put out the fire and four hundred days to get the well fully under control. This disastrous and costly blowout underlined the technical challenges facing the Soviet oil industry. But the burning “oil fountain” also illuminated something else: Kazakhstan might have world-scale petroleum potential.1



KAZAKHSTAN AND THE “FOURTH GENERATION” OF OIL

Kazakhstan today, one of the newly independent countries of the former Soviet Union, is a large nation in terms of territory, physically almost the size of India, but with a population of 15.5 million. A little over half is ethnically Kazakh, 30 percent ethnically Russian, and the rest other ethnic groups. With the exception of the new capital Astana, most of the population lives on the periphery of the country; a good part of the country is grassy steppe. During Soviet times, “each of the Union republics occupied a particular place in the division of labor,” as Nazarbayev put it, and Kazakhstan’s role was as “a supplier of raw materials, foodstuffs, and military production.” A quarter of its population had died during Stalin’s famine in the early 1930s. It was where Stalin exiled ethnic groups he did not like, where Nikita Khrushchev unleashed his disastrous “virgin lands” program to try to rescue Soviet agriculture, and where the Soviet Union tested its nuclear weapons. It was the place from whence the Soviet Union launched its spy satellites and where Russia today shoots tourists into space, at $20 million a shot.

Kazakhstan had had a small local oil industry going back to the nineteenth century, an eastern extension of the great Azeri boom that had made the Nobels and the Rothschilds into oil tycoons. If West Siberia had been the giant “third generation” of Soviet oil, then it was expected that Kazakhstan, centered in Tengiz, would be a key part of the “fourth generation.”

But Kazakhstan’s development was held back in the 1980s by lack of investment and technology in the face of difficult and unusual challenges, as evidenced at Tengiz. As former Soviet oil minister Lev Churilov wrote: “Exploration and production equipment stood frozen in time, with few technological advances after the 1960s.” In the effort to bolster the faltering economy and facilitate technology transfer, in the final years of the Soviet Union, Mikhail Gorbachev had tried to lure in foreign investors. Under that umbrella, a controversial American promoter named James Giffen brought together a group of U.S. companies that would serve as an investment consortium.2



TENGIZ: “A PERFECT OIL FIELD”

One of the companies in the consortium was Chevron, which after looking around the Soviet Union came to focus on Tengiz. The company was deeply impressed by the huge potential. A “perfect oil field” is the way one Chevron engineer described it. With what was finally estimated as at least 10 billion barrels of potential recoverable reserves, Tengiz would rank among the ten largest oil fields in the world.3

There were, unfortunately, a few ways in which it was not quite perfect. One was the problem of the “sour gas,” so-called because of the heavy concentrations of poisonous hydrogen sulfide. Sickeningly noxious with its rotten-egg-like smell, hydrogen sulfide is so toxic in large concentrations that it deadens the sense of smell, potentially dulling the ability of people to respond to inhaling it before it is too late. It would take considerable engineering ingenuity and a good deal of money to solve that problem. Other problems included the generally poor condition of the field and the enormous investment that would be required. There was an additional problem that would come to loom quite large—location. Tengiz was a far-off field with no real transportation system.

In June 1990, the Soviets signed a pact with Chevron that gave the company exclusive rights to negotiate for Tengiz. It was a very high-priority deal. For in the words of Yegor Gaidar, Moscow regarded Tengiz as “the Soviet Union’s trump card in the game for the future.”

But the Soviet Union was experiencing what Nazarbayev called “the distinctive symptoms of clinical death throes. The state organism sank into a coma.” When it collapsed altogether, Nursultan Nazarbayev became president of the independent nation of Kazakhstan. His communist days were over. He was now a nationalist, who would now look not to Marx or Lenin for his role model, but to Lee Kuan Yew and the emergence of modern Singapore. And never again, he said, would Kazakhstan be “an appendage.”

The Tengiz field loomed as absolutely crucial to the new nation’s future; it was what Nazarbayev called the “fundamental principle” underpinning the country’s economic transformation. But it was in very poor shape. In many parts of the oil field, electric power was available only two hours a day. Tens of billions of dollars of investment would be required to bring the field up to its potential.4



THE PIPELINE BATTLE

After arduous negotiations, Kazakhstan and Chevron came to agreement on how the immense and immensely expensive field would be developed. It would be a 50-50 deal in terms of ownership but not in terms of the economics. Eventually, after various costs were recovered, the government take would be about 80 percent of the revenues. Chevron would fund much of the estimated $20 billion investment until Kazakhstan started receiving cash flow, which would fund its share. Nazarbayev hailed this as “truly . . . the contract of the century.” It was certainly a very big deal, with the objective of increasing output tenfold. Extraordinarily complex engineering was necessary to produce from very deep, very high-pressure structures, and then to treat the sour gas and separate the toxic hydrogen sulfide from petroleum.

Geography presented an additional challenge—getting the oil out of the country to world markets. The route was obvious—a putative 935-mile pipeline that would go north out of Kazakhstan, curve west over the top of the Caspian Sea, and then run straight west for 450 miles to the Russian port of Novorossiysk on the northern coast of the Black Sea. From there oil would be transshipped by tanker across the Black Sea through the Bosporus Strait and into the Mediterranean. In other words, the pipeline would have to traverse Russian territory.

What was not obvious was how to get it done—not physically, but commercially, and even more so, politically. The battle would be no less contentious than the struggle over the pipelines out of Azerbaijan, no less complicated in the clash of ambitions and politics. It would also be caught up in the complex post–Cold War geopolitical struggle to redefine the former “Soviet space” and the relationships among Moscow, the Near Abroad, and the rest of the world. The players here would include Kazakhstan, Russia, the United States, and, later, China; Chevron and other oil companies; as well as the Persian Gulf oil-producing nation of Oman. Improbably, at the center of it all, at least for a time, was a flamboyant Dutch oil trader, John Deuss, whose penchant for high living included stables with champion jumping horses, two Gulfstream jets, yachts, ski resorts, and a variety of homes. His involvement in Kazakhstan was bankrolled by Oman, with which he had developed a very close relationship.

Chevron, so focused on the Tengiz field itself and also the risks that went with it, had left it to Kazakhstan to finance and organize the pipeline. “We hadn’t planned on building a pipeline,” said Richard Matzke, the head of Chevron Overseas Petroleum. “We felt that the pipeline would be a national asset, and there would be objections to foreign ownership across Russian territory.”

Kazakhstan, still building its institutional capability as an independent nation-state, had turned to Deuss, who, with Oman, would be the “principal sponsor” of the pipeline. What, one might ask, was a Dutch oil trader with Omani money doing trying to build a pipeline across Russia? Deuss had been functioning as a senior oil adviser to the newly independent nation of Kazakhstan and had helped arrange an Omani line of credit for Kazakhstan in its first months of independence. Deuss had won the Kazakhs’ trust. His Omani backer put up the money to initiate what would be called the CPC—the Caspian Pipeline Consortium.

Deuss and Chevron were soon at loggerheads. Chevron now realized that Deuss would be able to extract high tariffs and make a huge profit on the pipeline and also get what he was really after—control of the pipeline. “That wasn’t going forward,” said Matzke.

What followed has been called “one of the most prolonged and bitter confrontations of the era.”

Kazakhstan loomed large to Russia. The two countries shared a 4,250-mile border, and the large ethnic Russian population testified to Kazakhstan’s close links. The Russians resented the growth of U.S. influence in the newly independent states, including in Kazakhstan, and what they saw as an American initiative to cut them out of the action in their natural sphere, the Near Abroad.

More specifically, the Russians regarded Tengiz as “their oil.” They had found it, they had drilled for it, they had begun to develop it, they had put money and infrastructure into it—and it would have been the great new field. But it had been snatched from their hands by the collapse of the Soviet Union. They were determined to extract maximum recompense and ensure that they participated in Tengiz. The two sides were constantly at odds. “It took six years to talk the Russian side round to building the oil pipeline,” recalled Nazarbayev. “The oil lobby in Russia put tremendous pressure on Boris Yeltsin to get him to convey the ownership of the Tengiz oil field to Russia. I had many disagreeable conversations . . . about this.”

Once, at a meeting in Moscow, Yeltsin said to Nazarbayev, “Give Tengiz to me.”

Nazarbayev looked at the Russian president and realized that he was not joking. “Well,” Nazarbayev replied, “if Russia gives us Orenburg Province. After all, Orenburg was once the capital of Kazakhstan.”

“Do you have territorial claims on Russia?” Yeltsin shot back.

“Of course not,” Nazarbayev replied.

With that, the presidents of independent countries, both of whom had risen up together in the Soviet hierarchy, burst out laughing. But Nazarbayev had no intention of giving way. For, if he did, Kazakhstan would have become Russia’s “economic hostage”—and, once again, “an appendage.”5



“THE MAIN THING IS THAT THE OIL COMES OUT”

But with no progress on resolving the ownership and economics of the pipeline, Kazakhstan’s frustration was growing. It needed a go-ahead on oil; its economic situation was desperate. GDP had shrunk almost 40 percent since 1990, and its nascent enterprises could not get international credit. Nazarbayev’s anger over the impasse between Deuss and Chevron mounted. “The problem is that the money has to be invested,” the irate Nazarbayev declared. “What difference is it to me if it is Americans, Omanis, Russians? The main thing is that oil comes out.”6

As it was, the oil was coming out, but only with great difficulty and improvisation. As production rose, Chevron started shipping 100,000 barrels a day by tanker across the Caspian to Baku. Then, what seemed to be the entire Azerbaijani and Georgian rail systems were mustered to move the oil on to the Black Sea. Chevron was also leasing six thousand Russian rail tank cars to move additional oil to the Black Sea port of Odessa, which, to make things more complicated, was now part of Ukraine. Once again, it seemed back to the nineteenth century in terms of logistics. And that just would not do.

John Deuss had a particular patron in Oman, the deputy prime minister. But then this minister was mysteriously killed in an auto collision in the middle of the desert. Thereafter Oman’s support for Deuss dwindled away at remarkable speed. At the same time, Kazakhstan canceled Deuss’s exclusive rights to negotiate for financing for the pipeline. The United States was becoming alarmed at the delay in getting the transportation issue settled and the resulting risks to the financial stability and thus the nationhood of Kazakhstan, which had been very cooperative on a number of issues—most notably in disposing of the nuclear weapons left behind in its territory after the collapse of the Soviet Union. Without the oil pipeline, this particular “newly independent” state was certainly going to be less independent. Having a freebooter—oil trader John Deuss—end up with control of something so strategic and significant for global energy security as the major export route for Kazakhstan’s future oil was definitely seen as a problem. Finance would be key to whether Deuss’s plan would go ahead. It became clear that Western loans were never going to be available to finance John Deuss to become the pipeline arbiter of Kazakh oil. With that, Deuss faded out of the picture.

But Moscow still needed to agree to a pipeline running through Russian territory. United States Vice President Al Gore used his co-chairmanship of a joint U.S.-Russian commission to successfully convince Premier Viktor Chernomyrdin that this was in Russia’s interests. It also became very apparent that Russian participation in the project itself would be an asset. Russia’s Lukoil, in partnership with the American company ARCO, came in and purchased a share of Tengiz.

Meanwhile, Kazakhstan had asked Mobil to help put up money for the pipeline. “I finally said we were not going to help on the pipeline in order to help Chevron crude to get out of Tengiz,” said Mobil’s CEO, Lucio Noto. “Tengiz was an absolutely world-class opportunity.” Mobil paid a billion dollars, part of it up front, and bought a quarter of the oil field itself.7

In 1996 a new agreement dramatically restructured the original consortium. The oil companies were now members in a 50-50 partnership with the Russians, the Kazakhs, and Oman. The companies paid for the construction of the new pipeline—$2.6 billion—while Russia and Kazakhstan contributed the right-of-way and such pipeline capacity as was already in place. There was still much that was difficult to get done, including securing the actual route.

Matzke and Vagit Alekperov, the CEO of Lukoil, barnstormed by plane, visiting the interested parties all along the proposed pipeline route. Each stop required a banquet or a heavy reception, which sometimes meant as many as eleven meals a day for the traveling oil men, leaving them stuffed and groggy by nighttime. With the door thus opened, the Caspian Pipeline Consortium had to follow up and go into every locality and to negotiate right-of-way agreements for the new pipeline.8

Nonetheless, in 2001 the first oil from Tengiz passed into the pipeline. This was a landmark. Kazakhstan now, too, was integrated into the global oil industry. In the years that followed, there were many points of contention about Tengiz, which continue to the present day, but they were about the traditional issues—about how much the government’s “take,” or share of revenues and profits, would increase. By 2011 production was up to about 630,000 barrels of liquids per day—ten times what it had been when Chevron had begun to work in the field a decade and a half earlier—and planning was well advanced for the next stage of increase. The difficulties of dealing with the sour gas, laden with hydrogen sulfide, had, however, driven the price tag for Tengiz up from the anticipated $20 billion to more like $30 billion.

Tengiz is not the only supplier into the Caspian Pipeline. Another significant field, Karachaganak, feeds into it, as do other smaller fields.



KASHAGAN

The largest single oil field discovered in the world since 1968 is also in Kazakhstan. This is the immense Kashagan field, fifty miles offshore in the waters in the northeast of the Caspian. The Soviet oil industry had done seismic testing there but did not have the technology to explore the offshore region. In 1997 a consortium of Western companies had inked a deal with the Kazakh government to explore and develop the northern Caspian. In July 2000 they struck oil. Subsequently, Kashagan’s recoverable reserves have been estimated at 13 billion barrels, as big as the North Slope of Alaska.

Kashagan’s potential may be great, but it has also been the subject of continuing contention and discord among the international partners—ENI, Shell, ExxonMobil, Total, ConocoPhillips, and Japan’s Inpex—and between all of them and the Kazakh government. For while Kashagan may be immense, so are its challenges. They dwarf by far those of Tengiz. A whole new production technology has had to be designed for the complex, fragmented field in what has been described as “the world’s largest oil development.” The petroleum resources are buried two and a half miles beneath the seabed, under enormous pressure and suffused with the same dangerous hydrogen sulfide found onshore at Tengiz. After many difficulties and setbacks, and in the face of ballooning costs and much acrimony and debate, the companies had to start over and reallocate roles. The project has taken almost a decade longer than anticipated to complete; first oil is not expected before 2012; and anticipated costs have increased to more than $40 billion for the first phase. All of this has infuriated the Kazakh government, which is having to wait years longer than it had anticipated for Kashagan revenues to start flowing into its treasury. But when Kashagan does start up production, it could add 1.5 million barrels of oil a day to world supplies.9



ONE MORE DEAL

There was one other notable Kazakh deal, though not understood as such at the time. In 1997 China National Petroleum Corporation, a state-owned oil company little known to the outside world at the time, bought most of a Kazakh oil company called Aktobe Munaigas, and committed to build a pipeline to China. Production in 1997 was only about 60,000 barrels a day, but the Chinese have since doubled it. Little attention was paid to that first entry of China into Kazakhstan, and even that attention was mixed with much skepticism about the pipeline and the overall prospects. As one keen observer of Caspian oil was to note almost a decade and a half later, “How wrong we were.”

But, then, centuries earlier a Russian geographer had caught a glimpse of the future. He had written that the people of the steppes would also need to look to the East for the markets for their natural resources.10



TURKMENISTAN AND THE PIPELINE THAT NEVER WAS

One other major source of hydrocarbons was, at least potentially, unleashed by the breakup of the Soviet Union—Turkmenistan. There, too, a plan emerged for major pipelines. It would connect the world in new ways. But that project, too, was complicated and even more contingent, and it has ever since been wrapped in many legends, including the claim that it was part of a grand strategy. In fact, it was much more of a great flyer—a Hail Mary pass of transcontinental proportions.

Turkmenistan sits on the southeast corner of the Caspian, immediately north of Afghanistan. It was highly isolated in Soviet times. Endowed with significant oil resources, it is truly rich in natural gas. This was recognized even in the early 1990s—and even more so today, as Turkmenistan now ranks as the fourth-largest holder of conventional natural gas resources in the world. Immediately after the breakup of the Soviet Union, Turkmenistan managed to earn some money and barter for goods by delivering gas into the Russian pipeline system, just as it had supplied gas to the Soviet system. This was the new country’s major revenue source. But then, in 1993, the Russians abruptly shut down such imports. With their economy in freefall, the Russians did not need the Turkmen gas. Turkmenistan managed to stay afloat economically—just barely—by selling cotton and its limited output of oil.



TAP AND CAOP

Turkmenistan’s entire existing pipeline system, built for the integrated Soviet economy, flowed north into Russia. An alternative export route looked like a very good idea. But given the geography and the neighbors, it was just very hard to see what the alternative route might be. As one Western oil man put it at the time, “Certainly there is no easy way out of Central Asia.” The U.S. government lent support to a project to ship gas from Turkmenistan across the Caspian Sea to Azerbaijan and on to Europe, but that never eventuated.

There was one possibility that recommended itself, but, along with all the other normal inputs of money and engineering capabilities and diplomatic skills, this particular transit route would require something else—very substantial amounts of political imagination. For the envisioned track would take the gas south through Afghanistan and into Pakistan, where some of it would be used domestically and some exported as liquefied natural gas (LNG). The rest would be exported farther south by pipeline into India. Moreover, the proposed 1,040-mile oil pipeline could help move the landlocked petroleum resources of Central Asia south to global markets, closer to Asia, but without having to go through Iran and the Persian Gulf. “Only about 440 miles of the pipeline would be in Afghanistan,” one oil man optimistically said in congressional testimony. And the route had one more decided advantage: it looked to be “the cheapest in terms of transporting oil.”

It was a very big idea that appealed to a company called Unocal, one of the smaller of the U.S. majors. Started as a California company, it had already developed a significant position as a natural gas producer in Southeast Asia, and had also been one of the pioneers of the AIOC, of which it owned about 10 percent. Once the Baku-Tbilisi-Ceyhan Pipeline project got going, recalled John Imle, Unocal’s president, “We asked ourselves, What’s the next project? Turkmenistan had a lot of gas, but all the pipelines were going north, and the Russians were not taking the gas. Our premise was that Central Asia needed an outlet to the Indian Ocean.” So convinced was Unocal of the potential of additional transport routes that it embraced what became a famous slogan, “Happiness Is Multiple Pipelines.”

For Unocal, a project with Turkmenistan could be the game changer, an enormous opportunity that could leapfrog Unocal into the front ranks of international companies. Marty Miller, the Unocal executive with the responsibility for the project, described it as the “moon shot” in the company’s portfolio of possible future projects. It was an $8 billion idea, for it would also be a “twofer”—twin natural gas and oil pipelines. The natural gas line was dubbed the Trans-Afghan Pipeline; and the oil, the Central Asian Oil Pipeline.

Together TAP and CAOP (the latter pronounced as “cap”) would open global markets to Turkmen resources; they would provide significant transit revenues to Afghanistan, an alternative to the revenues that the nation derived from opium cultivation. TAP would deliver natural gas to the growing economies of Pakistan and India, where, the economics indicated, it would be cheaper than imported LNG. CAOP would move a million barrels per day of oil south from Turkmenistan and elsewhere in Central Asia, perhaps even Russia.11

Unocal could already clearly see that the great growth markets of the twenty-first century would be in that region. Yet reflecting the perspectives of the times, the main markets for Turkmen oil were thought to be Japan and Korea. China, as a market at that point, was still little more than a footnote. After all, it was only two years earlier that China had stopped exporting oil and become an importer. The gas project was particularly compelling to some policymakers in India, who hoped that a natural gas link would tie India and Pakistan together with common interests that would help to offset decades of conflict and rivalry. They called it a “peace pipeline.”

To say the project was “challenging” was an understatement.



TURMOIL EN ROUTE

The main transit country for TAP and CAOP was Afghanistan, but Afghanistan in the mid-1990s was hardly a functioning country. For ten years the country had been torn apart by a war between Soviet troops, which had invaded in 1979, and Afghan mujahedeen, supported by Pakistan, the United States, and Saudi Arabia, among others. “The greatest mistake [of the Soviet intervention] was failing to understand Afghanistan’s complexity—its patchwork of ethnic groups, clans and tribes, its unique traditions and minimal governance,” Soviet president Mikhail Gorbachev later said. “The result was the opposite of what we had intended—even greater instability, a war with thousands of victims and dangerous consequences for our own country.” Gorbachev knew of what he spoke. The retreat of the last Soviet troops over the Termez Bridge back into the Soviet Union in February 1989 was the final act in the projection of Soviet military power beyond its borders, and it had failed—that retreat would be a grim landmark on the way toward the collapse of the Soviet Union.12

But, then, with the war over, and the world caught up in both the collapse of communism and the Gulf War, Afghanistan slipped off the international agenda and was forgotten—an omission that would have enormous global consequences a decade later. The country degenerated into civil war and lawlessness as warlords struggled for primacy. In 1994 a group of Islamists—the “students” or “Taliban”—came together as vigilantes to take matters into their own hands and restore order, but also, as it turned out, to establish a very strict Islamic order. They rallied supporters in a campaign against corruption and crime and hated warlords. Very quickly, operating with a cavalry of Toyota pickup trucks equipped with machine guns, they turned themselves into a zealous militia, already battle-hardened by the war against the Soviets. They gained control over much of the southern part of the country, largely dominated by the Pashtuns, which they renamed the Islamic Emirate of Afghanistan.13

There was yet another obstacle to TAP and CAOP—the historic enmity, sometimes punctuated by war, between India and Pakistan, the two countries that were intended to be the main outlet for the gas and oil flowing from Turkmenistan. Their militaries were designed mainly to fight each other, and conflict too often seemed imminent.

Pakistan itself, with its very contentious politics, was in a state of continuous political turmoil. The ISI, the Pakistani security services, was sponsoring the Taliban to pursue what it saw as Pakistan’s own strategic interests—in particular, as a Pashtun buffer against what it feared would be an India-dominated government in Kabul. Events would later demonstrate that this was a mistake of historic proportions. For Al Qaeda and a combined Afghan and Pakistan Taliban would, a decade and a half later, challenge the very legitimacy of Pakistan as a nation and seek to destabilize and overturn it and replace it with an Islamic caliphate.



THE “TURKMENBASHI”

In Turkmenistan itself, there was one additional issue: the resources had to be secured. And that meant dealing with one of the most unusual figures to emerge from the collapse of the Soviet Union—Saparmurat Niyazov, the former first secretary of the Turkmenistan Communist Party, who had, after the Soviet breakup, taken over as president and absolute ruler. He had also anointed himself “Turkmenbashi”—“the Leader of All the Turkmen.” His cult of personality rivaled any in the twentieth century. (He once privately explained that it was part of his drive to create identity and legitimacy for the new Turkmen nation.) His picture was everywhere; his statues, plentiful. He renamed the days of the month after his mother and other members of his family, all of whom had been killed in a 1948 earthquake. Niyazov himself had been raised in an orphanage. He had been selected as head of the Communist Party in Soviet times after his predecessor was removed because of a nepotism scandal involving many relatives; it was said that Niyazov’s accession was helped because he had no relatives. Once Turkmenistan became independent, Niyazov emptied school libraries, refilling them with his Ruhnama, a rambling combination of autobiography and philosophical rumination on Turkmen nationality. Medical doctors had to renounce the Hippocratic oath and instead swear allegiance to him. He also ordered a reduction in the number of school years for children, banned opera and ballet as “alien,” and prohibited female television news anchors from wearing cosmetics on air.

While highly authoritarian in most ways, Niyazov was rather liberal in one way—and that was with the country’s physical resources. For Turkmenistan was thought to have sold the same natural gas to more than one buyer. In this particular case, Unocal thought it had obtained rights to export key gas resources. But so did Bridas, an Argentine company, which had additional support from Pakistan. Unocal worried that Niyazov did not understand, as one Unocal negotiator put it, what was required to “implement a project of such magnitude.”14



HOPE AND EXPERIENCE

Nevertheless, by the autumn of 1995, Unocal had a preliminary agreement with Turkmenistan. Niyazov was in New York City for the fiftieth anniversary of the United Nations, and Unocal organized a signing ceremony at the Americas Society on Park Avenue. The ceremony was immediately followed by a lunch in the grand Salon Simón Bolívar. Dominating the room was a large map of the region, set up on easels, that showed the proposed routes for TAP and CAOP. The lunch was presided over by John Imle, Unocal’s president, a man of some enthusiasm. Struggling to find common ground with the Turkmenbashi, which was not at all an easy thing to do, Imle came up with at least one thing they absolutely and indubitably shared in common—both were fifty-five years old, he declared with a big smile.

The guest of honor was former Secretary of State Henry Kissinger, who was escorted to the map, which he spent some time examining, including the route by which TAP and CAOP would snake down from Turkmenistan through Afghanistan, over the mountains into Pakistan, and then branch to the sea and down farther into India. After the meal, Kissinger delivered the luncheon address. He offered best wishes on the project. He then added his own assessment of its prospects. “I am reminded,” he said, “of Dr. Samuel Johnson’s famous comment on second marriages—that they are ‘the triumph of hope over experience.’ ”

Imle turned a little white. He wasn’t sure if it was a joke or a prophecy.



“NO POLICY”

There was little interest in the project on the part of the U.S. government, which was much more preoccupied with the breakup of the Soviet Union and the other energy initiatives involving Azerbaijan and Kazakhstan and that possible gas pipeline across the Caspian. This mirrored the larger disinterest toward Afghanistan, so different from just a few years earlier, when it had been the last battleground of the Cold War. Once that struggle was over in 1989, the United States just packed up and seemed to forget about Afghanistan and its postwar reconstruction. Much of Afghanistan’s educated middle class was long gone, and the country fell back into battle among the warlords who had led the mujahedeen. As the U.S. ambassador to Pakistan later said, “There basically was no policy” toward Afghanistan in the 1990s.

Unocal recognized that it could not operate in a vacuum. It needed someone to negotiate with; a condition for the implementation of the pipeline project was “the establishment of a single, internationally recognized entity” running the country, one “authorized to act on behalf of all Afghan parties.” Who would it be? Trying to implement this transformative project, both for the region and for itself, Unocal was struggling to understand the competing factions, especially the Taliban. Were the Taliban “pious people” who would bring some order and stability to the chaotic, violence-wracked country? Or were they militants and fanatical zealots with an altogether incompatible agenda?

It often happens that when a U.S. oil company is entering a new country, the company will invite representatives from that country to the United States to tour its facilities and learn more about how the company and the industry operate—and to begin to establish the kind of working dialogue that is required when hundreds of millions and then billions of dollars start getting invested. But in Afghanistan, this was much more challenging than is typically the case. In an effort to build some bonds—“these guys had never seen the ocean,” said Imle—Unocal brought a delegation of Taliban to the United States. Included was a trip to Houston to show them the modern oil and gas industry, and to Washington for a visit to the State Department. But, as Unocal recognized at the time, “no high level US involvement [had] materialized.” Unocal similarly helped sponsor a visit by the Taliban’s hated rival, the Northern Alliance, that followed the same route. Imle gave a similar message to both groups. “We can only deal with you when you stop fighting, form a government that is representative of all factions, and recognized by the United Nations.” Unocal also gave both sides the same present, a piece of communication technology that was a very practical symbol of the advancing technology of the 1990s—a fax machine. The message to both groups was the same: Stay in touch.15



WHICH SCENARIO?

In the spring of 1996, Unocal examined a report outlining several scenarios, with a range of probabilities, for the future of Afghanistan. None of them were promising. The highest probability was “a continuation of the warlordism scenario.” In another the non-Pashtuns would break off and form their own state, Khorastan, which would orient itself toward Central Asia. There was also a scenario in which Iran and Pakistan would become much more directly involved on the ground in Afghanistan.

The least likely scenario in the report was a “triumphant Taliban.” Under that scenario, it was thought, the Taliban would need economic development to consolidate its hold and “gain popular support”—which, rationally, would lead it to “seek foreign aid and investment.” But that effort would be hampered by the Taliban’s “major human rights violations in their dealings with women, Shiites, and Tajiks.” A Taliban victory seemed dubious, impeded among other things by factionalism and infighting within the movement. But the Taliban’s odds might improve for a variety of reasons, including if it were to “receive a substantial increase in outside assistance without similar increase in support” for the government in Kabul.

One source of support was the ISI, Pakistan’s intelligence agency, which stepped up to offer the Taliban “unlimited covert aid.” But in the spring of 1996, another source materialized. Unbeknownst to most of the world, the virtually unknown Osama bin Laden, avoiding extradition to Saudi Arabia, had moved his retinue from Sudan to Afghanistan and set up shop. He began to substantially bankroll the Taliban. There he also built his own organization, Al Qaeda. It was from his new redoubt in Afghanistan that, in the summer of 1996, he issued his then-obscure fatwah—his “declaration of Jihad against Americans Occupying the Two Sacred Places” and an attack on the Saudi royal family as “the agent” of an alliance of imperialistic Jews and Christians—a document that was faxed to newspapers in London, though with little notice.

Months later, in the largest mosque in Kandahar, Mullah Omar, the one-eyed leader of the Taliban, would, during his sermon, embrace Bin Laden as one of “Islam’s most important spiritual leaders.”16



THE END OF THE ROAD

By the early autumn, the formerly least likely of the scenarios examined by Unocal now seemed the most likely. On September 27, 1996, the Taliban captured Kabul. They wasted no time imposing their strict version of Islamic law. No cigarettes, no toothpaste, no television, no kite flying. Eight thousand women were summarily expelled from Kabul University, and religious police would beat women pedestrians who were unaccompanied by men.

But the battle for Afghanistan was not over. The Taliban was still at war with the Northern Alliance; the country was not consolidated; and perhaps there was still the opportunity to engage with some factions within the Taliban. At the same time, Turkmenistan president Niyazov was stoking Washington’s alarm by threatening to turn to Iran as a major export market and transport route for Turkmen gas. Toward the end of 1996, Unocal mustered its confidence and, in an effort to build momentum and diplomatic support, announced that, with partners from Saudi Arabia, South Korea, Japan, and Pakistan, it hoped to start building a pipeline by the end of 1998.

But this plan was becoming increasingly problematic. In the United States, the entire project was becoming a target of criticism, including from a movement, led by the wife of talk-show host Jay Leno, that attacked Unocal for associating with a regime so repressive of women. Unocal sponsored skill training for Afghan women as well as men. It retained an Islamic scholar to try to communicate with the Taliban what the Koran really said about women, but the Taliban wasn’t interested. “Once we understood who the Taliban were, and how radical, this project didn’t look so good,” said Marty Miller.

Many years earlier, in 1931, a British scholar of Central Asia had observed: “In Afghanistan, both European clothing and unveiling are anathema, and there has been a strong reaction in favor of Islam, the old customs and the old abuses.” That still seemed true 65 years later. The Westerners could not fully grasp how deep-seated were the cultural antagonisms into which they were treading—and how much these antagonisms resonated across history—and what was ahead. Nor did they know how much money Osama bin Laden was already spending on the Taliban—nor what he was brewing in the Afghan city of Kandahar.

On August 7, 1998, two teams of suicide bombers hit U.S. embassies in Kenya and Tanzania. The attacks were highly coordinated, just nine minutes apart. Kenya was worst hit, with 211 dead and 4,000 wounded. The attack had been masterminded from Afghanistan by Al Qaeda. A few days later, the United States retaliated with cruise missiles aimed at a suspected chemical weapons facility in Sudan and at an Al Qaeda training camp in Afghanistan.

“It didn’t take us five minutes to know that it was all over,” said Unocal’s John Imle. “We were in regular contact with the U.S. embassy in Pakistan, and no one had ever said anything about terrorism. But now we understood what Bin Laden was doing in Kandahar.” Imle called Unocal’s chief representative, who happened to be on vacation in the United States, and told him to forget about going back to Islamabad, Pakistan, let alone to Kandahar. It was too dangerous for any U.S. businessman promoting a project that so clearly was anathema to the Taliban. A few months later, instead of starting construction, Unocal declared that it was withdrawing altogether from the project.

Thus, TAP and CAOP were finished before they started. A project that would have opened a wholly new route for Central Asian resources to the great growth market of Asia was never to be. The moon shot never got off the ground. It was aborted before launch by the Taliban and its ally, Al Qaeda, both armed with a militant ideology and a version of religion that was determined to return to the Middle Ages.17



What happened in the 1990s—with the offshore field in Azerbaijan and the Baku-Tbilisi-Ceyhan Pipeline, and Tengiz and the Caspian pipeline—was very significant for the supplies these projects brought to the markets. Today the total output of Azerbaijan and Kazakhstan is 2.8 million barrels of oil per day—equivalent to more than 80 percent of North Sea production, and four times what they were producing a little more than a decade earlier. But these deals were significant as turning points—for the way in which they redrew the map of world oil, for their geopolitical impact, for the consolidation they provided to the newly independent states, and for the way in which they reconnected the hydrocarbons of the Caspian to the world economy—on a scale that could never have been imagined during the first great boom a century earlier.

More than a decade later, Turkmenistan is still negotiating with Western companies over the development of its natural gas resources. Pakistan is struggling with a domestic Taliban insurgency. And NATO forces, primarily American, are fighting in Afghanistan.


4

“SUPERMAJORS”

Asia had been the target market for TAP and CAOP—the “pipelines that never were.” For Asia was booming. But in July of 1997, one of the most buoyant of the economies, that of Thailand, was slammed by a financial crisis that threatened to destroy much of the country’s recent economic progress. Soon the crisis spread, threatening the whole region and the entire Asian Economic Miracle, with far-reaching impact on global finance and the world economy. It would also detonate a transformation in the oil industry.



THE “ASIAN ECONOMIC MIRACLE”

The title of a popular business book, The Borderless World, captured the abounding optimism about the process of globalization in the 1990s that was knitting together the different parts of the world economy. World trade was growing faster than the world economy itself.1 Asia was at the forefront. The “Asian tigers”—South Korea, Taiwan, Hong Kong, and Singapore, and behind them the “new tigers” of Malaysia, Indonesia, Thailand, and the Philippines, plus China’s Guangdong Province—were emulating Japan’s great economic success.

The Asian Economic Miracle was providing a new playbook for third world economic development. Instead of the inward-looking self-sufficiency and the high trade barriers that had been the canon of development in the 1950s and 1960s, the “tigers” embraced trade and the global economy. In turn, they were rewarded with rapidly rising incomes and remarkably fast growth. Singapore was a beleaguered city-state when it gained independence in 1965. By 1989 its per capita GDP, on a purchasing power parity basis, was higher than that of Britain, which, as the birthplace of the Industrial Revolution, had a two-hundred-year head start. Asia also became the foundation for “supply chains,” extending from raw materials to components to final goods. The world was truly being knit together in ways not imagined even a decade earlier.

The high growth rates in Asia meant rising demand for energy, and, specifically, for oil. These countries became the growth market for petroleum, and there was every reason to think that this Asian economic growth would continue at its fevered pace.



JAKARTA: “OPEC’S ECONOMIC STARS”

OPEC petroleum ministers convened for one of their regular sessions in Jakarta, Indonesia, in November 1997. Asia’s buoyant prospects were much on the minds of the delegates. Many of them were considering how to reorient their trade more to the East. Here, after all, it seemed, was their future. But, as if to symbolize how bumpy the road to fast growth could be, they found themselves booked into a not-quite-finished luxury hotel in which the water supply was quite unpredictable.

After four days of discussion in Jakarta, they agreed to raise their production quota by two million barrels per day. This decision was intended to end the wrangling over quotas and overproduction among members. It was read by some as a bet on Asia’s future, but it also had another, much more specific purpose. Some of the countries, notably Saudi Arabia, were quite aggravated that other countries, particularly Venezuela, were producing at their maximum capacity, not at their quotas, and thus taking market share at Saudi Arabia’s expense. Raising the quota at Jakarta would level the playing field. Now all the exporters could, in effect, officially produce at their maximum. Market conditions seemed to necessitate the increase. World consumption had risen more than two million barrels per day between 1996 and 1997, and the International Energy Agency was predicting that the world’s consumption would rise by another two million barrels per day in 1998. “Price will hold up,” the oil minister from Kuwait said confidently after the decision was announced. “The rise is a very reasonable one.”

That judgment was widely shared. An observer described market conditions as nothing less than “the alignment of OPEC’s economic stars.” But, in the heavens above, the stars were silently moving.2



“ESSENTIALLY ALL GONE”: THE ASIAN FINANCIAL CRISIS

During the course of the Jakarta conference, two of the delegates to the meeting were taken to dinner by the head of the local International Monetary Fund office. He told them in no uncertain terms that the currency crisis that had begun a few months earlier was only the beginning of a far more devastating crisis—and that the Asian economic miracle was about to crash on the rocks. The two delegates were shaken by what they heard. But the decision to raise production, based upon an optimistic economic scenario, had already been taken. It was too late.



“Asia was the darling of foreign capital during the mid-1990s,” and it became the beneficiary of a “capital inflow bonanza,” a great flood of lending by international banks. As a result, Asian companies and property developers had taken on much too much debt—and much of it dangerously short-term and denominated in foreign currency.

It was overleverage in the overheated and overbuilt condo and office building sectors in Bangkok that caused the collapse in July 1997 of Thailand’s currency, which in turn triggered the fall of currency and stock markets in other Asian countries. By the end of 1997, a vast panic was raging over large parts of Asia. Companies tumbled into bankruptcy, businesses closed, governments teetered, people were thrown out of work, and the high economic growth rates gave way to a virtual economic depression in many countries.

At the end of 1997, Stanley Fischer, the first deputy managing director of the International Monetary Fund, hurriedly flew to Seoul. He was taken into the vault of the South Korean central bank so he could see with his own eyes the state of the country’s financial reserves—that is, how much money was left. He was stunned by what he discovered. “It was essentially all gone,” he said.

By then the panic and contagion were spreading beyond Asia. In August 1998, after teetering on the edge of crisis, the Russian government defaulted on its sovereign debt, sending that country into a sudden downward spiral. The ruble plummeted in value, and the Russian stock market fell by an astounding 93 percent. The new Russian oil majors could not pay their workers and suppliers. Salaries were slashed; some of the most senior managers were down to $100 a month.

Wall Street teetered on the edge when the highly levered hedge fund Long-Term Capital Management collapsed. Panic in the United States was averted by fast action by the New York Federal Reserve. In early 1999 the contagion seemed about to sweep over Brazil, threatening what U.S. Treasury Secretary Robert Rubin called an “engulfing world crisis.” An immense rescue effort, mobilizing very large financial resources, was mounted to prevent Brazil from going down. It worked. Brazil was spared. By the spring of 1999, the panic and contagion were over.3



THE JAKARTA SYNDROME

The Asian financial crisis had generated enormous economic ruin. As a result, the assumptions at the end of 1997, embodied in the Jakarta agreement, were all wrong. By implementing the Jakarta agreement, OPEC had been increasing its output—just as demand was falling.

Now there was way too much oil in the world. When there was no more room in storage tanks, seagoing tankers that normally transported oil were turned instead into floating storage. And still there was too much oil. And not enough demand. The price collapsed to $10 a barrel and, for some grades of oil, to as low as $6. These were the kinds of prices that had been seen during the 1986 collapse and had been thought would never be seen again.

The 1997 meeting in Jakarta would be remembered thereafter by the exporters as a cautionary tale—the “Jakarta Syndrome”—the danger of increasing production when demand was weakening or even just uncertain. It was a mistake they intended never to repeat.



THE SHOCK

The price collapse did something else as well. It set off the most far-reaching reshaping of the structure of the petroleum industry since the breakup of the Standard Oil Trust by the U.S. Supreme Court in 1911. The result was something that would have been unimaginable without the circumstances created by the price crash.

As oil prices plummeted, the finances of the oil industry collapsed. “ ‘Bloodbath’ may be an understatement,” said one Wall Street analyst. Companies slashed budgets and laid off employees. One of the major companies shrank its annual Christmas party down to some snacks in the cafeteria. DROWNING IN OIL was the headline on the cover of The Economist. With some exaggeration, that captured what had become the widespread conviction that prices were going to be low for the foreseeable future and that the future of the industry was bleak.4

To some, though, it was an opportunity, not an easy one by any means, but a window through which to get things done. After all, people would still need petroleum, and, indeed, they would need more petroleum when economic growth resumed, which would mean higher prices. But the industry would need to be more efficient, managing its costs better, and leveraging skills and technology across a larger span. That pointed in one direction—toward greater scale. And the way to get there was through mergers.



“WERE HE ALIVE TODAY . . .”

Sanderstolen is a rustic mountain resort in central Norway, reached only by a twisting two-lane highway that has to be laboriously plowed during the winter. In the years after discovery of North Sea oil in Norway’s offshore, it became the venue for the Norwegian government and the oil companies operating in the Norwegian sector to get together and thrash out industry issues—talk in the morning, cross-country skiing in the afternoon.

One morning in February 1998, two investment bankers, Joseph Perella and Robert Maguire, offered a view of the industry that caught the attention of the executives gathered there that year. “The roster of the top publicly traded firms in the oil industry is largely the same as it has been since the breakup of the Standard Oil Trust,” they said in their presentation. “Were he alive today, John D. Rockefeller would recognize most of the list. Carnegie, Vanderbilt, and Morgan, on the other hand, would have difficulty with similar lists for their industries.”

The bankers and their colleagues had been talking about something more than “mergers”—about the imminent emergence of what they had started to call the “supermajors.” For a year, Doug Terreson, an analyst at Morgan Stanley, had been laboring over a paper that declared the “Era of the Super-Major” was at hand. “Unparalleled globalization and scale” resulting from mergers—combined with greater efficiency and a much wider book of opportunities—would lead to “superior returns and premier valuations.” In short, larger companies would be more highly valued by shareholders. And, by implication, those companies that were smaller and less highly valued would be at risk.5

Someone would need to go first. But how could mergers be done? Hostile takeovers looked very difficult to do, so companies would have to agree on a price. There was also a formidable obstacle—what is variously called antitrust in the United States and competition policy in Europe. After all, the most famous antitrust case in history was the one against John D. Rockefeller’s Standard Oil Trust, which the Supreme Court had decided in 1911.

Beginning in the mid-1860s, Rockefeller had marched out of Cleveland with “our plan,” a concept for transforming the volatile, chaotic, and individualistic new American oil industry into one highly ordered company, operating under his leadership. “Methodical to an extreme,” in the testy words of a former partner, Rockefeller had proceeded with cold-eyed and single-minded determination, a mastery of strategy and organization, and a bookkeeper’s love of numbers. The result was a massive company, the Standard Oil Trust, that controlled up to 90 percent of the U.S. oil industry and dominated the global market. In doing all this, Rockefeller really created the modern oil industry. He also invented the “integrated” oil company in which the oil flowed within the corporate boundaries from the moment it came out of the ground until finally it reached the consumer.

Rockefeller became not only the richest man in America but also one of the most hated, and, indeed, the very embodiment of monopoly in the robber baron age. In 1906 the administration of the trust-buster, President Theodore Roosevelt, launched the momentous case charging the Standard Oil Trust with restraint of trade under the Sherman Antitrust Act. In May 1911, the U.S. Supreme Court upheld lower court decisions and ordered the Standard Oil Trust broken up into thirty-four separate companies.6

Ever since the dissolution of the Standard Oil Trust, virtually every American law student interested in antitrust has studied that case. And, again and again, in the decades since 1911, the industry had been investigated for allegations and suspicions of colluding and restraining trade. Wouldn’t combinations, creating larger companies, only fan the flames of suspicion? But times had changed. The global playing field was much larger. Altogether, the large international oil companies now controlled less than 15 percent of world production; most of it was in the hands of the national oil companies, which had taken control in the 1970s. Some of these government-owned companies, such as Saudi Aramco, were becoming effective and capable competitors in their own right, backed up by immense reserves that dwarfed anything held by the traditional international oil companies.

In order to gain efficiency and bring down costs—and with the approval of antitrust authorities—some of the companies had combined, in key markets, their refineries and networks of gasoline stations. But none of these had sought to overturn the established lay of the land, the demarcations of corporate boundaries so clearly set in place by the 1911 Supreme Court decision.



THE MERGER THAT WASN’T

The chief executive of BP, John Browne, was among those who were convinced that something radical needed to be done. Trained first as a physicist at Cambridge University and subsequently as a petroleum engineer, Browne had considered a career in academic research. Instead, he had gone to work at BP, where his father had been a middle-level executive, for some time based in Iran. His mother was a survivor of the Auschwitz concentration camp, although this was known only to a very few until after her death in 2000.

Browne had entered BP on what was called an “apprentice program.” He quickly proved himself what the British called a high-flier, moving rapidly up in the organization. In 1995 he became chief executive. He was convinced, he said, that “we had to change the game. BP was stuck as a ‘middleweight insular British company.’ It was either up or out.”

During a BP board meeting, Browne laid out the rationale for a merger: BP was not big enough. If it did not take over another company, it was in danger of being taken over. BP needed to become bigger to achieve economies of scale, bring down costs, and take on larger projects and risks. And it needed the clout that came from scale to be taken “seriously” by the national companies. Browne was apprehensive that the board members would conclude that just one year after choosing him as CEO, he had taken leave of his senses. But, somewhat to his surprise, the board gave a contingent go-ahead.

The best fit for BP seemed to be Mobil, the second-largest of the successor companies to the Standard Oil Trust. In the many decades since the breakup, it had turned itself into one of the largest international integrated oil companies in its own right. It was also one of the most visible. Its flying horse insignia was known around the world; it had invented the “advertorial” in the right-hand bottom corner of the New York Times; and it was one of the biggest supporters of PBS, public broadcasting in the United States, most notably of Masterpiece Theater. Moreover, BP had already established a joint venture with Mobil in European refining and marketing operations that had saved $600 million and had proved that the two companies could work together.

Mobil’s CEO was Lucio Noto. Known throughout the industry as “Lou,” he had wide international experience and his avocations were notably broad, extending from the opera to rebuilding the engines of old sports cars.

Mobil faced big strategic problems. A significant part of its income came from one source—the Arun LNG project on the island of Sumatra, in Indonesia. But, as Noto put it, “Arun was going downhill.” It was in decline and would require new investment, and that meant that there would be a large gap in profitability until new projects came on stream. This threatened Mobil’s standing with its shareholders and would make it vulnerable to a hostile takeover.

The company needed time. “To have one really good upstream asset,” Noto said, “you have to have six projects in the frying pan to bring experience, money, and talent to bear.” Moreover, Mobil’s new growth projects were in Nigeria, Kazakhstan, and Qatar, as well as Indonesia, meaning that the company’s future prospects would be susceptible to geopolitical risks of one kind or another.

Qatar’s vast offshore natural gas field, at the northern end of the Persian Gulf, would be a particular challenge. Because of the field’s immense size, the investment bill would be enormous. “The more we learned about Qatar,” said Noto, “the more we realized that it would be beyond the capacity of a single company.”

“We had to do something,” recalled Noto. “We could survive. But we couldn’t really thrive.”

Mobil was ready to talk to BP. Secrecy was essential. If any news leaked, it would be damaging to the companies involved and could wreak havoc with the stock price. Browne and Noto sketched plans for a two-headed company, with listings on both the New York and London stock exchanges. Finally, after lengthy negotiations and much consideration, it became clear that BP would in effect be taking over Mobil—and that there would be no premium for Mobil’s shareholders.

Noto met Browne at the Carlyle Hotel in New York City. His message was very simple: Without a premium, there could be no deal.

“I can’t do it,” Noto said. Browne was stunned. Just to be sure that there was no misunderstanding, Noto handed him a short, carefully drafted “Dear John” letter, which expressed great appreciation for the discussions but made clear, absolutely clear, that they were over.

There was not much else to say as they stood there. But Noto had one other thought. “I don’t know what will happen,” he said.

Browne flew home in silence. What would his own board, which he had worked so hard to convince, think when he broke the news? Maybe they would conclude that he really had taken leave of his senses.7



THE BREAKOUT: BP AND AMOCO

As soon as he was back in London, Browne called Laurance Fuller, the CEO of Amoco, which was headquartered in Chicago. The former Standard Oil of Indiana, Amoco was one of the largest American-based oil companies. Although its assets were heavily weighted to the United States, it had been one of the pioneering oil companies to go into the Caspian after the collapse of the Soviet Union, and it was now one of the major partners, along with BP, in Azerbaijan.

Fuller and Browne chatted first about the state of their project in Azerbaijan. That was the warm-up. Then Browne popped the question.

“What are your thoughts about the future of Amoco?” Browne asked. “Because it seems to me it’s a good time for a few oil companies to get together.”

Fuller showed no surprise over the phone. He reminded Browne that in the early 1990s, Amoco and BP had discussed combining their petrochemical operations, but BP had broken off the talks.

“What’s new?” Fuller asked.

“Strategically,” Browne replied, a merger is “something we ought to do.”

“Well, it’s not on my agenda,” Fuller said. “But why don’t we talk?”

“When would be convenient?”

“How about the day after tomorrow?”

Two days later they met in British Airways’ Concorde lounge at JFK Airport in New York. Amoco had gone through a series of restructurings and major strategy projects to try to find a way forward but without clear success; Fuller, a lawyer who had been CEO for almost a decade, was personally pessimistic about the future of the industry. BP was bigger than Amoco, so it was going to be a 60-40 deal. But the negotiations foundered on structure—whether it would be a two-headed company, with headquarters in both Chicago and London, and whether Fuller would share power with Browne.

In early August 1998, Browne, surrounded by his team, called Fuller from his home on South Eaton Place in London. “This only works if it’s a British company, based in London, and we get one more director on the board,” said Browne. “That’s it.” He asked Fuller to let him know within the next twenty-four hours. Several hours later, Fuller called back. It was a go, he said. He was getting on his plane.

A few days later, August 11, 1998, BP convened a press conference in the largest venue it could find, on short notice, in London—the Honourable Artillery Company, in the City of London—in order to accommodate a huge press corps. It was clear that something very big was about to be announced. London was in the midst of a heat wave, and it was another hot day, blazing hot, and the circuits in the building were overloaded by the temperature and all the television cameras. As Browne stood up to announce the deal, a fuse blew. The whole room went dark. Not an auspicious start for what was, up to that point, the largest industrial merger in history. But the sensational news got out far and wide—a $48 billion merger, a potentially transformative step for the world oil industry. And, although not said publicly, it was what BP needed if it was to become a heavyweight.

The implementation proceeded quickly. The Federal Trade Commission found no major antitrust issues. The businesses of the two companies “rarely overlap,” said the chairman of the FTC, and consumers will continue to “enjoy the benefits of competition.” The BP-Amoco deal closed on the last day of December 1998.8



TOO GOOD TO BE TRUE

John Browne was scheduled to speak in February 1999 at a major industry conference in Houston. Two days before the conference, he called the organizers. He was very apologetic. Something urgent had come up in London and unfortunately he wouldn’t be able to make it. He would send one of his senior colleagues to read his speech in his place.

It was an excuse. The real reason was that Browne was scheduled to be the keynoter on Tuesday, and the keynoter on Wednesday was Michael Bowlin, the president of one of the major U.S. oil companies, ARCO. And Browne could not take the risk of being on the same program with Bowlin, not given what both were then engaged in.

A month earlier, in January 1999, Bowlin had called Browne from Los Angeles, which was ARCO’s hometown. Bowlin had a simple message: “We would like BP to buy ARCO,” he said.

Unlike Browne, Bowlin did appear at the Houston conference. His speech was on the future of natural gas, which was a little ironic: for Bowlin, it seemed, had concluded that oil did not have much future. Bowlin and the ARCO board had lost confidence in the company’s prospects. ARCO’s major asset was its share of the North Slope oil in Alaska, and with oil around $10 a barrel amid the price collapse, management worried that it would not be able to survive.

“It seemed too good to be true,” Browne later observed. ARCO “simply wanted to drop into the lap of BP.” This was a superb opportunity for BP, especially because of the efficiencies that would come through combining ownership and operatorship of their large North Slope oil resources. The North Slope was the largest oil field ever discovered in North America, but its production had fallen from a peak of 2 million barrels per day to a million, and a combined operatorship would save several hundred million dollars a year.9

If ARCO had hung on for another six weeks, it would have seen the beginning of a recovery in its fortunes. For, in March 1999, OPEC started to cut back production, which in turn would begin to lift the oil price off the floor. But by then the deal was just about done. The purchase of ARCO for $26.8 billion by BP Amoco (as it was then) was officially announced on April 1, 1999.



“EASY GLUM, EASY GLOW”: EXXON AND MOBIL

The announcement of the BP-Amoco deal the previous August proved to be a historic juncture. The taboo against large-scale mergers had been broken, or so it appeared. Perhaps the greater risk, really, was to not merge.

Lee Raymond, the CEO of Exxon, was at a conference at the Gleneagles golf course in Scotland when the BP-Amoco announcement broke in August 1998. He knew exactly what he should do: get in touch with Lou Noto.

Raised in South Dakota, Raymond had joined Exxon after earning a Ph.D. in chemical engineering in three years from the University of Minnesota. His first jobs were in research. In the mid-1970s, he was drafted to work on a project for the CEO. The oil-exporting countries were nationalizing Exxon’s reserves, and the company needed a strategic direction going forward. Thereafter, Raymond began to play an increasingly key role in reshaping the company. From the mid-1970s onward, the dominant issue for the company had become not only how many barrels of reserves it had, although that was still critically important, but how financially efficient it was. And how much more financially efficient could it be, compared with its competitors? Success on those criteria would enable it to deliver steadily growing returns to pension funds and all the other shareholders. “The industry had to exist,” Raymond later explained. “If you were the best of the lot, you’ll always be there.”

Raymond became president of Exxon in 1987 and its chairman and CEO in 1993. During the years that Raymond led the company, Exxon’s investment process became known for its highly disciplined and long-term focus. Indeed, Exxon’s “discipline” became a benchmark against which the rest of the industry was measured. The long-term focus meant that it kept its investment very steady, whether the price was high or low. It did not suddenly increase its spending when prices went up or abruptly cut it when prices fell. This reflected Raymond’s own steadiness. One of his favorite maxims, whether in boom times or a price collapse, was “Easy glum, easy glow.” Don’t get overexcited and hyperactive when prices are shooting up, or overly depressed and catatonic when they’re headed down.

But by the mid-1990s, Raymond was coming to the conclusion that financial efficiency in itself had limits. Something more was needed, and that something was a merger. Mobil was a candidate. And as Lou Noto liked to say, “Business is about making something happen.”

A couple of months after the breakdown of negotiations with BP, Noto had run into Lee Raymond at a conference. After chatting about various challenges facing the industry, Raymond had said, in his own steady, measured way of speaking, “Something will happen.” Not long after, Raymond phoned Noto and said he was coming to Washington and hoped they could have lunch. Sure, Noto replied. Afterward, Noto happened to ask what would be bringing Raymond to Washington.

“To have lunch with you,” he was told.

On June 16, 1998, over the meal at Mobil’s headquarters in Fairfax, Virginia, Raymond turned to the immediate subject of the joint venture they shared with a Japanese company. Eventually they got to the subject of combining their own companies. They concluded that three questions would have to be answered in the affirmative: First, could they work out a satisfactory deal? Second, would such a deal win the approval of the Federal Trade Commission in the United States and the competition directorate at the European Union in Brussels? The third was the most daunting: “Were we wise enough to mold one organization out of two businesses?” A number of closely held conversations followed. But it became apparent that the two companies were far apart on the all-important question of valuation; that is, on what premium would be paid to Mobil shareholders. The discussions petered out. On August 6, Noto told the Mobil board that he and Raymond “had mutually agreed to discontinue discussions.”

Five days later, BP and Amoco announced their merger.

As soon as Raymond heard the news, he placed that call to Noto from Gleneagles. The valuations in the BP-Amoco deal provided an external yardstick for resolving their differences on the relative prices of Exxon and Mobil shares.

“Your neighbor just sold his house,” is the way Raymond put it. “And now we have another benchmark for what houses are selling for.”

The two companies quickly moved into overdrive on negotiating what was code-named “Project Highway.” A key decision was to create a wholly new structure so that it would be a new company for everybody.

Antitrust was a major concern. BP’s combining with Amoco was one thing. Exxon and Mobil was quite another: it would be a much bigger company, and it would bring together the two largest companies to have emerged from the 1911 breakup of the Standard Oil Trust, which meant it would be a very big news story—and a much bigger subject for regulators.

Noto was deeply worried about the impact on Mobil if they tried to do a merger and it failed because of rejection by the Federal Trade Commission. “Exxon would be okay,” said Noto, “but we would be dead meat.”

But Raymond reassured him. “This merger is going to happen,” said Raymond. “Come hell or high water.”

There was an unwritten understanding within the fraternity of antitrust lawyers that 15 percent of the total U.S. gasoline market was the limit that the FTC would allow for any combination, and this deal would fall below that line.

But what immediately preoccupied the two sides was the third question—getting to a valuation and then figuring out who would own what share. Months of hard negotiation followed, often conducted by Raymond and Noto with just a couple of aides. Finally, on the evening of November 30, the two CEOs came to agreement: Exxon would account for 80 percent of the new company, and Mobil, 20 percent. (This proportion was remarkably similar to their relative proportions in the original breakup of the Standard Oil Trust in 1911.) Mobil’s shareholders would get about a 20 percent premium on their stock. The negotiations were very intense; indeed, so intense that the final valuation on a share of stock went out to six decimal places.

On December 1, 1998, even before the FTC had ruled on the BP-Amoco deal, Exxon and Mobil announced their intention to merge. It was a very big deal. “The New Oil Behemoth,” headlined the New York Times.

At the huge press conference presenting the deal, Noto was asked if it was true that, prior to this deal, there had been discussions with BP and other companies. Noto looked out at the audience, pausing for what seemed a very long time.

“I’ll tell you what my mother told me,” he said. “That you never talk about your old flames on the day you announce your engagement.”

The room erupted in laughter. In general, the managements of the two companies were prepared for just about every question during the press conference—except for one. What would happen, Raymond was asked, to Mobil’s longtime support of Masterpiece Theater on Friday nights on PBS? He uncharacteristically fumbled for an answer.

At another press conference a few hours later, he was asked the same question. This time he answered with a strong affirmation about continuing the commitment. As a follow-up, he was asked what had changed since the previous press conference.

“I talked to my wife,” Raymond said.10



THE GHOST OF JOHN D. ROCKEFELLER

But there remained a huge potential barrier to these deals, and that was the U.S. government—specifically the Federal Trade Commission, which would rule whether they violated antitrust laws. The spirits of John D. Rockefeller and the 1911 U.S. Supreme Court hovered over the consolidations that were transforming the industry, but the world had changed enormously in the years since.

The FTC’s focus was predominantly on refining and the networks of gasoline stations and whether any of the companies would have undue market power, which meant the ability to control the price, in the words of the FTC, “even a small amount.” What was of “intense interest” to the regulators was pricing in the downstream—that is, the cost of fuel coming out of the refineries and gasoline at the pump.11

But the central rationale of these deals was not about refining and marketing—the downstream—in the United States. It was about the global upstream—exploration and production of oil and gas around the world. The companies were seeking efficiency and cost reduction—the ability to spread costs over a larger number of barrels. No less important was the quest for scale—the ability to take on larger and more complex projects (Lou Noto’s “six projects in the frying pan”)—and the ability to mobilize the money, people, and technology to execute those projects. Also, the bigger and more diversified the company, the less vulnerable it was to political upheavals in any country. Such a company could take on more and bigger projects. It was already clear that projects themselves were getting larger. A megaproject in the 1990s might cost $500 million. In the decade that was coming, they would be $5 billion or $10 billion or even more. The BP-Amoco deal sailed through the FTC in a matter of months with only minor requirements for divestiture. But Exxon-Mobil was of entirely different scale—much larger. And just to mention together the names of the two largest legatees of the original Standard Oil Trust seemed enough to evoke the ghost of John D. Rockefeller.

The FTC launched an enormous probe into the proposed merger, in cooperation with twenty-one state attorneys general and the European Union’s competition directorate. As part of its investigation, the FTC mandated the largest disclosure project in history, which altogether required millions of pages of documents from the two companies from operations all over the world, ranging from refinery operations in the United States to a decade’s worth of documents on all lubricant sales in Indonesia. It took almost a year, but finally the FTC came to its decision. In order for Exxon and Mobil to merge, they had to divest 2,431 gasoline stations, out of a total of about 16,000, and one oil refinery in California, plus a few other things. But to those who feared the reincarnation of John D. Rockefeller, the FTC replied that this was not 1911 but rather a very different world. The Standard Oil Trust, explained FTC chairman Robert Pitofsky, “had 90 percent of the U.S. market, while this company after the merger will have about 12 or 13 percent”—below that unstated 15 percent limit. On November 30, 1999, ExxonMobil came into existence as one company.

But at the same time, Pitofsky sent out a warning: a high degree of market concentration would “set off antitrust alarms.”12



THE ALARMS

Those “antitrust alarms” had already been set off by BP’s bid for ARCO. BP-Amoco had moved very fast with its ARCO deal—too fast for the FTC, as it turned out. After a heated internal debate, the commission, by a 3-to-2 vote, decided that the absorption of ARCO would enable BP to manipulate the price of Alaskan oil sold into the West Coast and keep “prices high.” What did “high” mean? According to the mathematics of the FTC’s witness, a combined company would have been able to increase the price of gasoline by about half a cent a gallon for a few years.

In the view of the majority at the FTC, BP had overreached, and before it could close the deal, it would be required to divest the premier asset, the crown jewel, the whole reason that it had wanted ARCO in the first place—the North Slope oil. A chastened BP realized that it had no choice. It proceeded to close the deal in April 2000, but without the North Slope.

The director of the FTC’s Bureau of Economics, writing afterward about the deal, offered a considered judgment that extended to the other mergers of the era: “It is fair to say that in each of these cases, the companies agreed to divestitures that went well beyond what many believed were necessary to protect competition.” But politics, the inherent suspicion of the oil industry, and the sense that the mergers were coming too fast—all these were decisive factors.13



THE FRENCH RECONNECTION: TOTAL AND ELF

Not everyone depended upon the approval of the Federal Trade Commission. In France, what counted was the assent of the prime minister.

France had two major oil companies, Total and Elf, both of which had been state controlled but were now fully privatized. The reason for the two companies was, as Thierry Desmarest, then Total’s CEO, put it, a “historical accident.” After World War II, France’s president, General Charles de Gaulle, was intent on restoring French “grandeur.” He decided that Total, or CFP, as it was known at the time, was “too close to the American and British companies,” and he orchestrated the creation of a second French company, a new national champion, which eventually became Elf.

“We were already convinced at the time of the BP-Amoco deal of the need to grow through consolidation,” recalled Total’s Desmarest. “When we heard about the BP-Amoco deal, it confirmed for us intellectually that we had to consolidate, that we had to grow.”

The first step, at the end of 1998, was to acquire the Belgian oil company Petrofina, which was primarily a European downstream company. By June 1999, Total had worked out a takeover plan for its main target, Elf. By Friday lunchtime, on July 2, a few senior Elf executives were hearing worrying rumors that Total was about to move.

But nothing could happen without the advance approval of the government. Although Elf had been privatized in 1986, the government still held what was called a “golden share,” which gave it a veto over any change of control. Even if there had been no golden share, for a French company to proceed without a green light from the French government would have been career destroying for the managements involved.

The first person who needed to be convinced was Dominique Strauss-Kahn, the finance minister. An economist by profession, Strauss-Kahn quickly understood the competitive economic imperatives of consolidation. He also grasped the alternative: if the French companies did not merge, one of them might well be absorbed by a non-French company, which would be “un suicide politique”—political suicide—for any government that allowed it to happen.

The French prime minister, Lionel Jospin, was another matter. A onetime Trotskyite and one of the founders of the modern French Socialist Party, he was not at all familiar with the oil business and its circumstances. It was made clear to Desmarest that he would personally have to make the case to the prime minister about “the importance to France” of a merger.

Time was very short, as Total was on the very eve of launching its takeover bid. But the prime minister was in Moscow.

On Friday evening, Desmarest flew to Moscow and went directly to the National Hotel, opposite the Kremlin, for a middle-of-the-night meeting with the prime minister and Finance Minister Strauss-Kahn. Desmarest set about explaining the urgency, given what was happening with BP and Amoco, and Exxon and Mobil, and with the national oil companies. “Isn’t this just a matter of the egos of the CEOs?” asked the prime minister. Desmarest was prepared to answer the question. But under the circumstances, he judged it wiser to leave that particular answer to Strauss-Kahn. The finance minister, a former economics professor, gave the prime minister a short and persuasive lecture on the economic reality and global competitive dynamics that made a deal essential for French national interest. The French prime minister absorbed the lesson. He gave the requisite green light.

By Saturday morning, Desmarest was back in Paris, where the team was putting the last touches on the offer. On Monday, Total launched its takeover bid for Elf. The Elf CEO, Philippe Jaffré, was shocked. Elf mounted a counteroffer; it would take over Total.

In the war for shareholder support, the battle was on. Despite the bitter accusations back and forth, the two sides were privately exchanging plans, since it was foreordained that there would be a merger, and a single French company would emerge out of the struggle. With that in mind, Desmarest and Jaffré worked out a private understanding: neither would personally attack the other publicly, since one of them would actually have to run the combined company.

In September 1999 the deal was done. TotalFina took over Elf, and Desmarest became CEO of the combined company. After a short while, TotalFinaElf would come to be known simply as Total, one of the world’s supermajors.14



“WE HAD TO CONSOLIDATE”: CHEVRON AND TEXACO

For Chevron, the former Standard of California and the nation’s third-largest oil company, it was the Exxon-Mobil merger that had really galvanized action. “What surprised me of all of the deals was Mobil’s selling themselves to Exxon,” said David O’Reilly, who would later become CEO of Chevron. “I thought of Mobil as a sizable company, with a good portfolio, and good growth prospects.”

For Chevron, the obvious partner was Texaco, with which it shared the Caltex joint ventures—oil production in Indonesia, refining and marketing throughout Asia, now the fastest-growing market in the world. These joint ventures were five decades old and considered among the most successful such operations involving any kind of companies in the world.

A merger made the same sense to Texaco. The larger companies, the supermajors, would indeed have a higher stock market valuation than the traditional majors. In the spring of 1999, Texaco reached out to Chevron.

The companies secretly dispatched teams to rendezvous in Scottsdale, Arizona. After several days, they concluded that the fit would be excellent. But this would be no merger of equals. Texaco had gone through difficult times. It had lost a $3 billion lawsuit to an independent oil company, Pennzoil, and then, to fend off a hostile takeover from the financier Carl Icahn, it had taken on billions more in debt. As a result, it had to sell its Canadian subsidiary and slash its exploration budget, which would have painful consequences. “It’s a pretty simple rule,” said William Wicker, then CFO of Texaco. “If you cut your exploration budget in Year Zero, you’re not growing in Year Seven and Eight.” Texaco had just started to invest again, but the impact would be years away. Texaco was still a very big company, but Chevron was nearly twice as large and would be the acquirer.

While there was a good fit between the companies, the same could hardly be said of the two CEOs, Chevron’s Kenneth Derr and Texaco’s Peter Bijur. At best, the relationship between them was frosty. Moreover, the two sides could not agree on price, and the discussions broke down. Texaco, Bijur said, was developing a strategy that would get back on a solid growth course.

In the autumn of 1999, Derr retired. The new CEO, David O’Reilly, had been hired by Chevron many years earlier directly out of University College, Dublin, and was immediately dispatched to its Richmond, California, refinery. Now, as CEO, he devoted his first strategy meeting to relaunching a merger plan. “I had already known,” recalled O’Reilly, “that we had to consolidate because otherwise we’d become less relevant and marginalized compared with the competition. You have to be committed and have the stomach to go after assets even in lean times.”

O’Reilly asked for his board’s authorization to pursue a merger. The board’s reply was pretty clear: Yes. And the sooner the better.

Over the years, O’Reilly had become known for his unusual ability to connect with all sorts of people. Now his immediate job was to reconnect with Peter Bijur, the Texaco CEO. The senior managements of the two companies met in San Francisco in May 2000 to review their two Caltex joint ventures in Asia. It was clear that the joint venture structure was a very inefficient way to run such an important—and growing—business in the most dynamic growth region in the world. They needed to change it. At the end of the meeting, O’Reilly suggested to Bijur that they talk privately and then brought up the subject of a merger. Bijur allowed that Texaco’s go-it-alone strategy was going to be hard going in the new business environment. Negotiations were reopened. The Chevron-Texaco merger was finally signed in October 2000. As Bijur somewhat ruefully summed it up, “It’s apparent that scale and size are important as the supermajor oil companies have come on the scene.”15



THE LAST ONES STANDING: CONOCO AND PHILLIPS

The FTC decision in the spring of 2000, forcing BP to divest ARCO’s North Slope assets, inadvertently helped foster the last major merger in the United States. On one side was Phillips Petroleum. Headquartered in Bartlesville, Oklahoma, Phillips was regarded as a mini-major. On the other side was Conoco, which had been owned by the DuPont chemical company since 1981. DuPont had constrained Conoco’s spending and growth, using the profits from oil and gas to build up its life-sciences business. When Archie Dunham became CEO in 1996, he later said, “My number one objective was to free the company from DuPont.” He convinced DuPont that liberating Conoco would be a very good deal for DuPont’s shareholders. On Mother’s Day, May 11, 1998, DuPont announced that it would begin selling off the company.

When the first 20 percent was sold, it constituted the largest IPO in U.S. history until that point. The company took as its mantra “Think big and move fast.” It celebrated the efficiencies that came from being nimble and keeping a direct “line of sight” from top management down into the front line of operations—not possible in a company with the scale of a supermajor. Its television commercials featured agile, nimble cats, which was said to be irritating to the much bigger Exxon, whose own emblem was a tiger.

But there were two obvious risks. One came from being able to bet on only three or four big projects, instead of ten or fifteen. The second was the danger of being absorbed in a hostile takeover. Phillips faced the same risks. And these were not theoretical risks. After all, the reason Conoco had fallen into DuPont’s arms in 1981 was to ward off hostile bids. And later in the 1980s, Phillips had been the target of hostile tenders by both T. Boone Pickens and Carl Icahn. Thus, Dunham and Phillips’s CEO, James Mulva, had begun discussing a possible combination in 2000. But the talks foundered in October of that year.

Instead, the two companies went head to head as finalists in bidding for the Alaskan assets that BP and ARCO had to shed in order to consummate their merger. Phillips was the winner. That meant a strategic transformation. For the acquisition doubled its reserves and gave it a bulk that made it commensurate with Conoco in size. But how were talks to get going again?

During World War I, the state of Oklahoma had run short of money and, as a result, had left its capitol in an embarrassingly unfinished condition—that is, without a dome. Eighty-five years later, in June 2001, a celebration was being held in Oklahoma City for a newly built dome that was to be hoisted atop the capitol. Both Phillips and Conoco were financial contributors to this historic rectification, and the two CEOs, Dunham and Mulva, both in town for the event, ran into each other in the lobby of the Waterford Hotel.

“We need to talk again,” said Mulva.

Months of negotiations followed. In November 2001, the two companies announced their merger, creating ConocoPhillips, the third-largest oil company in the United States with, in fact, the largest downstream system in the nation. Dunham became chairman. Mulva, who was now the CEO of the combined company, was very clear as to the purpose of this merger: “We’re going to do this so we can compete against the biggest oil companies.”16



STANDING ASIDE: SHELL

One company was notably absent from the fray: Royal Dutch Shell, which had been, prior to the mergers, the largest oil company of all. There were several reasons. An internal analysis had concluded that the long-term oil price would be determined by the cost of new non-OPEC oil, which it pegged at $14 a barrel; and so it used a $14 oil price to screen investments. It had also concluded that size mattered, but only up to a certain threshold. But there was a still more important reason—the structure of the company itself.

When Mark Moody-Stuart would introduce himself at conferences, he would say, “I’m the chairman of Shell. I’m also the closest thing you’ll ever see to a CEO of Shell.” That was the problem. Shell had a unique structure. Although it operated as one company, it was actually owned by two separate companies with two separate boards—Royal Dutch and Shell Transport and Trading. It had no CEO; it was run by committee. This was the compromise reached to carry out a much earlier merger, in 1907, and then modified in the late 1950s. This “dual structure” had worked well for many decades, but had become increasingly inefficient. The dual ownership also made it “very difficult,” as Moody-Stuart put it, to do a stock-based merger with another large company. In fact, it had made such a merger virtually impossible. During the merger years, Moody-Stuart had tried to push through an internal restructuring, but the reaction from many of the directors was, as he said, “quite stormy.”17 Nothing happened. After all the mergers were done, Shell was no longer the largest oil company.



What had unfolded between 1998 and 2002 was the largest and most significant remaking of the structure of the international oil industry since 1911. All the merged companies still had to go through the tumult and stress of integration, which could take years. They all came out not only bigger but also more efficient, more thoroughly globalized, and with the capacity to take on more projects—projects that were larger and more complex.

Looking back a decade later on the consolidation, on this earthquake in the industry structure, Chevron CEO David O’Reilly observed, “A lot of it has played out as was expected. The part that hasn’t quite played out relates to the national oil companies. Are these larger companies competitive with the national oil companies?”18

When a minor corner of the world economy—the overleveraged Bangkok commercial real estate market—began to convulse, and the overvalued Thai baht began to plummet under speculative attack, no one expected the consequences to cascade into an Asian, and then a wider global, financial crisis. Certainly none of the managements of the world’s major oil companies would ever have expected that the distress of this rather obscure Southeast Asian currency would trigger a collapse in the price of oil and the massive restructuring of their own industry. Yet more was to come. For the consequences would also transform national economies and countries, including one of the world’s most important oil producers.


5

THE PETRO-STATE

For oil-importing countries, the price collapse was a boon to consumers. Low prices were like tax cuts. Paying less for gasoline and home-heating oil meant that consumers had extra money in their pockets, which was a stimulus to economic growth. Moreover, low oil prices were an antidote to inflation, allowing these countries to grow faster with lower interest rates.



CRISIS FOR THE EXPORTERS

What was a boon for the consumers was a disaster for the oil producers. For most of them, oil and gas exports were the major source of government revenues, and the petroleum sector was responsible for 50 or 70 or 90 percent of their economies. Thus, they experienced sudden large drops in GDP. With that came deficits, budget cuts, considerable social turmoil, and, in some cases, dramatic political change.

The most dramatic change of all would be in Venezuela. Because of the scale of its resources, Venezuela could be described as the only OPEC “Persian Gulf country” not actually in the Persian Gulf. In 1997 it was actually producing more petroleum than either Kuwait or the United Arab Emirates, and almost as much as Iran. Its position on the Caribbean, close to the United States, and its role as a Western Hemisphere producer had made it a bulwark of U.S. energy security going back to World War II. But Venezuela had also become the very embodiment of what is called a petro-state.

The term “petro-state” is often used in an abstract way, applying to nations that differ widely in everything—political systems, social organization, economy, culture, religion, population—except for one thing: they all export oil and natural gas. Yet certain shared features do make the petro-state a useful lens. The common challenge for these exporters is to ensure that the opportunities for longer-term economic development are not lost to economic distortion and the ensuing political and social pathologies. That means having the right institutions in place—which is very challenging.

Venezuela’s national saga illuminates the difficulties.

“The Venezuelan economy since 1920 can be summed up in a word: oil,” the economist Moises Naim has written. Prior to that, Venezuela had been an impoverished, underpopulated, agricultural nation—a “cocoa-state,” then a “coffee-state” and a “sugar-state”—highly dependent on those commodities for its national income, such as it was. Local caudillos ran their little fiefdoms as if they were their own countries. Of the 184 members of the legislature in the mid-1890s, at least 112 claimed the rank of general. Afflicted by innumerable military coups, Venezuela was ruled by a series of dictators, such as General Cipriano Castro, who, after taking power in 1899, proclaimed that he was “the man raised by God to fulfill the dreams of Bolivar” and reunite Venezuela, Colombia, and Ecuador as a single country. He was soon pushed aside by another general, Juan Vicente Gómez, who ruled the country as his “personal hacienda” from 1908 until his death in 1935.1

The decisive event for Venezuela’s fortunes came in 1922. The giant Barroso well in the Maracaibo basin blew out with an uncontrolled flow of 100,000 barrels a day. (It was discovered by the same engineer, George Reynolds, who in 1908 brought in the first oil well in Iran.) With the Barroso gusher, Venezuela’s oil age had begun. Thereafter, increasing wealth poured into the country as more and more oil flowed out of the ground.

Yet why did Juan Pablo Pérez Alfonso, the influential energy minister after the restoration of democracy in 1958, and one of the founders of OPEC, decry petroleum in his retirement years as “the excrement of the devil”? It was because he saw the impact of the influx of revenues on the state, the economy, and society, and on the psychology and motivations of the people. The oil wealth could be wasted; it could distort the nation’s life. In his view, Venezuela was already becoming a petro-state, a victim of the alluring and malevolent “resource curse.”2



THE “REVERSED MIDAS TOUCH”

In the 1980s and 1990s, oil could generate more than 70 percent of Venezuela’s central government revenues. In a petro-state, the competition for these revenues and the struggle over their distribution become the central drama of the nation’s economy, engendering patronage and clientelism and what is called “rent-seeking behavior.” That means that the most important “business” in the country (aside from oil production itself) is getting some of the “rents” from oil—that is, some share of the government’s revenues. Entrepreneurship, innovation, hard work, and the development of a competitively oriented growth economy—all these are casualties of the system. The economy becomes inflexible, losing its ability to adapt and change. Instead, as the edifice of the state-controlled economy grows, so do subsidies, controls, regulations, bureaucracy, grand projects, micromanagement—and corruption. Indeed, the vast amounts of revenues connected with oil and gas create a very rich brew for corruption and rent seeking.

A group of Venezuelan academics summed up the problem this way: “By the middle of the twentieth century, there was already a deeply rooted conviction that Venezuela was rich because of oil, because of that natural gift that does not depend on productivity or the enterprising spirit of the Venezuelan people.” They added: “Political activity revolved around the struggle to distribute the wealth, rather than the creation of a sustainable source of wealth that would depend upon the commercial initiatives and the productivity of the majority of the Venezuelan people.”3

The petro-state and its attendant resource curse have two further characteristics. One is called the Dutch disease, after an ailment that the Netherlands contracted in the 1960s, when it was becoming a major natural gas exporter. As the new gas wealth flowed into the country, the rest of the Dutch economy suffered. The national currency became overvalued, and exports, now relatively more expensive, declined. Domestic businesses became less competitive in the face of a rising tide of cheaper imports and increasingly embedded inflation. Jobs were lost, and businesses couldn’t survive.

A partial cure for the disease is to segregate some of these earnings. The sovereign wealth funds that are now such important features of the global economy were invented, in part, as preventative medicine—to absorb sudden, large flows of revenue, prevent them from flooding into the economy, and thus insulate the country from the Dutch disease.

The second, even more debilitating ailment of the petro-state is a seemingly incurable fiscal rigidity, which leads to more and more government spending—what has been called “the reversed Midas touch.” It is the result of the variability of government revenues, owing to the volatility of oil prices. When prices soar, governments are forced by society’s rapidly rising expectations to increase their spending as fast as they can—more subsidies to hand out, more programs to launch, more big new projects to promote. And while oil can generate a great deal of revenue, it is a capital-intensive industry, which means it creates relatively few jobs—adding further to the pressure on governments to spend on projects and welfare and entitlements.

But when world oil prices go down and the nation’s revenues fall, governments dare not cut back on spending. Budgets have been funded, programs have been launched, contracts have been let, institutions are in place, jobs have been created, people have been hired. Governments are locked into ever-increasing spending; otherwise they face political backlash and social explosions. They are also locked into providing very cheap oil and natural gas to their citizens as an entitlement for living in an energy-exporting country. (In 2008 gasoline in Venezuela went for about eight cents a gallon.) This leads to wasteful and inefficient use of energy and reduces the supplies available for export. A government that resists the pressures to spend—and to increase spending—puts its very survival at risk.

There are easier ways than cutting spending to alleviate the “reversed Midas touch,” but they work well only in the short term. One is printing money, which leads to high inflation. Another is international borrowing, which keeps the money flowing. But that debt needs to be serviced and repaid, and as the debt balloons, so do the interest payments, leading to potential debt crises.

In the petro-state, no constituency is in favor of adjusting spending downward to the lower levels of income—except for a few economists who understandably become very unpopular. On the contrary, across society most hold the conviction that oil can solve all problems, that the tide of oil money will rise forever, that the spigot from the finance ministry should be kept wide open, and that the government’s job is to spend the oil revenues as fast as possible even when more and more of those revenues have become a mirage.

As Ngozi Okonjo-Iweala, former finance minister and foreign minister of Nigeria, summed it up: “If you depend on oil and gas for 80 percent of government revenues, over 90 percent of exports are one commodity, oil, if that is what drives the growth of your economy, if your economy moves up and down with the price of oil, if you have volatility of expenditures and of GDP, then you’re a petro-state. You get corruption, inflation, Dutch disease, you name it.”4

While these are the general characteristics that define a petro-state, there are wide variations. The dependence on oil and gas of a small Persian Gulf country is obvious, but its population is also small, which reduces pressures. And it can insulate itself from volatile oil prices through the diversified portfolio of its sovereign wealth fund. A large country like Nigeria that depends heavily on oil and natural gas for government revenues and for its GDP has much less flexibility. Spending is very difficult to rein in.

There is also a matter of degree. With 139 million people and a highly developed educational system, Russia possesses a large, diversified industrial economy. Yet it does depend upon oil and natural gas for 70 percent of its export revenues, almost 50 percent of government revenues, and 25 percent of GDP—all of which means that the overall performance of its economy is inordinately tied to what happens with the price of oil and gas. And while Russia is much more than a petro-state, it has some of the characteristics of a petro-state—from which it can benefit and with which it must contend—characteristics that generate a constant debate about how to diversify the economy away from oil and gas.



“WE COULDN’T LOSE TIME”

But it is Venezuela that is identified as much as any nation with the very idea of the petro-state. And it was Carlos Andrés Pérez who embodied it—at least the first time around. His first term as president of Venezuela came during the height of the oil boom in the 1970s, when revenue far greater than anyone had ever contemplated was flowing into the national treasury. As a result of the quadrupling of the oil price in 1973–74, he had gained, on an annualized basis, four times as much money to spend as his immediate predecessor. And he was determined to spend it. “We are going to change the world!” he would say to his cabinet. Venezuela’s human capital made the ambitions more credible. Even before the price increases, the government was taxing the oil companies as much as 90 percent, and as part of the policy of “sowing the oil,” a good deal of money had been spent on education. As a result, Venezuela had an educated and growing middle class.

As much as anyone, Pérez was the architect of what became the modern Venezuelan petro-state, “the kingdom of magical liquid wealth.” Some called it “Saudi Venezuela.” Pérez proclaimed his vision of La Gran Venezuela, an increasingly industrialized, self-sufficient nation that would march double-time, fueled by oil, to catch up with the developed countries. Oil had “given us,” he said, the opportunity to “pull Venezuela out of her underdevelopment . . . We couldn’t lose time.”

In 1976 Pérez engineered the government takeover of the oil industry, in accord with the great wave of resource nationalism that was sweeping the developing world in that decade. But Venezuela carried out its nationalization in a careful and pragmatic way. Considerable talent had been built up throughout the industry during the years that the international majors ran the sector. Prior to nationalization, 95 percent of the jobs in the industry, right up to the top management, were held by Venezuelans. So nationalization would be a change of ownership but not of personnel. The new state-owned company, Petróleos de Venezuela, S.A. (PDVSA), was generally run along professional lines. It was the holding company, overseeing a series of cohesive operating subsidiaries.5



“IT IS A TRAP”

When Pérez left the presidency in 1979, the money was still flowing. But in the 1980s, the oil price plummeted and so did the nation’s revenues. Yet the edifice of the new petro-state was locked in place and indeed had expanded. Pérez was out of office during the 1980s, and the ills of the petro-state now became all too evident to him. As he traveled the world, he looked at different models for economic development and the struggle for reforms, and reflected on the costs and inefficiencies and defects of the overweening, oil-fed state. “An [oil] price spike is bad for everyone but worst for developing countries that have oil,” he had concluded. “It is a trap.”

By the end of the 1980s, Venezuela was the very paradigm of the petro-state. It was in deep crisis. Inflation and unemployment were rising rapidly, as was the share of the population below the poverty line. The widening income gap was evident in the massive migration from the countryside to the cities and in the ever-expanding slums and shantytowns that climbed the hills surrounding the capital city of Caracas. Meanwhile, a substantial part of Venezuela’s current revenues was being diverted to meet interest payments due to international lenders.

All these pressures were made worse by one other factor—Venezuela’s population, which had almost doubled over two decades. Such an increase would have required heroic economic growth under any circumstances just to keep per capita incomes constant. (Although sometimes overlooked, the growth in population was an indicator of social improvement—of better health and lower infant mortality.) To prevent explosive social protest, the government ran an ever more complex system of price controls that made the economy even more rigid. The price of almost everything was set by the government, right down to ice, funerals, and a cup of coffee in a coffee shop.6

At the end of the 1980s, Pérez won a return term as president. By the time he moved back into Miraflores, the presidential palace, in 1989, it was evident how severe the slippery “trap” of oil had become. Despite all the oil money, the economy was in terrible shape and getting worse. Per capita incomes were back to where they had been in 1973. In his inaugural address, Pérez declared that he would administer the nation’s wealth as though he were “administrating scarcity.” Determined to reverse course, he immediately launched a program of reform, which included reducing controls on the economy, cutting back on spending, and strengthening the social safety net for the poor. After a very turbulent first year, marked by major riots in Caracas that left hundreds dead, the economy started to respond to the reforms and began to grow at high rates.

But undoing the petro-state was very difficult. The traditional political parties, interest groups, and those who benefited from the special distribution of rents united to oppose him and obstruct his program at every turn. Even his own party turned on him. The party activists were outraged that he had appointed technocrats to economic ministries, denying them access to the favors and rents to which they had become accustomed.

But those were not Pérez’s only opponents.



THE COUP

On the night of February 4, 1992, Pérez, just returned from a speech in Switzerland, was falling asleep in the presidential residence when he was awakened by a phone call. A coup was in progress. He raced to Miraflores, only to find it under attack. A group of ambitious young military officers had brought a long-planned conspiracy to a head and launched a coup against the state. The assault on the palace was coordinated with attacks elsewhere in Caracas and in other major cities.

A number of soldiers were killed in the bloody assault on the presidential palace. Pérez would likely have been killed too—he was certainly the prime target—had he not been spirited out of the building through a back door and hidden under a coat in the backseat of an unmarked car.

While the conspirators elsewhere in the nation achieved their objectives, those in Caracas were not able to capture the presidential palace. They also failed in another of their most decisive objectives: to seize the broadcasting companies in order to announce their “victory.” When a group of the rebels arrived at what they thought was one of the television stations, they discovered they had the wrong address; the station had moved three years earlier. Another group went to the correct address of a different station. But the station manager succeeded in persuading them that their videotape was in the wrong format and that it would take some time to convert it for broadcast—long enough, as it turned out, for the station to be recaptured by loyal forces. Before the night was out, it was evident that the coup had failed, at least in Caracas.

The next day, the leader of the Caracas part of the coup, the thirty-eight-year-old Lieutenant Colonel Hugo Chávez, now in custody, was put on national television, “impeccably dressed in uniform,” in order to deliver a two-minute statement urging the rebels in other cities, who were still holding their targets, to surrender. The message was heeded. But Chávez’s two minutes on the airwaves did something more: they transformed him from a failed conspirator into an instant celebrity, a charismatic caudillo, very different from the maneuvering politicians of the traditional parties that the cynical public was accustomed to seeing. “Unfortunately, for now, the objectives we sought were not achieved in the capital city,” Chávez calmly told the other rebels—and the nation. “We will have new situations. The country definitely has to embark on the road to a better destiny.” The words “for now” reverberated around the country.

At that particular moment, however, Chávez’s own road was leading to a prison cell.7



HUGO CHÁVEZ

The son of schoolteachers, Hugo Chávez Frías had grown up in the sparsely populated savannah region of Venezuela. As a youth, he had proved himself a formidable baseball player, with dreams of playing in the American major leagues. He was also a budding artist and caricaturist. But those were not his only interests. Two of his best friends in the city of Barinas were named Vladimir, in honor of Lenin, and Federico, in honor of Friedrich Engels, Karl Marx’s coauthor. During his teenage years, Chávez spent hours in the library of their father, a local communist, discussing Marx, South America’s “Liberator” Simón Bolívar, revolution, and socialism. All this had a lasting impact, as evidenced by the book he carried with him on the day he entered the military academy as a cadet, The Diary of Che Guevara. Already, as a new cadet, he was writing in his diary of his ambition that “one day I will be the one to bear responsibility of an entire Nation, the nation of the great Bolivar.” At the academy, he imbibed the careers of other ambitious young officers from modest circumstances—Qaddafi in Libya, Juan Velasco Alvarado in Peru—who had gone on to seize power.

It did not take Chávez long after graduating from the military academy to connect with other like-minded conspirators. “As far as anyone knows,” his biographers have written, “Hugo Chávez began to lead a double life when he was around twenty-three.” By day, he was a hardworking, dutiful, and obedient officer. At night, he was meeting secretly with other young officers, as well as extreme left-wing activists, plotting his way to power. One day in the early 1980s, Chávez was out jogging with a group of other junior officers when they broached the idea that some of them, including Chávez, had been harboring for some time—that they secretly launch a revolutionary movement. And right there, in front of a tree much favored for its shade by Simón Bolívar, they took an oath to that effect. From that moment onward, Chávez saw himself as the future leader of Venezuela. He formed a clandestine officers’ group, the Bolivarian Revolutionary Army, which built its network within the military.8

It was in 1992, a decade or so after that jog, that Chávez and his coconspirators launched their failed coup. In the two years that followed his arrest, Chávez spent his time in prison reading, writing, debating, imagining his victory, receiving a continuing stream of visitors who would be important to his cause—and basking in his new glory as a national celebrity.

Later in 1992, a second coup attempt, this one by more senior officers, also failed. But the very fact of it demonstrated how unpopular Carlos Andrés Pérez had become. Pérez had alienated the public with his policies, especially the cutbacks in the spending that was the hallmark of the petro-state. He also continued to infuriate his opponents with his economic reforms and decentralization of political power. They got their revenge: they impeached him for corruption. The specific charge: he had provided $17 million to the new president of Nicaragua, Violeta Chamorro, who, having taken over from the Marxist Sandinistas and fearing for her life, had asked for help in setting up a presidential security service to prevent her assassination. Here, with Pérez’s removal from office, was proof anew of the old maxim that no good deed goes unpunished.

Pérez’s opponents celebrated their victory in deposing him. But it would eventually prove a costly victory for these defenders of the old order of the petro-state. For the impeachment would further discredit the political system, ultimately leading to their own ruin.

On Palm Sunday, 1994, Rafael Caldera, Pérez’s successor and longtime rival, freed Chávez and the other plotters and provided an amnesty. It is possible that Caldera simply thought that these were young military officers led astray. There is also the possibility that Caldera acted out of a degree of personal sentiment. Hugo Chávez’s father had been a leader of Caldera’s old party in the state of Barinas and was the person who would have received him when he campaigned there. Curiously, Caldera did not add to the amnesty what might have been the normal restriction—permanently banning Chávez and the others from political life. It was a significant omission. But Caldera certainly never imagined that any of the plotters could ever navigate their way through Venezuelan electoral politics.

Now out of prison, the former conspirator, guided by two seasoned politicians of the left, was determined to win political power not with bullets but at the ballot box. This time, instead of guns and conspiracy, Chávez’s weapons would be his new popularity, organization, unstoppable personal drive, and sheer charisma. He put himself at the head of what he called the Bolivarian political movement and, with endless energy, crisscrossed the country denouncing corruption, inequality, and social exclusion. He also traveled abroad. In Argentina, he spent time with a sociologist who propounded a theory of the mystical union of the “masses and the charismatic leader”—and who also denied the Holocaust.9

But his most important trip was to Cuba, where he forged a deep bond with one of his heroes and another baseball fanatic, Fidel Castro. Castro would be his mentor, and indeed embrace him as his political son. For his part, Chávez would come to see himself as Castro’s legatee in the Hemisphere, but different in one crucial aspect—a Castro who would be bolstered with tens of billions of dollars of oil revenues.



LA APERTURA

Meanwhile, things had gotten worse for Venezuela’s economy, leading to a severe banking crisis. By the mid-1990s, it was clear that Venezuela urgently needed to increase its oil revenues to cope with the country’s problems. Since world petroleum prices were not going up, the only way to raise additional revenues was to increase the number of barrels that Venezuela produced. The new president of PDVSA, a petroleum engineer named Luis Giusti, embarked on a campaign to rapidly step up investment and output.

The most significant initiative, and one with global impact, was la apertura—“the opening” (really, a reopening)—inviting international oil companies to return to Venezuela to invest in partnership with PDVSA and to produce the more expensive and technologically challenging reserves. This was not a winding back of nationalization; rather, it reflected the trend toward greater openness in the new era of globalization. It was also a pragmatic effort to mobilize very large-scale investment that the state could not shoulder by itself.

La apertura was highly controversial. To some it was anathema, heresy. After all, the traditional route that had been followed—nationalization, state control, expulsion of the “foreigner”—was enormously popular. But to Giusti, this was all ideology. What mattered was not appearances and symbolism, but revenues and results. The state did not have the resources to fund the full range of required investment, and social programs were a huge competing call on the government’s money. Moreover, despite its competence, PDVSA did not have the advanced technology that was needed. La apertura would bring in international capital and technology. Output would increase from older fields. And, at last, Venezuela would be able to use technology and large-scale investment to liberate the huge reserves of very heavy oil in what is called the Faja, the Orinoco region, that up to then could not be economically produced. “The Orinoco was dormant,” said Giusti. “We had known for one hundred years that the oil was there, but nothing had been done.”

With la apertura, Venezuela might be able to double its production capacity, to five million barrels a day, over six or seven years, and the state would capture the lion’s share of the additional revenues through taxation and partnership. But none of this could be accomplished without foreign investment. As Giusti summed it up, “There was only so much money, and we had so much to do.”10



PAINTING THE PICTURE

The hardest part was the politics, starting with President Rafael Caldera. Giusti had to convince the president, who knew the nationalistic politics all too well. Giusti had the detailed plan for la apertura printed in two handsomely bound volumes, with blue covers and gold letters. At a meeting with the president, he saw that Caldera had put paper clips on many, many pages. This sent Giusti into something of a panic. He knew that Caldera was a very skilled lawyer and that he would lose if he got into a detailed legal discussion with the president.

How was he going to persuade the president to reverse one of the most fundamental and popular principles of national politics and public opinion? Somehow he had to get to the essence; he had to paint the whole picture for Caldera. Then he had an idea. Why not actually paint a picture? He knew a brilliant geologist who was also a talented landscape painter, Tito Boesi. On a Thursday, Giusti called Boesi and said that he wanted the geologist to paint a large canvas mural that would depict every stage of the country’s oil development, from the seepages that had enticed the original explorers through the application of successive generations of technology, up to what might be imagined for the future of the Orinoco. The purpose would be to demonstrate vividly how increasingly complicated and expensive the further development of Venezuela’s petroleum patrimony would be.

Giusti told Boesi that he needed the painting right away.

“Are you crazy?” said Boesi.

“I need it,” insisted Giusti. “I know you’re a very good artist, Tito. But it doesn’t have to be a masterpiece.”

Summoned to the president’s house the following Saturday, Giusti appeared with Boesi’s canvas rolled up under his arm. When called upon, he asked the president if he could show him something. To the perplexed looks of many in the room, including the president, he rolled out the canvas on the long conference table and explained its story.

When Giusti finished, he could see that President Caldera was angry. At first he thought it was directed at him, but then realized that Caldera was angry with his own entourage, which, the president had concluded, had not properly briefed him on the scale of the challenge facing the industry on which Venezuela depended.

Several days later, the president approved la apertura. Over the next few years, as the contracts were negotiated and implemented, la apertura would bring tens of billions of dollars of international investment into the country, jump-starting the development of the vast oil sands of the Faja and “reactivating” older oil fields, which needed injections of new technology to reverse their decline.11



THE OIL WAR

There was a second very important aspect to oil policy as well. Venezuela would produce at its maximum rate, disregarding its OPEC output quota. Venezuela argued that the quota had been set a decade earlier and did not reflect changes in its population and social needs. Other OPEC countries, wanting to maximize their own output, vehemently disagreed. Between 1992 and 1998, Venezuela increased its oil production by an astonishing 40 percent. That engendered an acrimonious battle within OPEC. Observers began to write about an “oil war” for market share between the two countries that had taken the lead in founding OPEC—Venezuela, now ignoring quotas; and Saudi Arabia, insisting that they be observed. That was the battle that culminated in the November 1997 Jakarta meeting and was resolved with the agreement that all exporters could produce flat out—which, by then, they were all more or less already doing.12

But by then the Asian financial crisis had already begun to trigger an oil price collapse, ravaging the budgets of the oil-exporting countries. At this point, Venezuela recognized that it could no longer afford its market-share strategy. In March 1998 Venezuela, Saudi Arabia, and non-OPEC Mexico met in Riyadh and worked out a set of production cutbacks for exporters, OPEC and non-OPEC alike. Most of the other exporters went along, out of self-interest and sheer panic. But it was not enough to offset the drop in demand from the Asian crisis. Oil prices, after a brief recovery, fell to $10 and then further, to something that, for the exporters, was intolerable—single digits.



THE ELECTION: NOT EVEN “THE REMOTEST CHANCE”

By late 1998 Venezuela was deep into an economic crisis, poverty was rising rapidly, and social tensions were high—and mounting. “Economically, Venezuela is reeling, with oil prices under $10 a barrel,” reported the New York Times in December 1998. It was just at this moment that Venezuela was going to the polls to elect a new president. The two dominant parties, Acción Democrática and Copei, were thoroughly discredited. They were also depleted; they seemed to have run out of ideas, energy, and conviction. For a time, the presidential frontrunner was a mayor best known for having once been Miss Universe, but she faded as the campaign progressed.13

Chávez, unrelenting in his attacks on the political system, had risen from a few percentage points to the top of the polls. As was customary during a presidential election campaign, PDVSA provided briefings to the candidates. By this point, Giusti himself had become a controversial figure because of his championing of la apertura and wide-open production, and because he was seen by some as pursuing his own political agenda. When Chávez arrived at PDVSA’s headquarters, he told Luis Giusti that he wanted his briefing to be one-on-one, with each side having just one aide present. For ninety minutes, Giusti took him through the industry’s situation. At the end, Chávez thanked him for an excellent presentation and then, just before they went through the door, grabbed him by the arm and warmly added that he wanted to express his appreciation and personal affection. Chávez then went downstairs to the waiting press and announced that as soon as he was elected president, he was going to fire Giusti.

In the December 1998 presidential election, with some 35 percent of the electorate abstaining, the deep economic and social distress that came with the oil price collapse gave Hugo Chávez, who had been released from prison only four years earlier, a 56 percent victory. In his victory speech that night, Chávez denounced Luis Giusti as the devil who had sold the soul of Venezuela to the imperialists.

The next month, standing next to Chávez at the inauguration, was the outgoing president, Rafael Caldera, who had amnestied the lieutenant colonel in 1994. Caldera looked nothing if not stunned. “Nobody thought that Mr. Chávez had even the remotest chance of becoming president of the republic,” he later said. As for Luis Giusti, he made a point of resigning as president of PDVSA before Chávez could fire him.14



CHÁVEZ IN POWER

But how would the forty-four-year-old lieutenant colonel govern? Was he a democrat or an authoritarian? His initial comments were not clear: “If you try to assess me by traditional canons of analysis, you’ll never emerge from the confusion,” he said. “If you are attempting to determine whether Chávez is of the left, right or center, if he’s a socialist, Communist or capitalist, well, I am none of those, but I have a bit of all of those.” At another time he added, “I absolutely refuse, and will refuse to my grave, to let myself be labeled or boxed in. I cannot accept the notion that politics or ideology are geometric. To me, right and left are relative terms. I am inclusive, and my thinking has a little bit of everything.”

Whatever the ideology, Chávez moved swiftly to consolidate all power in his hands, keeping the formal institutions of the state—“worm-eaten,” as he called them—but depriving them of any independent role. He quickly pushed through a new constitution, which eliminated the upper house of the congress. He turned the remaining chamber into a rubber stamp. He increased the number of Supreme Court judges from twenty to thirty-two, packing it with revolucionistas. He took direct control of the National Electoral Council, ensuring that his personal political machine would count the ballots in future elections. He removed any congressional oversight of the army and then proceeded to set up a second, parallel military force of urban reservists. And he rechristened Venezuela as the Bolivarian Republic.

He made a triumphant visit to Cuba, where he declared, “Venezuela is traveling toward the same sea as the Cuban people, a sea of happiness and real social justice and peace.” He also played ball with Fidel Castro—in this case, baseball. Although Chávez did the pitching for the Venezuelan team, the Cubans won, 5-4. The Cubans won something else as well—a Venezuelan subsidy. With the end of Soviet communism, Russia no longer had any ideological bonds with Cuba and had stopped providing cheap oil. Chávez stepped in to become Castro’s oil banker, delivering petroleum at a steep discount.15

In turn, Cuba provided advisers of many different kinds—health workers, teachers, gymnastic instructors, and a wide variety of security personnel operating under various guises. For Cuba, this was a return to Venezuela, for it had provided aid to guerrillas during the “violent years” of the 1960s. Castro had long coveted Venezuela’s oil wealth, and he had repeatedly tried to open a beachhead. Indeed, one attempt to insert Cuban military forces into Venezuela in 1967 had led to the death of Castro’s personal chief of security. This time, however, Cuba was there to bolster the government—Chávez’s government. Chávez also adopted the Cuban system of local neighborhood control. And in case it was still not clear where he stood, Chávez clarified matters. “There is only revolution and counterrevolution,” he declared, “and we are going to annihilate the counterrevolution.” When Roman Catholic bishops urged him to be less confrontational, he dismissed them as “devils in vestments.”16

Castro was a role model in many ways. As the Cuban president specialized in speeches that went five or six hours, Chávez adopted a variant with his Sunday-afternoon television broadcast, Alo Presidente. Over the course of four hours or more, in a weekly demonstration of his manic energy, he would joke, sing revolutionary songs, tell anecdotes from his boyhood, and talk about baseball. He would also denounce his opponents as the corruptos and position himself as the leader of the revolutionary vanguard opposing the United States, or what he called the “North American empire . . . the biggest menace on our planet.” At one and the same time, he wrapped himself in the cloak of the nineteenth-century liberator Simón Bolívar and propounded his new theory of “socialism for the twenty-first century.”

And then there was oil, the soul of the Venezuelan state. The economic engine was PDVSA, and Chávez quickly asserted his control. He was much influenced by a German-born energy economist, Bernard Mommer, who made the case for a highly nationalistic oil policy and argued that Venezuela had fallen prey to “liberal policies” that urgently needed to be reversed. Chávez attacked PDVSA as “a state within a state” and then proceeded to subordinate it to his state, politicizing what had been a professionally run company. PDVSA’s treasury became the cash box of the state, and Chávez moved financial control of the company into the central government, giving him direct control over its vast revenues. There was no accountability or transparency. He could use the money as he wanted, shifting investment from the oil industry to whatever purposes he thought best, whether social spending and subsidies for favored groups at home or pursuit of his political objectives within the country and abroad. More than ever before, Venezuela was truly a petro-state.17



THE RECOVERY OF OIL

Chávez made a decisive policy change that would reverberate throughout the world. Venezuela would no longer pursue a strategy of increasing revenues by increasing output. Indeed, it now became the strongest advocate in OPEC for cutting back on production and observing quotas.

As prices started to recover, Chávez left no doubt of his explanation: “The increase in the oil price has not been the result of a war or the full moon,” he said. “No. It is the result of an agreed strategy, a change of 180 degrees in the policy of previous governments and of Petróleos de Venezuela . . . Now the world knows that there is a serious government in Venezuela.”18

Chávez had moved OPEC to the center of Venezuelan oil policy, but in fact Venezuela had already started to cut back on production before Chávez was elected, beginning with the Riyadh agreement in March 1998. Venezuela, moreover, was only one element in a larger tableau. Faced with plummeting revenues, all the OPEC countries—and some non-OPEC countries—had gotten religion about quotas and restraint.

Moreover, the overall picture was certainly changing. While OPEC was reining in production, Asia started to recover. Demand started to snap back. And so did prices. This particular oil crisis—the crisis of the producers—was ending.

The exporters, who before had been dismally staring at $10 a barrel or less, were now talking more confidently about a $22-to-$28 “price band” as their target. But by the autumn of 2000, spurred by economic recovery in Asia and OPEC’s new policy, the price of oil had surged over the band, above $30 a barrel, a threefold-plus increase from where it had been just two years previously. The big increase in demand—a surge of 2.5 million barrels per day between 1998 and 2000—was having a decided impact on the oil market.

The “soaring oil prices,” as they were described in the press, were setting off alarms in consuming countries, which had rather quickly become accustomed to lower prices. Now they feared a “brewing energy crisis.” Such was the alarm that the rising prices—and the gasoline and home-heating-oil prices they drove—were becoming a contentious issue in the hotly contested 2000 U.S. presidential battle between George W. Bush and Al Gore. On September 22, 2000, two days after prices spiked to what seemed a shocking $37 a barrel and in the midst of the campaign, the Clinton administration released some oil from the Strategic Petroleum Reserve, aimed at blunting price increases in the weeks before the arrival of winter.19



By that point Hugo Chávez had already established himself as a force in world oil and in the Western Hemisphere. Yet without the oil price collapse of 1997–98, it is not at all clear that he would have had the chance, just seven years after his coup attempt had landed him in jail, to act on what he had written in his diary decades before, while a cadet in the military academy, and take “responsibility” for Venezuela. Now, like the dictator General Cipriano Castro a century earlier, he aimed for his Bolivarian project to extend beyond Venezuela’s borders, to the rest of Latin America. But unlike that general, he was seeking global reach as well. And the rising price of oil would give him the wherewithal to try.


6

AGGREGATE DISRUPTION

As the twenty-first century opened, except for the brief price spike, oil had faded away as a policy issue. Moreover, the resolution of the 1990–91 Gulf crisis appeared to have taken energy security off the table.

Instead attention was riveted on new things, and in particular on “new new things.” That meant the revolution in information technology and in how people communicated with one another in a world that was now continually interconnected twenty-four hours a day. And it meant, more than anything else, the Internet. Silicon Valley and cyberspace—those were the places to be. All this, along with the end of the Cold War and rapidly growing world trade, inaugurated a new self-confident era of globalization. “Distance” was disappearing, along with borders, as both finance and supply chains tied production and commerce together around the planet. It was an increasingly open world, freely communicating, freely trading, freely traveling—and, as it turned out, very definitely “visa-lite.” It was a world of rising living standards and ever-wider possibilities. It was an optimistic time.



THE DAY THAT CHANGED EVERYTHING

On September 11, 2001, two jets hijacked by Al Qaeda operatives slammed into the twin towers of the World Trade Center, and a third into the Pentagon. The fourth, aimed at the Capitol, was brought down by passengers in a cornfield in Pennsylvania. For the first time since the Japanese air raid on Pearl Harbor, December 7, 1941, which had taken the United States into World War II, America had been directly attacked, and with a greater loss of life than on that unsuspecting Sunday morning in Hawaii.

In retrospect, the warnings had been there in a series of attacks—first on the World Trade Center in 1993; then on the embassies in Kenya and Tanzania in 1998, where hundreds perished; and on the U.S. destroyer Cole in a port in Yemen in 2000—along with an attempt to blow up Los Angeles International Airport on New Year’s Eve 2000 that had been aborted by an alert guard at the Canadian border. And there were also all the pieces of intelligence that were never connected—ranging from the CIA and FBI databases that did not talk to each other, to the Arab students at flying schools in the United States who were interested only in learning how to take off, not how to land.

That morning transformed international relations. Security now became the central preoccupation. Borders and barriers went up. The world was no longer so open a place. In the autumn of 2001, in what became known as the “war on terror,” the United States and its allies counterattacked in Afghanistan, the base from which Al Qaeda operated. They pushed the Taliban, Al Qaeda’s ally, from power, and in just a matter of weeks achieved a decisive victory. Or so it seemed at the time.

Globalization suddenly looked different. The world might be much more interconnected, but new vulnerabilities arose out of the much-denser network of trade and communication lines on which this interconnected world relied. “Homeland security” went from being a title for think-tank reports to the name of a massive new U.S. cabinet agency. September 11 revealed a dark underside to globalization. Empowered with the tools of globalization, shadowy groups with militant ideologies could take advantage of the openness—easy travel, easy movement, cheap cellular communication, and easy Internet access—to disrupt globalization and seek to undermine the more open world.

Petroleum had, since the beginning of the twentieth century, been entwined with security and the power and position of nations. But 9/11 led to a new emphasis on oil’s risks, including the fact that the world’s biggest oil region, the Middle East, was also the region from which Al Qaeda had emerged. One of Al Qaeda’s original grievances, in addition to the impact of modernity on the region, was the presence in Saudi Arabia of U.S. troops, which had remained after the 1991 Gulf War to help contain Saddam Hussein. The militant messages and sermons in some of the Mideast mosques were very similar to those of Al Qaeda, and recruits and money came from that region. Fifteen of the nineteen suicide hijackers on 9/11 were Saudi Arabian nationals.

The “special relationship” between the United States and Saudi Arabia went back to the meeting between President Franklin Roosevelt and King Ibn Saud on the Great Bitter Lake, in the Suez Canal, in February 1945. From Harry Truman onward, U.S. presidents had made the security of the Middle East, and in particular Saudi Arabia and its oil, a fundamental national interest. Jimmy Carter made the commitment much more explicit in response to the Christmas Eve 1979 Soviet invasion of Afghanistan, which was seen as a possible “stepping stone” for the Soviet Union to try to gain control over the Persian Gulf and “much of the world’s oil supplies.”

“An attempt by any outside force to gain control of the Persian Gulf region,” said the Carter Doctrine, “will be regarded as an assault on the vital interests of the United States, and such an assault will be repelled by any means necessary, including military force.” Saudi Arabia, in turn, had tied its long-term security to the United States. There were many other ties as well. During the late 1970s, the Saudi cabinet was said to have more members with American Ph.D.s than the U.S. cabinet.1

The Carter Doctrine was pointedly directed at an “outside force,” the Soviet Union. But what about “inside forces” within the Gulf region? Here, with the attack of September 11, was evidence that some part of the population in Arab countries was outright—indeed violently—hostile to the United States and the rest of the industrial world. No one knew the actual proportions. Yet in the aftermath of 9/11, some in Saudi Arabia initially denied that fifteen of the hijackers were even Saudi. This added to the tension between the United States and Saudi Arabia and strained the energy and security relationship. The rift did not fully end until May 2003, when Al Qaeda–linked operatives launched terrorist attacks in the Saudi capital of Riyadh, followed within a year by other attacks, including one on a police headquarters. Saudi Arabia recognized that it was a prime target and that Al Qaeda was its dangerous enemy.

From an energy perspective, the lasting impact of 9/11 in the United States was a renewed conviction that oil consumption and oil imports in particular were a security risk. At the time, Mideast oil represented about 23 percent of imports and 14 percent of total U.S. oil consumption. But it had become symbolic of “dependence” and the dangers thereof. Many Americans thought that all U.S. imports came from the Mideast. And thus the mantra of “energy independence,” which had been a fixture of American politics since the 1973 oil embargo, took on new urgency.

September 11 itself did not have much impact on oil prices. (In the months that immediately followed, prices actually fell below $20 a barrel and did not get back over $20 until March 2002.) Even into 2004, the widespread expectation was that market conditions would ensure that prices remained in that “moderate” range. Yet over four years, between 2004 and 2008, prices would shoot up, reaching a historic high of $147.27 a barrel, with far-reaching impact on the world economy. They would redistribute global economic and political power, shake people’s confidence, and raise anxiety about the future. The extraordinary increase both reemphasized the centrality of oil and, at the same time, gave new impetus to moving beyond oil.

As with most great developments in human affairs, there is no single explanation for the massive leap in prices. It was driven first by supply and demand, and by a huge but largely unanticipated change in the world economy. Disruptions and a return to resource nationalism were critical elements. But then more and more momentum was provided by forces and innovations coming out of the financial markets. The story of what happened to price is also a narrative about profound changes both in the oil industry and in the wider world.

September 11 disrupted security and international affairs and altered thinking about oil, dependence, and the uses that could be made of oil revenues. But 9/11 did not interrupt supply. In the autumn of 2002, more than a year after 9/11, there was little hint that supply problems would begin to take a toll on the flow of oil. Indeed, anything but. “Oil Prices Fall as Global Supplies Soar” headlined an industry trade publication. But that would very shortly change.2

A series of crises in three major exporting countries would spur supply losses, compounded by the forces of Mother Nature. None was large enough on its own to upset the balance of the oil market. Yet tallied together, they constituted a significant loss of supply—an “aggregate disruption” that would have a notable impact over the next half decade, reducing supplies that would otherwise have been available to a growing world economy.



“ALO PRESIDENTE”—VENEZUELA

Reelected president of Venezuela in 2000, Hugo Chávez moved to further consolidate power in his hands. As he did so, opposition became more vocal. Parents protested the Ministry of Education’s plans to revise history textbooks in a way that would demonize Venezuela’s first forty years of democracy—“Cubanizing” the textbooks, it was said. In the face of parental opposition, the government retreated, temporarily. The government also established local militias called Bolivarian circles, modeled on Cuba’s Committees for the Defense of the Revolution, in order, as Chávez announced, to create “a great human network” to defend the revolution. New controls on the media included a ruling that the press could be punished for spreading “false news” or “half truths.” But particularly alarming was a package of 49 laws that greatly extended state power and that was put into effect without approval by the National Assembly. At the same time, Chávez extended his control over Petróleos de Venezuela—PDVSA—the state oil company. The continuing politicization of PDVSA was eroding the effectiveness and professionalism for which the company had developed a worldwide reputation.

By this time, a broad coalition of opposition had emerged, encompassing both trade unions and business groups, as well as the Catholic Church. Segments of the senior military leadership were becoming wary of the way in which Chávez was taking power into his own hands and the way he was wielding it. On April 7, 2002, Chávez used his Sunday television talk show, Alo Presidente, to fire seven members of the board of PDVSA. He ridiculed each by name and then dismissed them one by one to the cheers of the studio audience.3

Four days later, on April 11, 2002, opposition to Chávez and popular discontent exploded into a mass march of upwards of a million people in Caracas. As the march approached Miraflores, the presidential palace, guards loyal to Chávez started shooting, killing and wounding some of those at the forefront of the crowd. Chávez went on television to denounce the marchers. But a split screen simultaneously showed the carnage in front of the presidential palace while Chávez orated, further inflaming the outrage.



“CALL FIDEL!”

As tension mounted, Chávez ordered the implementation of Plan Ávila, what has been described as “a highly repressive security operation.” Military units began to rebel against both the plan and the idea that soldiers would turn their guns on civilians. At 3:25 a.m. on April 12, 2002, the nation’s top military officer went on television. In light of the “appalling incidents that occurred yesterday in the nation’s capital,” he said, “the president of the republic has been asked to resign, and he has agreed to do so.” By this time Chávez had been taken into custody and was being hustled from one military base to another. At one point he managed to borrow a cell phone from a soldier and, reaching one of his daughters, asked her to “call Fidel . . . Tell him I haven’t resigned.” Over the next several hours, various resignation letters were presented to Chávez and negotiated over, but he never quite signed any of them.4

Although described as a coup, what ensued was neither expected nor planned, and the opposition scrambled to fill the sudden power vacuum. A prominent business figure emerged as head of a provisional civilian-military government. He proceeded to make what proved a fundamental mistake: dissolving the government without announcing that elections would be held soon, thus losing the mantle of constitutionalism—and alienating the military in particular. And there was still no resignation letter signed by Chávez.

Chávez had been moved to the military island of La Orchila, whence, it was thought, he was going to be flown out of the country, probably to exile in Cuba. But on the mainland, confusion and fissures started to appear among the opposition, suddenly thrust into power. The military began to waver and split. Finally, in the very early morning hours of April 14, Chávez apparently agreed to a final document embodying his resignation. However, a couple of hours earlier, a general who was one of the original members of Chávez’s group of conspirators had already dispatched helicopters carrying commandos to La Orchila. While the letter was being retyped, the helicopters touched down on the island and picked up Chávez. He was not going to Cuba after all. Instead, he headed back to the presidential palace in Caracas.5

Less than three days after his arrest, Hugo Chávez was once again in control of the country, and he set out to quickly tighten his grip. That included further extending his direct control over the management of PDVSA, the engine of the economy and by far the largest source of government revenues. The months that followed were turbulent, for Chávez showed no interest in reconciliation. The country was deeply divided, and the opposition was very restive.



THE GENERAL STRIKE

Later in 2002, with the normal channels of political opposition closed in what was increasingly becoming a one-party governmental system, the unions and business community joined together to call a general strike in order to try to force Chávez into a referendum on his governance.

Much of the country shut down. PDVSA just stopped working. Over the next few weeks, the country’s oil output plummeted from 3.1 million barrels a day to around 200,000 barrels a day—perhaps even less. Venezuela was forced to import gasoline on an emergency basis. The loss of almost three million barrels a day shifted the world market from surplus to shortage. Oil prices, which had been declining, started to rise sharply again and soon were higher than any prices seen since the Gulf crisis in 1990.

In Washington, the disruption ignited a sharp debate within the U.S. government as to whether to release oil stored in the U.S. Strategic Petroleum Reserve to compensate for the oil lost from one of America’s biggest suppliers. The Department of Energy recommended use of the SPR. But the final decision was not to do so. The oil in the strategic reserve needed to be retained, it was said, for the possibility of a much greater disruption that could occur somewhere else—in the Middle East.

Meanwhile in Caracas, Chávez would not budge: as the weeks went on, the general strike eroded; people drifted back to work, and after sixty-three days the strike ended altogether. By mid-February 2003, PDVSA was back up to about half its prestrike level. In the aftermath of the shutdown, Chávez was now even more intent on eliminating any political opposition to his march toward his “socialism for the twenty-first century.” He was determined to end whatever independence PDVSA still had left. About twenty thousand workers—almost half the workforce—were summarily fired and replaced with less-experienced workers; from then on, the company would be operated not as a state-owned company, but as an arm of the state. The vast amounts of money that the company generated would become inseparable from the state.

The crisis of production was over. But owing to the haphazard way in which production was shut down, and the inexperience of many new managers brought on after Chávez’s purge, Venezuela would not regain its prestrike levels of output, let alone approach what had been its ambitious expansion goals. Still, by mid-April 2003, enough oil was being produced and refined that Venezuela could once again start exporting petroleum to its customers. But by then supply was being disrupted elsewhere on the world market.



NIGERIA: “YOU’RE A PETRO-STATE”

Nigeria, the eighth-largest exporter in OPEC and one of the major sources of U.S. petroleum imports, certainly has the attributes of a petro-state. Oil and natural gas account for 40 percent of GDP.

As finance minister from 2003 to 2006, Ngozi Okonjo-Iweala sought to set the budget based on a lower oil price assumption, impose fiscal discipline, and build up the government’s financial reserves. All that made her highly unpopular—and a political liability. “The pressures were enormous, which is part of the reason I’m not there today,” she later recalled. “Politicians were not happy with me. I was quite controversial for maintaining discipline. I’m sure that on the day I resigned there were more than a few high-fives.”6



ETHNIC CONFLICT

But oil is only part of the picture. Nigeria is a dominant force in Africa. With 155 million people, it is the most populous country on the continent; one out of every seven Africans is Nigerian. But many of them do not think of themselves as Nigerian but rather define themselves by language, religion, and tribal group.

Nigeria is a country of 250 ethnic groups, split between an Islamic north and a Christian south, with further divisions between east and west in the southern part. It was defined as a unit by the British colonial administration, but is a nation tied together with weak institutions and a weak sense of national unity, and divided by strong religious and ethnic identities. Nigeria became independent in 1960, four years after the discovery of oil there. Its history ever since has been defined by violent conflict over the distribution of power and resources and over the state itself. In 1967 the southeastern part tried to secede and become a separate nation of Biafra. After three years of civil war, and the loss of more than three million lives, the north won, and the country stayed whole.

Nigeria has gone through five constitutions and seven military coups. The country’s experience demonstrates the Dutch disease in many ways. The once-vibrant agricultural-export sector has collapsed, and the country is a net importer of food. An effective and dedicated civil service, one of the legacies of colonial rule, was weakened, contributing to the poor governance. Oil revenues were stolen and squandered on a massive scale. The huge Ajaokuta steel complex is the poster child for revenues wasted. Built in the 1970s, it has yet to produce commercial steel. Between 1970 and 2000, Nigeria’s population more than doubled; over the same period, on a per capita basis, income actually declined.7

Through all this, the country’s oil industry has been caught up in the struggle among regions, ethnic groups, national and local politicians, and violent groups—militias, gangs, and cults—for power and primacy, for identity—and for the money. The Nigerian government takes over 80 percent of the sale price of a barrel, but there is a constant battle over how those earnings should be split between the federal government, the states, and local communities.

But that is only part of the battle. Violent clashes between Christians and Muslims, including massacres in which hundreds are slain, are a recurrent feature. So is the struggle over the application of Islamic sharia law in the north. Corruption is deeply embedded throughout the fabric of national life.

The epitome of state failure was the brutal dictatorship of General Sani Abacha, who seized power in 1993. In the five years prior to his sudden death, he proved himself a champion at corruption; it is thought that he amassed as much as $5 billion. Most notoriously, in 1995 he oversaw the brutal execution of Ken Saro-Wiwa, author and environmental campaigner for the Ogoni people, and eight other Ogoni activists. His death resounded for years after. Abacha himself died three years later. Over the next several years, Nigeria struggled to recover some of the stolen money. Abacha’s family stubbornly maintained that the money had been honestly gained, insisting that Abacha, in addition to being Nigeria’s full-time dictator, had also been a very astute investor.8

In 1999, in the first election in sixteen years, Olusegun Obasanjo, a former general, was elected president. Obasanjo had earned a unique position in Nigerian annals, for during a previous spell in power, he proved to be the only military ruler in Nigeria’s history to hand over power to a constitutionally elected civilian government. Prior to his return as an elected president, he served as chairman of the advisory board of Transparency International, a prominent NGO that focuses on combating corruption in developing countries. It was not an inappropriate preparation: when he returned to power as a civilian and as an elected president in 1999, corruption was one of the most intractable problems.



VIOLENCE IN THE DELTA

And nowhere was it more intractable than in the Niger Delta. The Delta is a vast, swampy region formed by the Niger River, Africa’s third-longest, as it flows into the Gulf of Guinea. The Delta is where most of Nigeria’s oil is produced, and where regional and local politicians have habitually siphoned off a great deal of wealth for their own bank accounts, which is why a governorship of one of the Delta states is a much-sought-after position: it is a ticket to wealth.

Officially, however, only 13 percent of total oil revenues accrue to the local states. The Delta’s decrepit infrastructure and endemic poverty, combined with the high population density, fueled hostility toward both the oil industry, which had no say over how the oil money was allocated between the federal and state governments, and the regional and national governments. There was also a legacy of environmental degradation from oil production of the 1960s and 1970s.

The Delta had been subject to recurrent outbreaks of violence. With an estimated forty ethnic groups in the area, there was plenty of tinder for conflict. But the violence became more organized and more lethal in the first decade of this century. “Bunkering”—stealing oil from the maze of pipelines and flow stations that carry the oil to barges and on to the world market—turned into a very profitable business, and an increasingly violent one. Bands of young men began to attack the flow stations, drilling sites, and oil camps to extract money and pressure companies and local governments. They formed gangs under names like the Bakassi Boys, the Icelanders, the Greenlanders, and the Niger Delta People’s Volunteer Force, and they waged war with rival gangs, fueled by drugs, alcohol, demonic initiations, and occult superstitions.

In the run-up to elections in 2003, as had become the custom, local politicians patronized various armed groups to violently promote their victories and steal oil as a way to raise campaign funds. In March 2003, gangs attacked a series of production sites in the Delta. The oil companies evacuated their personnel, and more than a third of Nigeria’s production—over 800,000 barrels a day—was shut down.

After the 2003 elections, the militias, operating independently, began to acquire more weapons and build themselves into more formidable forces. They stole increasing amounts of oil—sometimes estimated at over 10 percent of Nigeria’s total production (which at 2010 prices would amount to over $5 billion in stolen oil)—in collaboration with former oil workers, corrupt government officials, an international network of oil smugglers, and pirates operating widely in the Gulf of Guinea. Stealing and sabotage were largely responsible for the oil spills that despoiled the Delta. Violence was already endemic, and at such a level that by the end of 2003, an internal report for one of the major oil companies said that “a lucrative political economy of war in the region is worsening” and warned of “increasing criminalization of the Niger Delta conflict.”

The funds from the bunkering, in turn, enabled the militia leaders to further increase their arsenals and acquire much more lethal weapons and, in the words of one observer, “take militia activity to a new dimension of criminality.” As the head of one of the most notorious militias put it, “We are very close to the international waters and it’s very easy to get weapons.”

The wells and gathering systems are strung out through the swampland, mangrove forests, and shallow waters of the Delta, crisscrossed by creeks and streams—all of which provides for good cover and quick getaways on speedboats mounted with machine guns. The region is very densely populated, the birth rate is very high, and poverty is widespread. The inequities breed anger and resentment, on which the militias feed.

In September 2004 a leader of one of the gangs, a self-described admirer of Osama bin Laden and an advocate of the Ijaw ethnic group’s seceding to form its own country, threatened “all-out war” against the Nigerian state. That threat “pushed oil over $50 per barrel for the first time.”9

That was it for President Obasanjo. He summoned the leaders of two of the most violent groups to the federal capital of Abuja, where he met with them in the cabinet room and hammered out a peace accord. It lasted through part of 2005. But then the Delta began to descend back into violence and gang warfare.



“THE BOYS”

In January 2006, four foreign oil workers were kidnapped from a platform in the shallow waters of the Niger Delta, and then gunmen aboard speedboats attacked another oil facility in the Delta, killing 22 people, setting buildings afire, and severely damaging the equipment for managing the flow of oil.

A heretofore unknown group took credit—the Movement for the Emancipation of the Niger Delta. MEND, as it became known, declared that it sought “control of resources to improve the lives of our people.” Claiming several thousand men under arms, MEND warned that it would unleash further attacks that would “set Nigeria back 15 years and cause incalculable losses,” and said it aimed “to totally destroy the capacity of the Nigerian government to export oil.”10

A few days after the January 2006 attacks, in the snow-covered Swiss Alpine village of Davos, at the World Economic Forum, Olusegun Obasanjo, Nigeria’s president, was meeting in a seminar room to discuss his country’s economic prospects. Two of the participants, a venture capitalist from Silicon Valley and a world-famous entrepreneur from Britain, urged Obasanjo to get off oil and emulate Brazil and launch large-scale cultivation of sugarcane to make ethanol. A bemused Obasanjo, president of one of the world’s major oil producers, nodded with feigned enthusiasm and promised to give the idea serious consideration.

Toward the end of the meeting, as Obasanjo was about to leave, he was asked about those recent attacks in Nigeria and whether they presaged a new wave of violence.

It was nothing to get too concerned about, he said with confidence. “The Boys,” as he called them, would be brought under control.

That was not an unreasonable expectation. After all, some of the militia and vigilante groups, including the Bakassi Boys, had been subdued over the previous few years. Moreover, it was difficult to distinguish among all those who attacked the oil industry infrastructure. They all operated with the same kind of tools—those fast speedboats, sometimes with machine guns mounted on them, AK-47s, and stolen dynamite. The picture was further complicated by the shadowy connections between those in speedboats and those in power.

But this time, “the Boys” did not cooperate. The January 2006 attacks were the beginning of a wave of bloody intimidation, kidnappings, and murder. Violence in Nigeria became a key factor in the world oil market. “The balance of world oil supply and demand has become so precarious,” U.S. Federal Reserve chairman Alan Greenspan warned in June 2006, “that even small acts of sabotage or local insurrections have a significant impact on prices.” The dense swamps and intricate network of creeks and waterways made it easy for MEND and similar organizations such as the Martyrs Brigade to attack and then fade back into the jungle—and they did so with impunity. One night shortly after Nigeria’s 2007 presidential election, the Delta family home of Goodluck Jonathan, the new vice president (and now Nigeria’s president), was burned to the ground by one of the gangs. It was meant as a demonstration of power—and as a warning.11

In the face of constant violence in the Delta and the killing and kidnapping of their workers, the international oil companies repeatedly evacuated their employees, closed down facilities, and declared force majeure on shipments. Plans for substantial expansion of capacity were shelved. As it was, without physical security, the oil could not flow. At some points, upward of one million barrels per day—40 percent of Nigeria’s total output—was shut in and lost to the world market. That deficit was one of the key factors in the rise of prices. And it was certainly a loss for the United States, for which Nigeria had just moved up in the rankings to become its third-largest source of imported oil.



NATURAL DISASTER

Somewhere above the west coast of Africa, unseen and unnoticed on a cloudless day, solar radiation penetrated the earth’s atmosphere and struck an expanse of the tropical Atlantic’s surface. The sun’s rays transferred their energy to an enormous number of water molecules, transforming liquid into vapor and sending the molecules back into the sky. Winds off the dry Sahara and the power of the earth’s rotation pushed these clouds of water, now coalescing into large bands of tropical moisture, westward, toward the American continent.

No one took notice until August 13, 2005, when a forecaster at the National Hurricane Center in Miami identified a mass of clouds over the tropical Atlantic, 1,800 miles east of Barbados. Ten days later, those same clouds once again caught the attention of the National Hurricane Center as they merged with another tropical storm and began to slowly churn. On Thursday, August 25, what had now been christened Hurricane Katrina made landfall near Miami Beach but without heavy devastation. The storm gained strength as it passed into the Gulf of Mexico.

By August 28, it had been transformed into a huge storm, a frighteningly ominous black mass, sprawling across the map—from the Yucatán Peninsula in Mexico to the southern United States. With winds as powerful as an EF4 tornado, Katrina was already one of the most powerful storms ever recorded by the National Oceanic and Atmospheric Administration.

America’s largest energy complex is in and around the Gulf of Mexico, and it was right in the bull’s-eye. Over more than six decades, thousands of oil and gas production platforms had been built offshore, both in shallow waters within sight of shore and in deep water far out at sea. At the time, almost 30 percent of U.S. domestic oil production and 20 percent of natural gas production came from the Outer Continental Shelf in the Gulf of Mexico. Almost a third of the country’s entire refining capacity—which turns the crude into gasoline, jet fuel, diesel, and other products—stretches along the shores of the Gulf.

Now, with Katrina approaching, the entire offshore industry went into emergency mode. Workers rushed to shut in the wells, secure the platforms, and activate automatic systems; they then hurriedly climbed into helicopters and raced the increasingly powerful winds back to shore.

As winds reached a peak strength near 175 miles per hour, Katrina hit the offshore energy complex and then slammed with devastating force and surging seas along the Louisiana, Mississippi, and Alabama coasts, blowing down buildings, washing away homes, overturning cars, ripping out power lines, flooding the entire region, and forcing 1.3 million to flee as temporary refugees.12

What ensued was a human tragedy of far-reaching proportions. The worst violence was reserved for New Orleans, where the levees were breached, opening the way for the waters to flood into streets and homes built below sea level, submerging large parts of the city, forcing up to 20,000 people to seek refuge in the Superdome, and leaving more than 1,800 dead.

Rita, a new storm, also one of the most violent hurricanes ever recorded, similarly spawned in the tropical Atlantic, headed straight down the center of the Gulf. Once again, the industry sprang into emergency mode. Rita hit the platforms that had been spared by Katrina and then tore through onshore oil refining centers, leaving some of them severely damaged and flooded.

Altogether, more than 3,000 platforms and 22,000 miles of undersea pipeline were in the direct path of the two storms. A total of 115 platforms were completely destroyed (most of them older ones, not built to 1988 standards); 52 were damaged, as were 535 segments of pipeline. Yet so effective were the environmental containment measures that the offshore production facilities did not leak. At the peak, the hurricanes knocked out 29 percent of total U.S. oil production and almost 30 percent of U.S. refining capacity. Months later, a significant part of the production and refining operations was still not back on line.13

Onshore, some 2.7 million people were left without electricity. With electric power down, the long-distance pipelines that carry gasoline and other refined products to the East Coast could not operate, and supplies became very tight in the Southeast and the Mid-Atlantic states. The gasoline may have been sitting there in the underground tanks at the stations. But without electric power there was no way to pump it out and into the tanks of the ambulances and police cars and fire engines and repair trucks so that they could carry out their rescue and repair missions amid the chaos and devastation.

Oil prices surged upward, both because of the disruption itself and as word of shortages sent tremors of panic and fears of gas lines through the public. The two storms sparked the largest disruption of oil supply in the history of the United States—a loss, at its peak, of 1.5 million barrels per day. Other countries took the unprecedented step of shipping emergency stocks of oil to the United States to help make up for the shortfall.



By 2006 production was recovering in the Gulf of Mexico, and supplies from offshore were once again making their way to consumers. But the market continued to feel the impact of the various losses of supply from the aggregate disruption. Moreover—in addition to Venezuela, Nigeria, and Katrina and Rita—another disruption was having a big impact on the world market. This one was in the very heart of the Middle East.


7

WAR IN IRAQ

In late 2002, Philip Carroll received a phone call from an official in the Pentagon. The Department of Defense was putting together an advisory group on oil, and Carroll was a sensible choice. Twice retired—first as CEO of Shell Oil USA and then of the engineering company Fluor—Carroll came equipped with considerable international experience in the logistics and infrastructure of energy supply, as well as a reputation for diplomatic skill.

The questions were about how and what to plan for, in terms of oil, in the event of war. Two things were known: Iraq was highly prospective but had not really been explored since the 1970s and indeed was one of the least explored of all the major oil-exporting countries. And its industry was in poor condition, although no one really knew how poor. Carroll recommended that the DOD do an in-depth study and think through how the industry could be managed during the postwar transition. A few months later, in early 2003, Carroll was formally asked if he would go out to Iraq as oil adviser following U.S. military action. He would become one of about twenty senior advisers, each assigned to advise and help direct an Iraqi ministry. By that time it was more than clear that the United States, along with Britain, Australia, Japan, and a score of other nations, in what was called “the coalition of the willing,” would shortly be going to war.



WHY THE WAR?

Iraq was an oil country. Its only export was oil. It was a nation defined by oil, and as such was a country of great significance to the global energy markets. But the ensuing war was not about oil. It resulted from a convergence of factors: the primary ones were the September 11, 2001, attack and its consequences, the threat of weapons of mass destruction, the way the 1991 war ended, the persistence of Saddam’s intransigent and ruthless rule, and the way in which analysis was, and was not, carried out.

Saddam had an “addiction to weapons of mass destruction,” as the head of the U.N. weapons inspection program put it on the eve of the war. For decades the Iraqi dictator had devoted a significant part of the country’s resources to the development of chemical, biological, and nuclear weapons. Despite his agreements with the United Nations after the Gulf War, both Western and neighboring countries believed that Saddam was continuing to develop WMD and that, if not restrained, he would indeed acquire them. For instance, a 1998 National Intelligence Estimate reported that while Iraq’s WMD capability had been damaged by the Gulf War, “enough production components and data remain hidden and enough expertise has been retained or developed to enable Iraq to resume development and production of WMD . . . Evidence strongly suggests that Baghdad has hidden remnants of its WMD programs and is making every effort to preserve them.”

For the war planners, the likely use of such weapons by the Iraqi regime was a central factor in military planning, right up to and into the war itself, when, as a result of intercepted signals, some units carried bulky, cumbersome masks, impermeable gowns, and individual antidotes for chem-bio attacks. The postwar failure to find WMD capabilities, despite much effort, undermined the credibility of the decision making in the eyes of many. Some parts of the U.S. intelligence community—notably the State Department’s Bureau of Intelligence and Research and some in the CIA—had dissented, arguing that Saddam was probably not still pursuing the weapons, but their arguments were discounted. The general view was that Saddam certainly was acting on his addiction. And there was, within the U.S. intelligence community, the Middle East National Intelligence Officer Paul Pillar wrote, “a broad consensus that such programs existed.” There was, however, no agreement on their scale, timing, effectiveness, and utility.1



France and Germany—along with Russia—opposed the decision to go to war at every step. French president Jacques Chirac emerged as a particular foe in the eyes of the war’s supporters, stating that “nothing today justified a war,” and that there was, in his view, “no indisputable proof” of weapons of mass destruction. But Chirac was reflecting the view of the French intelligence service. “We had no evidence that Iraq had weapons of mass destruction,” recalled a senior French policymaker. “And we had no evidence that it did not. It may be that sanctions had worked much better than we had thought.”2

But Saddam made several miscalculations. He thought that the scale of the antiwar demonstrations in Europe would somehow ensure that the coalition would not actually invade. In what proved to be a massive miscalculation, he chose to convey ambiguity as to what he was doing about such weapons—and what he was covering up. To do otherwise, he apparently thought, would have weakened his regime vis-à-vis both Iran and domestic opponents. As he told his inner circle, “The better part of war was deceiving.” To an interrogator after the war, who asked him why he had maintained the illusion, he had a one-word reply: Iran.

There was also the matter of assuming that others saw the world the way he did. It has been suggested that Saddam could never have believed that the 1991 coalition would have stopped short of Baghdad for something as mushy as the “CNN effect” on television viewers around the world and because of the fear of splintering the coalition. He would not believe it because he would not have acted on such reasons. It had to be because they feared that he had equipped his forces with chemical and biological weapons for the final defense of Baghdad. This was a very compelling reason to maintain the illusion.3

From the coalition side, there was good cause to proceed on a worst-case assumption: in the aftermath of the First Gulf War, it was discovered, with some shock, that the Iraqi regime had been six to eighteen months away from a crude nuclear weapon. In retrospect, had Saddam not been so hasty and instead waited until 1993 or 1994, rather than 1990, to invade Kuwait, he would have been in a much stronger position—equipped with some kind of nuclear weapon capability, and operating in a much tighter world oil market. All this would have reduced the flexibility of his opponents.

Having underestimated Saddam’s capabilities once, the Bush administration was not going to repeat that mistake. There was all the more reason for such a response given 9/11 and in light of Saddam’s evident appetite for WMD and his hunger for revenge after 1991. Laura Bush later wrote of her husband, “What if he gambled on containing Saddam and was wrong?” Bush himself said, “That was not a chance I was willing to take.” This gamble seemed all the more risky in the state of permanent anxiety and tension that followed 9/11: after the attacks, a daily litany of reports flowed into the U.S. government about plots and attacks prevented, which only added to the constant apprehension about those plots that might not be nipped in time. “We lived with threat assessments more disturbing than any ever spoken on the air,” said Laura Bush.

As a senior State Department official wrote to Secretary of State Colin Powell prior to the war, “September 11 changed the debate on Iraq. It highlighted the possibility of an Iraqi version of September 11, and underscored concerns that containment and deterrence will be unable to prevent such an attack.” Some argued that Iraqi intelligence had direct links to, and had perhaps even coached, Al Qaeda. Others said that such a link was highly dubious, indeed unlikely, and certainly unsubstantiated. “The intelligence community never offered any analysis that supported the notion of an alliance between Saddam and al Qaeda,” said Paul Pillar, the national intelligence officer. But that did not mean that, under the premise of “the enemy of my enemy is my friend,” there could not be cooperation in the future given their common enmity toward the West.4

Iraq was already at the top of the agenda of some of the senior policymakers prior to their taking office in the administration of George W. Bush. A policy review of options related to Iraqi sanctions had been launched in the summer of 2001. A few days after 9/11, at a meeting of President Bush with his senior advisers at Camp David, some sought to add Iraq as a target for counterattack, alongside Al Qaeda and Afghanistan. At that point Bush was firm in his rejection. In early October 2001, the U.S. ambassador to the United Nations was instructed to read “the toughest message I’d ever been asked to deliver” to Iraq’s ambassador, warning of the dire consequences for Iraq if it tried to take advantage of the 9/11 attacks. But it was not until 2002, fueled by the confidence from what seemed to be the very successful and very short campaign to evict the Taliban from Afghanistan, that plans really began to congeal around a war with Iraq. And, in the aftermath of 9/11, it was going to be a preventive war—launched under what became known as the policy of preemption.5

To the inner circle of decision makers, 9/11 demonstrated the risks of not acting in advance to prevent Saddam’s acquisition of such weapons. Vice President Dick Cheney, who had been secretary of defense during the Gulf crisis, was central to the Iraq decisions. “As one of those who worked to assemble the Gulf War coalition,” he said in 2002, “I can tell you that our job then would have been infinitely more difficult in the face of a nuclear-armed Saddam Hussein.”

President Bush laid out the fundamentals of the new policy in a speech at West Point in June 2002. Traditional “deterrence” did not work against “shadowy terrorist networks.” And “containment” did not work “when unbalanced dictators with weapons of mass destruction can deliver these weapons on missiles or secretly provide them to terrorist allies.” The only answer was “preemptive action.” “If we wait for threats to fully materialize,” Bush added, “we will have waited too long.”

There was also a conviction among some that the existing political systems and stagnation in the Middle East were the breeding grounds for the likes of Al Qaeda and terrorism. A “new” Iraq could be the beginning of the answer. The skillful and clever Iraqi émigré Ahmed Chalabi, claiming to speak both for the exile community and for those within the country, convinced some policymakers that an Iraq without Saddam would welcome the coalition as liberators and would quickly embrace representative democracy. These decision makers were convinced that “a pluralistic and democratic Iraq” would have a transformative effect in the Middle East, and in something akin to the fall of communism, set off a process of “reform” and “moderation” throughout the region.6

Contrary intelligence and analyses that did not fit this vision were pushed aside. Moreover, after thirty-five years of Baathist dictatorship, some could argue that, in any event, not much was really known about such “facts on the ground” as religious cleavages, sectarian rivalries, the importance of tribal loyalties, and the role of Iran. Those who did know something about these details, or who questioned the basic policy convictions, or who warned that these assumptions were too optimistic, were progressively squeezed out of the decision-making process.

The shock of 9/11 created a determination to demonstrate the strength of the United States, reassert a balance of power, and seize the initiative. There was also the desire to finish the “unfinished business” of 1991. After the 1991 Gulf War, Saddam conducted a brutal war against the disenfranchised Shia, which might have been prevented had the armistice not permitted Saddam’s forces to use helicopters in the south.

Some critics said that the war was conducted for the benefit of Israel. The elimination of Saddam’s military power would certainly be a boon for Israel, on which Iraqi Scud rockets had rained during the 1991 Gulf War. But Saddam was already contained and his military much weakened. Israel was much more worried about the Iranian nuclear program. As Richard Haass, the head of policy planning in the State Department, wrote, “The Israelis did not share the administration’s preoccupation with Iraq. Actually, it was just the opposite. The Israelis . . . feared that Iraq would distract the United States from what they viewed as the true threat, which was Iran.” Both Israeli officials, including the minister of defense, who happened to be Iraqi-born, and Israeli experts warned that the administration was greatly underestimating the postwar troubles that would await them in Iraq. As one of Israel’s leading specialists put it at a prewar conference in Washington, D.C., someone needed to tell the U.S. president that American forces would have to be in Iraq for up to five years and “they will not have an easy time there.”7



“OIL”

Oil did not play the same role as these other factors in defining policy. The significance of oil derived from the nature of the region—the centrality of the Persian Gulf in world oil and thus the critical importance of the balance of power in that region. It had been determined U.S. policy since Harry Truman to prevent the Persian Gulf and its oil from falling under the sway of a hostile power. But the possibility of a hostile power—Iraq—achieving dominance in the region, and thus over the region’s oil, loomed much larger during the Gulf crisis of 1990–91, when Iraq had conquered Kuwait and was threatening the Saudi oil fields, than in the run-up to the subsequent Iraq War. At the same time, in 2003, neither the Americans nor the British were pursuing a mercantilist 1920s-style ambition to control Iraqi oil. The issue was not who owned the oil at the wellhead, but whether it was available on the world market. Iraqi oil could be purchased on the world market, albeit managed under the U.N. sanctions program. Indeed, in 2001 the United States imported 800,000 barrels per day from Iraq. A democratic Iraq, it was certainly thought, would be a more reliable provider and, not being under sanctions, could expand its capacity. In the minds of some policymakers, noting the number of Saudi nationals involved in 9/11, the prospect of Iraq’s becoming a much larger exporter that would counterbalance Saudi Arabia was attractive, but this was far from a well-shaped—or well-informed—strategic objective.8

While a variety of ideas were being tossed around for the postwar organization of the industry, the clear policy determination was that the decisions about the future of Iraq’s oil would be made by a future Iraqi government. Nothing should be done to prejudice the prerogatives of the eventual government—even including the subject of OPEC membership—although a nongovernmental oil industry was seen as highly preferable in order to facilitate the introduction of the technology and the tens of billions of dollars of investment that the industry would need. Even in that case, however, a liberated Iraq, with its strong nationalist tradition, was likely to offer terms to investors that were as tough as those of any other petroleum-exporting countries, or tougher.

As war approached in 2002–3, the dominant attitude among the major international oil companies was one of skepticism and caution, and some alarm over the entire idea of war. Many of them were familiar with the region and feared a backlash. They were very doubtful that a stable, peaceful, new-style democracy could be quickly created from the wreckage of the Baathist state.

“You know what I’ll say to the first person in our company who comes to us with a proposal to invest a billion dollars in Iraq?” asked the CEO of one of the supermajors a month before the war. “I’ll say, ‘Tell us about the legal system, tell us about the political system. Tell us about the economic system and about the contractual and fiscal systems, and tell us about arbitration. And tell us about security, and tell us about the evolution of the political system. Tell us all those things, and then we’ll talk about whether we’re going to invest or not.’ ”9



“BEYOND NATION BUILDING”

The immediate issue in 2003 was the state of the Iraqi oil industry and the need to ensure that it operated to provide the revenues that the country required. That, however, would depend upon overall conditions in Iraq.

In overseeing the planning for the war, Defense Secretary Donald Rumsfeld was driven by an imperative—to prove that his design for the light and lethal “new model army” (to borrow a term from Oliver Cromwell) was the model for the army of the future. Rumsfeld was intent on prevailing over the uniformed leadership in the Pentagon, which he considered too cautious, too risk averse, and much too conservative. He was determined to overturn the “overwhelming force” doctrine championed during the 1990–91 Gulf crisis by Colin Powell, then chairman of the Joint Chiefs of Staff and now secretary of state. Instead he wanted to demonstrate on the battlefield that smaller but highly skilled, disciplined, and technologically advanced forces, with “speed and agility and precision,” in his words, were more than sufficient to win a swift victory. And, indeed, a very effective fighting force successfully demonstrated that capability on the battlefield in Iraq in 2003.

But war and postwar—defeating an army on the field and occupying a country—were two very different propositions. In cultural, logistical, training, and regional political terms, little had been done to prepare the military or the civilian arms of the U.S. government for an occupation of open duration. As it turned out, the troop levels required for a swift victory were only a fraction, perhaps a third, of what was needed after the war to occupy and stabilize the country. Shortly before the war, Army Chief of Staff Eric Shinseki had told a Senate committee that, based on U.S. experience ranging from post–World War II Germany to Bosnia in the 1990s, “several hundred thousand” troops—on the order of 260,000—was the right size. To say his comments were unwelcome would be an understatement. He was immediately disavowed and summarily retired. For good measure, the secretary of the army, who had supported his view, was also fired.

Rumsfeld was also determined to denigrate and banish the kind of “nation building” that had engaged U.S. forces in the Balkans during the Clinton administration in the 1990s. A month before the Iraq War, Rumsfeld delivered a speech titled “Beyond Nation Building,” in which he proclaimed Afghanistan a complete victory and contrasted that to what he said was the “culture of dependence” in the Balkans in the 1990s. The prime example that he cited to prove what was wrong with nation building was that of a driver who, while shuttling aid workers around Kosovo, earned more than a university professor. “The objective is not to engage in what some call nation building,” he declared. “If the United States were to lead an international coalition in Iraq,” he added, the objective would be “to leave as soon as possible.”

Afghanistan, he said, was the proof of the right way to do things. For what seemed to be the remarkably swift victory in Afghanistan in the autumn of 2001 had reinforced Rumsfeld’s assumptions—and the self-confidence that underlay them. As Rumsfeld put it, the Soviets had hundreds of thousands of troops in Afghanistan “for year after year after year,” while the United States, with “tens of thousands” did in “eight, nine, ten, twelve weeks what [the Soviets] weren’t able to do in years.” (Some pointed out that the USSR had also made short work of its invasion; it was in the long occupation that it failed.)

But the intervention in the Balkans in southeast Europe, as difficult as it was, was a much simpler undertaking than invading Iraq, a major Arab country in the Middle East that had been under tight dictatorial control for thirty-five years, and then demolishing all of its institutions, creating a giant vacuum, all under the premise that, as one U.S. official in Iraq put it, a “Jeffersonian democracy” would sprout almost overnight.

Rumsfeld’s position was reinforced by the U.S. commander Tommy Franks, who made clear that his intention was to pull U.S. troop levels down as fast as possible after the initial victory. Some advocates within the Bush administration were further propelled by the belief that the war would not be difficult—that a “lightning victory” would be followed by a quick withdrawal and the emergence of that new Iraqi democracy. With such a mind-set, not much thought needed to be given to the planning for what would happen after the war.10

Nor was much thought given to the budgetary implications, for a quick war would surely also be cheap. As it turned out, the war was not quick and the subsequent occupation cost more than a trillion dollars in direct outlays.



NOT A CAKEWALK

Some voices in and around the U.S. government urged caution. The intelligence community on its own initiative developed an analysis of “the principal challenges that any postwar authority in Iraq” would likely face. Among the principal conclusions: Iraq was not a “fertile ground for democracy” and any transition would be “long, difficult, and turbulent.” The intelligence analysts could feel “a strong wind consistently blowing,” but it was not in their direction.

One of the most widely respected senior statesmen in Washington was Brent Scowcroft. He had been national security adviser to two former presidents—Gerald Ford and George H. W. Bush. He had worked closely with Dick Cheney when Cheney was secretary of defense during Desert Storm, and the current national security adviser Condoleezza Rice had been one of his deputies during the George H. W. Bush administration. Moreover, he spoke with considerable current authority. He was, after all, chairman of the President’s Foreign Intelligence Advisory Board. “An attack on Iraq at this time would seriously jeopardize, if not destroy, the global counterterrorist campaign we have undertaken,” he wrote in a Wall Street Journal article in August 2002. “If we are to achieve our strategic objectives in Iraq, a military campaign in Iraq would likely have to be followed by a large-scale, long-term military occupation.” He added, “It will not be a cakewalk.”

Scowcroft had been among the key policymakers in the decision not to go to Baghdad and depose Saddam during the Gulf War in 1991. In Scowcroft’s mind, it was not only because of the “CNN factor” and the likely splintering of the coalition. It was exactly because of the risks of a long occupation. During the 1991 war, the first President Bush had ordered up a study on the lessons from previous conflicts. “Don’t change objectives in the middle of a war just because things are going well,” was one of the prime lessons that Scowcroft had taken away from that study. “We learned that from Korea.” In 1991 Scowcroft had been convinced that capturing Baghdad would “change the character of what we were doing. We would become the occupiers of a large country. We don’t have a plan. What do we do? How do we get out?” Those were the same questions that troubled Scowcroft in 2002.

The month following Scowcroft’s article, Richard Haass, head of policy planning in the State Department, wrote to Secretary of State Colin Powell. “Once we cross the Rubicon by entering Iraq and ousting Saddam ourselves, we will have much greater responsibility for Iraq’s future.... Without order and security, all else is jeopardized.”

The inadequacy of forces would have far-reaching impact on what would transpire over the next several years in Iraq, including the fate of its oil industry and the direction of the global oil market. And, in turn, what would happen to the oil industry would be central to Iraq’s future.

Iraq was a petro-state—about three quarters of its GDP was derived from oil around the time of the war, and 95 percent of government revenues would come from oil after the war. There were extremely optimistic expectations about how quickly production and exports could be restored and put on a growth track. Just prior to the war, Deputy Defense Secretary Paul Wolfowitz had declared that, with restored oil exports, Iraq “can really finance its own reconstruction.” He suggested that Iraq could soon be at 6 million barrels per day, double its current capacity.11

The war began on March 20, 2003, Baghdad time, some twelve years after the end of the first Gulf War. By April 9, U.S. forces had captured Baghdad. That same day, American soldiers helped Iraqis pull down the giant statue of Saddam Hussein in a downtown square, a scene reminiscent of the end of communism in Eastern Europe and one that seemed to promise that a “pluralistic and democratic Iraq” was at hand. Up to this point, things had gone according to plan.

But what would happen thereafter? General Franks, the U.S. commander, thought he had the answer. Not long after that initial victory, he posited that U.S. forces would be drawn down to 30,000 by September 2003—a little more than a tenth of what, others argued, historical experience suggested was the prudent number.12



THE OIL INDUSTRY: “DILAPIDATED AND DEPLORABLE”

The actual state of the oil industry ensured that it was in no condition to meet the heady prewar expectations. The industry was suffering from years of neglect and lack of investment. With the collapse of Saddam’s regime, communication had broken down, the country was in chaos, and no one was in charge. Most of the government buildings in Baghdad were looted and burned. A notable exception was the oil ministry, which was secured by units of the U.S. Army’s 3rd Infantry.

A few days after the fall of Baghdad, an experienced Iraqi technocrat showed up at the gate of the ministry and asked to speak to someone about getting the industry restarted. This was Thamir Ghadhban, who had been chief geologist and then head of planning for the Iraq National Oil Company. He eventually connected over a satellite phone with Phil Carroll, who at this point had not yet arrived in Iraq. After several conversations, Carroll finally asked Ghadhban if he would like to be “chief executive” of the Iraqi oil industry, with Carroll as chairman. They became the core of the team charged with getting the oil sector up again. It was hard going.

Although Iraq’s potential was considerable, it had not been seriously explored since the 1970s. Out of eighty discovered oil fields, only twenty-three had been put into production. In 1979–80 the Iraqi oil industry had worked out a plan to raise output to six million barrels per day, but it had never been put into effect because of the Iran-Iraq War in the 1980s and then the 1990–91 Gulf crisis. Instead the industry went into a long decline. Now, after the invasion, workers were frightened to go to work because of the lack of security. Carroll and Ghadhban concluded that the physical capacity of the Iraqi industry was just under 3 million barrels a day, less than half of the 6 million barrels per day that had been cited as a “reasonable” target. They set a series of more realistic targets aimed at reaching that 3 mbd level by the end of 2004.13

But the obstacles were formidable. Despite fears prior to the war that Saddam’s forces might blow up the wells and then set oil fields on fire, as they had done in departing Kuwait in 1991, the oil infrastructure, in fact, went through the war largely unscathed. Yet the overall conditions of the industry were, in Carroll’s words, “dilapidated and deplorable.” The underground reservoirs had been damaged by years of mismanagement. The sanctions had also had their impact. Equipment was rusting and malfunctioning. The machinery and systems were obsolete. The control room in the key Daura Refinery, near Baghdad, said Carroll, “was a time warp, right out of the 1950s.” Indeed, it had been installed by an American company in the mid-1950s, when Iraq was still ruled by a king. Environmental pollution was also widespread. From a practical standpoint, what kept the industry going was the skill of Iraqi engineers; they were geniuses at improvisation. But now, with the looting and the breakdown in the infrastructure of the country in the aftermath of the war, conditions were even worse. There were no phone links to the refineries or the oil fields. Even the normal tools for measuring the flow of oil were absent.

As Carroll saw it from his vantage point, there were three priorities for the restoration of the Iraqi oil industry—and the rest of the economy—“security, security, and security.” But none of the three was being met. The collapse of the organized state and the inadequacy of the allied forces left large parts of the country very lightly guarded, and the forces that were there were overstretched.14 And what crippled everything else was the disorder that was the consequence of two decisions haphazardly made by the Coalition Provisional Authority, the entity set up to run the American-led occupation.



“DE-BAATHIFICATION” AND THE ARMY’S DISSOLUTION

The first was “Order #1—De-Baathification of Iraqi Society.” Some two million people had belonged to Saddam’s Baath Party. Some were slavish and brutal followers of Saddam; some were true believers. Many others were compelled to join the Baath Party to get along in their jobs and rise up in the omnipresent bureaucracies and other government institutions that dominated the economy, and to ensure that their children had educational opportunities in a country that had been ruled by the Baathists for decades. The very choice of the name of the edict showed its model—the denazification program in Germany after World War II. But that program had actually been applied quite differently in very different circumstances. Postwar Iraq was not postwar Germany, nor for that matter postwar Japan; and the Coalition Provisional Authority under L. Paul Bremer III was not the military administration of General Lucius Clay, America’s proconsul in postwar Germany, or the occupation in Japan under General Douglas MacArthur.

Initially, de-Baathification was meant only to lop off the top of the hierarchy, which needed to be done immediately. But as rewritten and imposed, it reached far down into the country’s institutions and economy, where support for the regime was less ideological and more pragmatic. The country was, as one Iraqi general put it, “a nation of civil servants.” Many schoolteachers were turned out of their jobs and left with no income. The way the purge was applied removed much of the operational capability from government ministries, dismantled the central government, and promoted disorganization. It also eliminated a wide swath of expertise from the oil industry. Broadly, it set the stage for a radicalization of Iraqis—especially Sunnis, stripped of their livelihood, pensions, access to medical care, and so forth—and helped to create conditions for the emergence of Al Qaeda in Iraq. In the oil industry, the result of its almost blanket imposition was to further undermine operations.

Aleksander Kwaśniewski, president of Poland, one of the countries in the “coalition of the willing,” argued with Defense Secretary Rumsfeld that the post–World War II German model was misunderstood and was being misapplied. Rather, said Kwaśniewski, the United States should pay attention to the more recent model from Eastern Europe, where reformist wings of the former communist parties had been successfully integrated into the new political systems—an approach that had brought both cohesion and stability. Kwaśniewski’s Polish troops were welcomed into the coalition, but not his argument.15

The U.S. occupation arrived with a mélange of many ideas and analogies and lessons—ranging from a vision of a “New Middle East” to remembered film images of the joyous French tossing flowers at the U.S. soldiers liberating them from Nazi rule. Whatever their actual relevance to conditions in Iraq in 2003, these ideas nevertheless shaped the approach on the ground after the hostilities. Important realities of culture, history, and religion featured less.

The problem of inadequate troop levels was compounded by Order #2 by the Coalition Provisional Authority—“Dissolution of Entities”—which dismissed the Iraqi Army. Sending or allowing more than 400,000 soldiers, including the largely Sunni officer corps, to go home, with no jobs, no paychecks, no income to support their families, no dignity—but with weapons and growing animus to the American and British forces—was an invitation to disaster. The decision seems to have been made almost off-hand, somewhere between Washington and Baghdad, with little consideration or review. It reversed a decision made ten weeks earlier to use the Iraqi Army to help maintain order. In bluntly criticizing the policy to Bremer, one of the senior U.S. officers used an expletive. Rather than responding to the substance of the objection, Bremer said that he would not tolerate such language in his office and ordered the officer to leave the room.

The immediate effect of the army’s dissolution was “incendiary,” and the consequences would prove enormous. A plan was formulated to create a new military, but the ambition was pathetically small—initially just 7,000 troops, later lifted to 40,000. A separate oil police had guarded the entire petroleum sector. That too was dissolved, adding to the risks for the workers in the oil industry and leaving the oil system even more vulnerable to pillage and sabotage.16



RAMPANT LOOTING

Looting seemed to have been endemic in Iraq whenever authority broke down, going back to the 1958 revolution. Widespread looting had broken out in the aftermath of the 1991 Gulf War. Yet that risk too seems to have gone largely unnoted in the planning for the postwar situation. In 2003 looting and vandalism started immediately, and on a massive scale. There was no Iraqi Army to help prevent the looting, but there was now a large number of disgruntled and unemployed former soldiers. When it first began, Defense Secretary Rumsfeld dismissed it with the famous phrase “Stuff happens.” But it undermined the entire economy and highlighted the immediate lack of security. Two of the three sewage plants in Baghdad were so thoroughly looted that they had to be rebuilt. Even police stations were stripped of their electric wires, phones, light fixtures, and doorknobs. The oil industry was a prime target for this stripping. For instance, all the water pumps, critical to its operation, were stolen from the giant Rumaila oil field. Only by mustering his workers with their private arms did the head of the Daura Refinery succeed in holding off an army of looters at the refinery gate.

One of the most devastating impacts resulted from the wholesale looting of the electric system, on which the whole economy depended. Vandals took down the electric wires and pulled down the transmission towers and carted their booty off to Iran or Kuwait to sell as scrap. Even the computerized control room of the power station that controlled Baghdad’s electric grid was looted. This continuing disruption hit the oil industry hard. Without electricity, many of the oil fields and the three surviving refineries simply could not operate. It also crippled the irrigation on which agriculture depended.17

Despite the looting, in the first several months the occupation seemed to be making some progress. And such was the ingenuity of the Iraqi oil people that, even in the face of deprivation, petroleum production was being restored and was actually ahead of target. By late summer, one could detect a certain note of triumphalism in some commentaries, along with a growing confidence that Iraq really did presage a “new” Middle East.



INSURGENCY AND CIVIL WAR

But the occupation was not going according to plan. Rumsfeld had called the emerging insurgents “dead-enders.” But soon the U.S. commander in Iraq was talking about “a classical guerrilla-type campaign,” and one of the senior British representatives was warning that “the new threat” was “well-targeted sabotage of the infrastructure.” Unemployment was running at 60 percent. Yet this unemployment, even with all its obvious risks, was not the top economic priority. Instead U.S. officials were focused on transforming Iraq, which had a totally state-dominated economy, into a free-market state, and on doing so as rapidly as possible. Meanwhile, as one American general warned, “the liberators” were coming to be seen as something else—“occupiers.”

By the autumn of 2003, a new, more difficult phase was beginning. In due course, some would call it a civil war; others, an insurgency. As events played out, it would be both—a civil war between Shia and Sunnis, and an insurgency manned by Baathists and other Sunni activists, increasingly conjoined with foreign jihadists, abetted by unemployed young men (who, for a hundred dollars or even fifty dollars, could be hired to open fire on the Americans).18

By the spring of 2004 it would become a war against the occupation. Private militias were battling each other. Foreign jihadists were infiltrating the country. Killings and revenge killings became a daily occurrence. Roadside bombs were becoming increasingly lethal. Car bombs were going off outside restaurants and offices. The leadership of the occupation withdrew into the safety of the heavily secured Green Zone. In May 2004, Jeremy Greenstock, who had been the senior British representative in Baghdad, lamented that Bremer, as the U.S. head of the occupation, did not have a plaque on his desk that said “Security and jobs, stupid.”19



THE INDUSTRY UNDER ATTACK

The oil industry was by then under attack. The former Baath Party put high priority on sabotaging the industry in a plan it called its Political and Strategic Program for the Armed Iraqi Resistance. Pipelines were being blown up; the export line, from Iraq into Turkey and to the Mediterranean, was shut by repeated bombings. The great expectations for the rapid expansion of Iraqi output were being punctured. Increasingly, the struggle was to maintain exports, especially in the north.

With his term as oil adviser over, Phil Carroll returned to the United States in the autumn of 2003. He was succeeded by Rob McKee, who had headed exploration and production for ConocoPhillips around the world.

“From the moment I got there, I saw that we didn’t have enough people on the ground to do what needed to be done,” said McKee. “Everything was broken. There was no police, no order, no courts, no infrastructure, and lack of electricity and water. Every day was a firefight, literally and figuratively. You’d come in the morning and get word that something had been blown up or looted. And then you’d figure out how to get that fixed before you could turn back to the longer-term, bigger issues.”

On top of that were the procedures of the U.S. government. “All the bureaucracy over bidding and contracting, all that slowed things down to a crawl,” said McKee. “That was the most frustrating thing I had to deal with.”20



THE IRAQI DISRUPTION

Yet such was the effort that output in 2004 came close to prewar levels in several months; for the year as a whole, however—as a result of the violence, the economic disarray, and electricity shortages—it was more than 20 percent lower. Exports were often disrupted. In what could have been a disaster, two suicide bombers in a motorized dinghy came close to blowing up part of the critically important offshore oil export terminal, but the craft exploded short of its target. Naval patrols were much tighter thereafter.

As the insurgency stepped up its attacks, the effect was being felt in the world oil market. “Last week’s attacks on key pipelines,” reported Petroleum Intelligence Weekly in June 2004, “have reduced exports of around 1.6 million barrels per day to zero with no immediate prospect that they will resume. While bad enough for Iraq, the export outage has left world oil markets with a tiny sliver of spare capacity concentrated in Saudi Arabia.... Global oil supplies have relatively little slack.”21

Again and again, exports were reduced or temporarily halted. In the years following the invasion, Iraqi production remained, at best, at only two thirds of capacity. It was not until 2009 that it was able, on an annual basis, to reach the prewar level of 2001, itself still considerably below the kind of potential that the country could achieve with investment. Before the war there had been high expectations about how Iraq’s growing output would contribute to stability in the world oil market. Instead Iraq’s beleaguered oil industry, producing well below its capacity, ended up contributing, on a sustained basis, to the toll of the aggregate disruption.



WHAT DID YOU LEARN?

In the autumn of 2003, when Phil Carroll, the first oil adviser, finished his tour, he stopped in Washington on his way back to Houston to visit the Pentagon. He was taken in to see Defense Secretary Rumsfeld. The secretary mainly had two questions for Carroll: “Did you enjoy it?” And, “What did you learn?”

There was not much more to the discussion than that. Carroll headed on home.


8

THE DEMAND SHOCK

On one still afternoon under an Oklahoma sun, neither a cloud nor an ounce of “volatility” was in sight. All one saw were the somnolent tanks filled with oil, hundreds of these tanks, spread over the rolling hills, some brand-new, some more than seventy years old, and some holding, inside their silver or rust-orange skins, more than half a million barrels of oil each.

Here, in a physical sense, was ground zero for the world oil price. For this was Cushing, Oklahoma, the gathering point for the light, sweet crude oil known as West Texas Intermediate—or just WTI. This was the price that one heard announced every day, as in “WTI closed today at . . .”

Cushing proclaims itself, as the sign on the main road into town says, the “Pipeline Crossroads of the World.” Through this quiet town passes the network of pipes that carry oil at the stately speed of four miles per hour from Texas and Oklahoma and New Mexico, from Louisiana and the Gulf Coast, and from Canada, too, into Cushing’s tanks. From there the oil flows onward to refineries where the crude is turned into gasoline, jet fuel, diesel, home heating oil—all the products that people actually use. But that is not what makes Cushing so significant. After all, there are other places where still more oil flows. Cushing plays a unique role in the new global oil industry because WTI is a preeminent benchmark against which other barrels are priced.

Soon after its discovery in 1912, the Cushing oil field achieved star status as “The Queen of the Oil Fields.” For a time, it produced almost 20 percent of all U.S. oil. The town of Cushing became one of the classic wild oil boomtowns of the early twentieth century, a place where, as one journalist wrote at the time, “any man with red blood gets oil fever.”1

After Cushing’s production declined, the town turned into a key petroleum pipeline junction. When the futures market started to trade oil futures in 1983, it needed a physical delivery point. Cushing, its boom days long gone, but with its network of pipelines and tank farms and blessed by its central location, was the obvious answer. As much as 1.1 million barrels per day passes in and out of Cushing—a great deal of oil in absolute terms, but equivalent to only about 6 percent of total U.S. oil consumption. That oil is the physical commodity that provides the “objective correlative” to the “paper” barrels and “electronic” barrels traded around the world.

A couple of other crudes are also used as markers, most notably Brent, based on North Sea oil. Nevertheless, prices for a good deal of the world’s crude oil are set against the benchmark of the WTI oil—also known as domestic sweet—sitting in those tanks in Cushing, making what is today a quiet little Oklahoma town, its fever long gone, one of the hubs of the world economy. But Cushing’s sedateness would stand in increasing contrast to the growing clamor and controversy set off by the ascending price of oil in the global market. And what a clamor and controversy it was.



THE SURGE

The remarkable ascent of oil prices that began in 2004 ignited a furious argument as to whether the great surge was the result of supply and demand or of expectations and financial markets. The right answer is all of the above. The forces of supply and demand were very powerful. But over time they were amplified by the financial markets, embodying the new dynamics of oil.

The twenty-first century brought a profound reshaping of the oil industry—the “globalization of demand”—that reflected the reordering of the world economy. For decades, world consumption had been centered in the industrial countries of what was called the developed world—primarily North America, Western Europe, and Japan. These were the countries with most of the cars, most of the paved roads, and most of the world’s GDP. But, inexorably, that predominance was ebbing away with the rise of the emerging economies of the developing world and the growing impact of globalization.

Even though total world petroleum consumption grew by 25 percent between 1980 and 2000, the industrial countries were still using two thirds of total oil as the new century began. But then came the shock—the demand shock—that hit the world oil market in 2004. It propelled consumption upward and, combined with the aggregate disruption, had a startling impact on price. It was also a shock of recognition of a new global reality. Between 2000 and 2010, world oil demand grew by 12 percent. But by now, the split between the developed and the developing world was 50–50.

As far back as 1973, it seemed that whenever an upheaval shook the world oil market, sending prices flying up, it was always some kind of “supply shock”—in other words, a disruption of the supply lines. This was true whether it was the oil embargo at the time of the 1973 October War, or the turmoil that came with the Iranian Revolution in 1978–79, or the Gulf crisis of 1990–91. The last significant demand shock had been the swiftly rising consumption in Europe and Japan at the end of the 1960s and early 1970s that had tightened the global supply-demand balance, setting the stage for the 1973 oil embargo. But that was a long time ago.

The new demand shock was powered by what was the best global economic performance in a generation and the shift toward the emerging market nations as the engines of global economic growth. Yet this had taken the world by surprise.

As 2004 began, the consensus expectation was still centered on what OPEC had taken as its $22-to-$28 price band. Market projections were for standard growth in consumption. In February 2004, OPEC ministers met in Algiers. “Every piece of paper we had,” said one minister, “indicated we are going into a glut.” Fearing a price “rout,” OPEC announced plans for a substantial production cut.

“The price can fall, and there is no bottom to it,” warned Saudi petroleum minister Ali Al-Naimi after the meeting. “You have to be careful.” He added, alluding to the Jakarta meeting and the Asian financial crisis, “We can’t forget 1998.”

Prices rose after the announcement of the production cut, as anticipated. But then, unexpectedly, they continued to rise. The reason was not immediately obvious. Shortly after Algiers, Naimi went to China. What he encountered there convinced him that what was needed was not a cutback in world production but additional output. “We had seen the trend in China since the early 1990s,” said one Saudi. “But the cumulative effect was greater than any of us had realized. China was facing a shortage at the time. It was a structural change in the oil market.”2

China was on a red-hot growth streak. Economic growth in 2003 was 10 percent; in 2004, another 10 percent. Coal, the country’s main source of energy, simply could not keep up with the demands of China’s export machine. Compounding the shortages, the railway system that carried the coal was overloaded and gridlocked, and long trains of coal cars sat sidetracked across the country. Oil was the only readily available alternative for electricity generation, whether in power plants or in diesel generators at factories. As an insurance policy, enterprises were also stockpiling extra petroleum supplies. Oil demand normally grew at 5 or 6 percent a year in China. In 2004 it was growing at an awesome 16 percent—a rate even more rapid than that of the overall economy. The world market was not prepared. By August headlines were reporting soaring prices in “the incredibly strong crude market.”

The world economy was moving into a new era of high growth. Between 2004 and 2008, Chinese economic growth averaged 11.6 percent. India, entering on the “growth turnpike,” would average over 8 percent during those same years. Strong global growth translated into higher oil demand. Between 1999 and 2002, world oil demand increased 1.4 million barrels per day. Between 2003 and 2006, it grew by almost four times as much—4.9 million barrels.

That was the demand shock.



THE TIGHTEST MARKET

All the elements were there for an oil boom: Spending to develop new supplies had been held in check by the trauma of the 1998 price collapse. But demand was now surging, and the disruptions—in Venezuela, Nigeria, and Iraq—were taking supplies off the market. The result would be a historically tight market.

Usually the global oil industry operates with a few million barrels of shut-in capacity—that is, production capability that is not used. Between 1996 and 2003, for instance, spare capacity had averaged about 4 million barrels per day. That shut-in capacity is a security cushion, a shock absorber to manage sudden surges in demand or some kind of interruption. One supplier country has made an explicit commitment to hold significant spare capacity. Saudi Arabia’s policy is to build and maintain spare capacity of between 1.5 and 2 million barrels per day in order to promote market stability. But for other countries, spare capacity is somewhat inadvertent. In 2005, however, the surge in demand and disruptions of supply shrank spare capacity to no more than a million barrels a day. In other words, the cushion was virtually gone. In terms of absolute spare capacity, the oil market was considerably tighter than it had been on the eve of the 1973 oil crisis. In relative terms it was even tighter, as the world oil market was 50 percent bigger in 2005 than in 1973.
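To put the relative tightness in perspective, here is a minimal sketch of the cushion arithmetic. The world demand figures below are rough, commonly cited orders of magnitude and are assumptions, not numbers from the text.

```python
# Illustrative arithmetic for the spare-capacity cushion described above.
# Demand figures are rough assumptions, used only to show why the
# "relative" tightness of 2005 was so severe.

def cushion_share(spare_mbd: float, demand_mbd: float) -> float:
    """Spare capacity as a percentage of world oil demand (mb/d)."""
    return 100 * spare_mbd / demand_mbd

print(round(cushion_share(4.0, 80.0), 1))  # late-1990s-style cushion: ~5%
print(round(cushion_share(1.0, 84.0), 1))  # 2005: ~1.2%, virtually no cushion
```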

In such circumstances the inevitable happens. Price has to rise to balance supply and demand by calling forth more production and investment on one side of the ledger, and on the other, by signaling the need for moderation in demand growth. By the spring of 2005, OPEC’s $22-to-$28 price band was an artifact of history. Many may now have thought that $40 to $50 was the “fair price” for oil. But that was only the beginning.

Other factors reinforced the rising price trend. In the aftermath of the 1998 price collapse, the industry had contracted, and then had continued to do so, on the basis of expectations for low prices. It was focused on keeping spending under tight control. As late as August 2004, the message from one of the supermajors was that “our long-term price guidelines are around the low $20s.” Or, as the chief financial officer of another of the supermajors put it, “We remain cautious.” The industry continued to fear another price collapse that would undermine the economics of new projects. Investors exerted tremendous pressure on managements to demonstrate “capital discipline” and hold back spending. The reward was a higher stock price. And if companies did not heed the admonition, they would be punished with a lower stock price. As one such investor warned in mid-2004, if companies started increasing investment because of higher oil prices, “I’d look at that skeptically.”3



WHERE ARE THE PETROLEUM ENGINEERS?

“Capital discipline” translated into caution. The mantras were “take out costs” and “reduce capacity.” That meant reductions in people, drilling rigs, and everything else. In the late 1990s and early years of the 2000s, not only did many skilled people leave the industry, but university enrollments in petroleum engineering and other oil-related disciplines plummeted. If there were no jobs, what was the point?

But the sharp increase in demand in 2004 and 2005 delivered an abrupt jolt. No longer was the fear about going back to 1998 and a giant surplus that would tank prices. Now it was just the opposite—not having enough oil. Hurriedly switching gears, the industry went into overdrive to develop new supplies as fast as possible. Companies started competing much more actively for acreage and access to resources. As would be expected, the price of entry for new production opportunities went up. Nations were making more money than they had anticipated and thus were tougher in their financial demands on companies, and in this more competitive environment, they could get the terms they wanted. Competition for exploration and production opportunities was made even more intense by the arrival of new entrants in the international business—national oil companies based in emerging-market countries—which were willing to spend to gain access.

The industry was hamstrung in its ability to respond. Contraction had taken its toll. There were not enough petroleum engineers, not enough geologists, not enough drilling rigs, not enough pipe, not enough supply ships, not enough of everything. And so the cost of everything was bid up. Shortages of people and delays in the delivery of equipment meant that new projects took longer than planned, adding to the budget overruns.

On top of that, the cost of the inputs—such as the steel that went into platforms, and nickel and copper—was also rising dramatically as China’s appetite for commodities continued to draw in supplies from all over the world. This was the era of the great bull market for commodities.

The economic impact of all these shortages was stunning. Total costs for doing business ended up more than doubling in less than half a decade. In other words, the budget for developing an oil field in 2008 would have been twice what the budget for the same field would have been in 2004. These rising costs also, inevitably, contributed to the rising price of oil.4



“FINANCIALIZATION”

Then there was the matter of currencies; in particular, the dance between oil and the dollar. In this period, commodity prices would, in the jargon of economists, “co-move negatively with the U.S. dollar exchange rate.” Put more plainly, it meant that when the dollar moved down, oil prices moved up. Petroleum is priced in dollars. For part of this period the dollar was weak, losing value against other currencies. Traditionally during times of political turmoil and uncertainty, there is a “flight to the dollar” in the search for safety. But during this period of dollar turmoil, the flight was to commodities, most of all to petroleum, along with gold. Oil was a hedge against a weaker dollar and the risks of inflation. So as the “price” of the dollar went down against other currencies, particularly the euro, the price of oil went up.5

More generally the financial markets and the rising tide of investor money were having increasing impact on the oil price. This is often described as speculation. But speculation is only part of the picture, for oil was no longer only a physical commodity; it was also becoming a financial instrument, a financial asset. Some called this process the “financialization” of oil. Whatever the name, it was a process that had been building up over time.6



THE RISE OF OIL TRADING

Into the 1970s, there really was no world oil market in which barrels were traded back and forth. Most of the global oil trade took place inside each of the integrated oil companies, among their various operating units, as oil moved from the well into tankers, and then into refineries and into gasoline stations. Throughout this long journey, the oil remained largely within the borders of the company. This was what was meant by “integration.” It was considered the natural order of the business, the way the oil industry was to be managed.

But politics and nationalism changed all that. In the 1970s the oil-exporting countries nationalized the concessions held by the companies, which they regarded as holdovers from a more colonial era. After nationalization, the companies no longer owned the oil in the ground. The integrated links were severed. Significant amounts of oil were sold under long-term contracts. But oil also became an increasingly traded commodity, sold into a growing and variegated world oil market. Those transactions, in turn, were handled both by newly established trading divisions within the traditional companies, and by a host of new, independent commodity traders.

A change in the United States gave a further boost to this new business of oil trading. From the early 1970s onward, the federal government controlled and set the price of oil. These price controls were originally imposed during the Nixon administration as an anti-inflation initiative. They did succeed in creating a whole new federal bureaucracy, an explosion in regulatory and litigation work for lawyers, and much political contention. But the controls did little for their stated goals of limiting inflation—and did nothing for energy security. In 1979, after a bruising political battle, President Jimmy Carter implemented a two-year phase-out of price controls. When Ronald Reagan took over as president in January 1981, he speeded things up and ended price controls immediately. It was his very first executive order.

This shift from price controls to markets was not just a U.S. phenomenon. In Britain, the government shifted from a fixed price for setting petroleum tax rates to using the spot price. As its benchmark, it used a North Sea stream called Brent.7



FROM EGGS TO OIL : THE PAPER BARREL

Now oil was becoming “just another commodity.” Although OPEC was still trying to manage prices, it had a new competitor—the global market. And, specifically, a new marketplace emerged to help buyers and sellers manage the risk of fluctuating prices. This was the New York Mercantile Exchange—the NYMEX. The exchange itself wasn’t exactly new. It had actually begun its life as the Butter and Cheese Exchange, founded in 1872 by several dozen merchants who needed a place to trade their dairy products. It soon expanded its offerings and became the Butter, Cheese, and Egg Exchange. By the 1920s, in a little-noticed innovation, egg futures were added to the trading menu, at what was now the more grandly renamed New York Mercantile Exchange.

By the 1940s the NYMEX was also the trading place for a motley group of other commodities, ranging from yellow globe onions to apples and plywood. But the exchange’s mainstay was the Maine potato. The potato’s reign would not last: in the late 1970s, scandals hit the Maine potato contract, including the mortifying failure of the potatoes to pass the basic New York City health inspection. It looked as if the exchange was going to go under. Just in time, the NYMEX started trading futures contracts in home heating oil and gasoline. This, however, was only the beginning.

March 30, 1983, was the historic day when the exchange began trading a futures contract for light, sweet crude, tied to a stream called West Texas Intermediate—WTI—and linked back to those tanks in Cushing, Oklahoma. Now the price of oil was being set by the interaction of the floor traders at the NYMEX with other traders and hedgers and speculators all over the world. Thus began the “paper barrel.” As technology advanced over the years, the price would be set not only daily and hourly, but eventually on a second-by-second basis.



HEDGERS VERSUS SPECULATORS

Today’s futures markets go back to the futures markets for agricultural products established in the nineteenth century in Midwestern cities of the United States. By availing himself of the futures market, a farmer planting his spring wheat could assure himself of his sales price for the following fall. He might lose the upside if the price of wheat shot up. But by using futures, he avoided financial ruin in case a bumper crop tanked the price.

The petroleum futures market on the NYMEX now provided what is called a “risk-management tool” for people who produced oil or who used it. An airline would buy contracts for oil futures to protect itself against the possibility of rising prices of the physical commodity. It would put down a fraction of the cost of a barrel for the right to buy a hundred contracts—equivalent to 100,000 barrels—a year or two years from now at the current price. The price of oil—and jet fuel—might go up 50 percent a year from now. But the futures contracts would have gone up by about the same value, and the airline could close out its position, accruing the same amount as the price increase—minus the cost of buying the futures. Thus the airline would have protected itself by buying the futures, although putting the hedge in place did cost money. But that cost was, in effect, what the airline was willing to pay to insure itself against a price increase.
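The arithmetic of that airline hedge can be sketched in a few lines. This is illustrative only: the prices and the $2-a-barrel hedging cost are hypothetical assumptions; the 1,000-barrel contract size is the NYMEX convention, consistent with the text’s hundred contracts equating to 100,000 barrels.

```python
# A minimal sketch of the airline hedge described above. All prices and
# the hedging cost are hypothetical; only the contract size is the
# market convention (100 contracts = 100,000 barrels).

CONTRACT_SIZE = 1_000  # barrels per NYMEX crude futures contract

def long_hedge_payoff(price_then: float, price_now: float,
                      contracts: int, cost_per_barrel: float) -> float:
    """Net gain on a long futures hedge when prices rise.

    The gain on the futures offsets the higher cost of the physical
    fuel, minus what it cost to put the hedge in place.
    """
    barrels = contracts * CONTRACT_SIZE
    return (price_now - price_then) * barrels - cost_per_barrel * barrels

# Oil at $60 when the hedge is placed, 50% higher ($90) a year later:
print(long_hedge_payoff(60.0, 90.0, contracts=100, cost_per_barrel=2.0))
# -> 2800000.0: roughly the $3 million jump in fuel cost, less $200,000 of hedge cost
```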

For an airline, or an independent oil producer protecting itself against a fall in the price, or a home-heating-oil distributor worrying about what would happen in the winter, someone needed to be on the other side of the trade. And who might that person be? That someone was the speculator, who had no interest in taking delivery of the physical commodity but was interested only in making a profit on the trade by, as the NYMEX puts it, “successfully anticipating price movements.” If you wanted to buy a futures contract to protect against a rising price, the speculator would in effect sell it. If you wanted to sell to protect yourself against a falling price, the speculator would buy. The speculator moved in and out of trades in search of profits, offsetting one position against another. Without the speculator, the would-be hedger cannot hedge.8

Often, it seems, the word “speculator” is confused with “manipulator.” But “speculation” is, in its use here, a technical term with rather precise meaning. The “speculator” is a “non-commercial player”—a market maker, a serious investor, or a trader acting on technical analysis. The speculator plays a crucial role. If there is no speculator, there is no liquidity, no futures market, no one on the other side of the trade, no way for a hedger—the aforementioned airline or oil producer or the farmer planting his spring wheat or the multinational company worried about currency volatility—to buy some insurance in the form of futures against the vagaries of price and fortune.

Futures and options trading in oil rose from small amounts in the mid-1980s to very large volumes. By 2004 trading in oil futures on the NYMEX was 30 times what it had been in 1984. Similar growth was registered on the other major oil futures market. This was the ICE exchange in London, originally called the International Petroleum Exchange, where Brent, the North Sea oil stream, is traded. The Brent contract in London and the “sweet crude” contract in New York became the global standards for oil against which other crudes were benchmarked. WTI was oriented toward North America; Brent, toward the Eastern Hemisphere. Later a Dubai contract was introduced in the Middle East.

After the stock market bust of 2000, investors wanted to find alternative investments. It was observed at the time that the prices of commodities did not move in coordination with other investment choices; that is, they were not correlated with stocks and bonds. So according to theory, if the value of a pension fund’s equity holdings declined, the value of the commodities would not. They might even go up. Thus commodities would protect portfolios against declines in stock markets and help pension funds to assure the returns on which their retirees depended. In the years that followed, diversification into commodities became a major new investment strategy among many pension funds.
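The reasoning behind that diversification strategy can be written as one standard formula. This is textbook portfolio notation, not anything specific to these funds: for a two-asset portfolio with weights \(w_1\) and \(w_2\), volatilities \(\sigma_1\) and \(\sigma_2\), and correlation \(\rho\), the portfolio variance is

\[ \sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho\,\sigma_1\sigma_2. \]

If commodities are uncorrelated with stocks (\(\rho \approx 0\)), the cross term vanishes and total portfolio risk falls; if \(\rho < 0\), it falls further. That, in essence, was the theory that pulled pension money into commodities.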

Investors were trying to purchase other forms of “insurance” as well. A large European state pension fund, for instance, was buying futures contracts to protect its portfolio against, as its chief investment officer put it, “a conflict in the Middle East”—which really meant a war involving Iran. Were such an event to occur, the value of the fund’s equity holdings would likely drop dramatically, while oil prices would likely soar. This pension fund thought it was acting as a prudent investor, hedging its portfolio against disruption and allocating among assets to protect its retirees. But, by the definition of the futures market, it was a speculator.9



THE “BRICs”: THE INVESTMENT OPPORTUNITY OF A GENERATION

Putting money to work in oil-based financial instruments was also seen as a way to participate in the greatest economic trend of a generation: globalization and economic growth in China, India, and other emerging markets.

In November 2001 an economist at Goldman Sachs, Jim O’Neill, put out a research paper hatching a new concept: “the BRICs”—Brazil, Russia, India, and China. These four large-population economies, he said, were destined to grow faster than the main industrial economies. He made the startling prediction that within a few decades they would, as a group, overtake the combined GDP of the United States and the world’s five other largest economies.

O’Neill came to the BRICs idea in the aftermath of 9/11. “I felt that if globalization were to thrive, it would no longer be American-led,” he said. “It had to be” based on the reality that “international trade lifts all.” There was also what he called the “odd insight” that provided a lightbulb moment: on flights to China, he had noticed continuing improvements in the standards and quality of service, rising toward world levels. “Rightly or wrongly, I associated that with China’s involvement.” Something new was happening in the world economy.

Initially, many people found the whole concept of BRICs wacky. They shook their heads and asked what these diverse countries could possibly have in common. “They thought it was just some kind of marketing gimmick,” said O’Neill. But by 2004 the concept of BRICs was providing a different—and powerful and compelling—framework for looking at the world economy and international growth. Competing banks, which had previously made fun of the idea, were now launching their own BRIC funds. And in the ultimate stamp of approval, leaders of the four BRIC-anointed countries eventually started to meet for their own exclusive BRICs-only summits.

“BRICS,” said the Financial Times, became “a near ubiquitous term, shaping how a generation of investors, financiers and policymakers view the emerging markets.” Investors started to buy equities linked to the BRICs. They also bought financial instruments linked to oil. For the growth of these countries—especially the “C,” China—was driving the demand for commodities and thus prices. Thus for investors—whether running hedge funds or pension funds, or retail investors—the commodity play was not just about oil itself, but about the booming economies that were using more and more oil.10



TRADING PLACES

And now there were a lot more people in the oil market—the paper barrel part of the market—investing with neither the intention nor the need ever to take delivery of the physical commodity. There were pension funds and hedge funds and sovereign wealth funds. There were the “massive passives”—the commodity index funds, heavily weighted to oil and with all the derivative trading around them. There were also exchange-traded funds; there were high-net-worth individuals; and there were all sorts of other investors and traders, some of them in for the long term, and some of them very short term.

Oil was no longer just a physical commodity, required to fuel cars and airplanes. It really had become something new—and much more abstract. Now these paper barrels were also, in the form of futures and derivatives, a financial instrument, a financial asset. As such, prudent investors could diversify beyond stocks, bonds, and real estate, by shifting money into this new asset class.

Economic growth and financialization soon came together to start lifting the oil price higher. With that came more volatility, more fluctuations in the price, which was drawing in the traders. These were the nimble players who would, with hair-trigger timing, dart in and out to take advantage of the smallest anomalies and mispricings within these markets.

This financialization was reinforced by a technological push. Traditionally, oil had been traded in the pit at the NYMEX by floor traders, wearing variously colored jackets, yelling themselves hoarse, wildly waving their arms and making strange hand gestures, all of which was aimed at registering their buys and sells. This system was called “open outcry,” and it was enormously clamorous.

But around 2005 the importance of the floor traders began to decline rapidly with the introduction of the electronic trading platforms, which directly connected buyers and sellers through their computers. Now it was just push a button and the trade was done, instantaneously. Even the “button” was a metaphor, for frequently the trade was executed by a commodity fund’s algorithmic black box, operating in microseconds and never needing any sleep, let alone any human intervention once it had been programmed. The paper barrel had become the electronic barrel.11



OVER THE COUNTER

Futures contracts on commodity exchanges were only part of the new trading world. There were also over-the-counter markets, which did not have the regulatory and disclosure requirements of the futures market. Critics dubbed them the “dark markets” because of this lack of regulatory oversight and transparency, and because they were suspicious of how they worked and of their impact. These were, after all, a form of financial derivative—a financial asset whose price is derived from one or more underlying assets. The cumulative risk and systemic impact of such derivatives could be very large because of their leverage, complexity, and lack of transparency.

The over-the-counter markets were the place for tailored, bespoke transactions in which participants could buy oil derivatives of one kind or another, specifically designed to meet a particular market need or investment strategy. Banks became the “swap dealers,” facilitating the swapping of one security, currency, or type of interest rate for another between investors. They would then turn around and hedge their own risks in swap deals on the futures markets. The over-the-counter market began to grow very substantially around 2003 and 2004. These markets had several attractive traits. It could be less expensive for hedgers to go over the counter, as the costs might be lower and more predictable. They could make deals tailored to their particular needs, specifications, and timing. For instance, someone might want to hedge jet fuel in New York Harbor, for which WTI at Cushing was not a close enough pricing approximation. It was also possible to do much larger deals without calling attention to oneself and thus prematurely forcing the price up or down, depending on the nature of the hedge.
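What a swap dealer actually facilitated can be made concrete with a stylized example. The sketch below assumes a plain fixed-for-floating commodity swap; the hedger, prices, and volume are hypothetical, and real over-the-counter deals, as noted above, were bespoke and far more varied.

```python
# A stylized fixed-for-floating commodity swap, of the kind banks dealt
# over the counter. All names and numbers here are hypothetical.

def swap_settlement(fixed_price: float, floating_price: float,
                    barrels: float) -> float:
    """Cash flow to the fixed-price payer (e.g., a jet fuel hedger).

    If the floating (market) price settles above the fixed price, the
    dealer pays the hedger the difference; if below, the hedger pays.
    Either way, the hedger's all-in fuel cost stays near the fixed price.
    """
    return (floating_price - fixed_price) * barrels

# A hedger locks in 50,000 barrels of jet fuel at $90 a barrel:
print(swap_settlement(90.0, 110.0, 50_000))  # +1,000,000: offsets costlier fuel
print(swap_settlement(90.0, 80.0, 50_000))   # -500,000: but the physical fuel was cheaper
```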

Overall, more and more money was coming into the oil market, through all the different kinds of funds and financial instruments. All this engendered increased activity, and more and more “investor excitement,” to borrow a phrase from Professor Robert Shiller, the student of financial bubbles and the explicator of the term “irrational exuberance.” Traders saw momentum in the market, which meant rising prices, and as they put money to work and prices went up, it added to the momentum, providing yet more reason to put more money to work, further fueling the momentum. And so prices kept going up.



THE BELIEF SYSTEM

There was method in all this momentum, a well-articulated belief system that explained rising prices. Or rationalized them. In his studies of bubbles and market behavior, Shiller refers to the common characteristic of what he calls “new era thinking”—the conviction that something new and different has arrived that justifies a rapid rise in asset prices in a particular market. New era thinking has been a consistent feature of bubbles—in stock markets and real estate and many other markets—going back to tulips in Holland in the early 1600s and the South Sea Bubble in the early 1700s. “A set of views and stories are generated that justify continuation of the bubble,” says Shiller. “But it’s not perceived as a bubble.”12

In the case of the oil market, an explanatory model, a set of new-era beliefs, took particular hold on the financial community with an almost mesmerizing effect. The beliefs came in the form of catechisms:

That oil was going to be in permanently short supply (just the opposite of a decade earlier).

That the world was running out of oil.

That China was going to consume every barrel of oil that it could get its hands on—and then some.

That Saudi Arabia was misleading the world about its oil reserves, and that Saudi production, the great balancer of world markets, would soon begin to decline.

That the world had reached, or would soon reach, “peak oil”—maximum output—and the inevitable decline in output would begin even as the world wanted more and more oil.

The last—“peak oil”—was the great unifying theme that tied all the rest together. As prices climbed, this view became more and more pervasive, especially in financial markets, and in a great feedback loop, reinforced bullish investor sentiment and helped to push prices up further.

For all the above reasons together, it made sense, powerful sense, for prices to keep going up. That, after all, is what the most publicized predictions said would happen. Data that did not fit the model—for instance, an analysis of eleven hundred oil fields that failed to find a “peak” on a global basis—were disregarded and dismissed.13



DOES PRICE ACTUALLY MATTER?

At this point the oil world split in two. Some thought that prices did not matter, and some thought they did. Those who thought “not” worked on the assumption that prices would continue to go up, for all the reasons noted above, with little impact on consumers and on producers—and on the global economy.

Those who believed that prices still mattered were pretty sure that the impact would be felt, though perhaps not immediately. But rising prices would eventually do what they always did—encourage more supply and investment, stimulate alternatives, and damp down demand growth. They also feared that rising prices would have a wider cost in terms of reduced economic growth or even recession, which in turn would also bring down demand.

Yet that latter position seemed to be losing the argument. On the first trading day of 2007, WTI had closed at $61.05. A year later, on the first day of trading, January 2, 2008, oil briefly hit $100 and then slid back. A month later it broke decisively through $100. And kept going. The oil fever that had struck Cushing, Oklahoma, after 1912 was coming back in 2008 as a global epidemic.14

It was in the last part of 2007 and around the beginning of 2008 that the forces driving the oil price up shifted decisively from the fundamentals into something else—“hyperappreciation in asset prices.” Or what is more colloquially known as a bubble.



“GOING TO EXPLODE”

Even the biggest, most sophisticated institutional investors were embracing commodities. In February 2008, CalPERS, the California State retirement fund, the largest pension fund in the United States, announced that it now deemed commodities part of a distinct asset class. As a result it was going to increase its commitment to “commodities” as much as sixteenfold. “The actual importance of the energy and materials sector we believe is going to explode,” CalPERS’s chief investment officer had previously explained.

Gasoline prices in the United States finally broke through the $3-a-gallon barrier in February 2008 and headed higher. By April 2008, 70 percent of Americans described higher gasoline prices as a financial hardship and blamed “greedy oil companies” for “gouging the public.” A month later gasoline breached $4 a gallon. The public was agitated and enraged; gasoline prices dominated the news; they looked set to become an issue in the presidential campaign. They had already become the subject of a host of congressional hearings. In a deliberate replay of the political theater that had followed the 1973 oil crisis, oil company executives were summoned to congressional hearings, made to raise their right hands and put under oath, and then interrogated for hours. But now the executives were no longer alone. Fund managers and executives from the financial industry were also called to testify. The Commodity Futures Trading Commission, which regulates futures, was charged with assessing whether new controls on speculators were required.

Still the drumbeat of predictions continued, as though casting and recasting a spell. A Wall Street analyst predicted that the coming superspike made $200 oil “increasingly likely” within the next two years.

That forecast struck terror into the heart of the airline industry, which was reeling from the effects of the surge in jet fuel, made even worse by constraints in the refining system. “Scary” was the one-word reaction of David Davis, Northwest Airlines’ chief financial officer at the time. “We kept saying to ourselves that the price had to fall back, but it kept going up. The market was looking for any opportunity to take the price up.”15



“YOU NEED BUYERS”

In the middle of May—with oil prices now the top domestic political issue in the United States—President George W. Bush went to Saudi Arabia. There, at a meeting at the ranch of Saudi King Abdullah, Bush talked about the risks to the world economy of rising prices. He urged the Saudis to lift output to help cool the fever. He did not get the answer he wanted. The Saudis had already upped production by 300,000 barrels per day but were having trouble finding customers. “If you want to move more oil, you need a buyer,” said Saudi petroleum minister Ali Al-Naimi. After the meeting, the president’s national security adviser, Stephen Hadley, ruefully commented, “There is something going on in the oil market that is much more complicated than just turning on the spigot.” There was no relief after Riyadh. The price of oil kept going up. “One concern that has prompted traders to bid up oil prices,” reported the Wall Street Journal from Jeddah, “is Saudi Arabia’s long-term production capacity. Some analysts believe the kingdom’s best fields could hit a production peak in the years ahead.”

At almost exactly the same time, one of the most prominent Wall Street oil analysts added to the fever with a report declaring that a “structural re-pricing” of oil—reflecting long-term expectations for shortage of oil and “continued robust demand from the BRICS”—meant a “structural bull market” on top of the “super cycle” that would take prices “to ever-higher levels.” The surge continued. By the end of May, oil prices had hit $130. New car sales in the United States were plummeting.16



“OIL DOT-COM”

A few contrarian voices on Wall Street warned that these prices had become seriously divorced from reality. Edward Morse, a veteran analyst, in a paper titled “Oil Dot-com,” wrote: “As during the dotcom period, when ‘new economy’ stocks became popular, a growing band of Wall Street analysts who are significantly raising” their forecasts were “partially responsible for new investor flows, driving... prices to perhaps unsustainable levels.” He continued: “We are seeing the classic ingredients of an asset bubble. Financial investors tend to ‘herd’ and chase past performance.... But when peak prices hit, they are also likely to fall precipitously. That’s the way cyclical turning points always occur.” But the analysis could only go so far. “Getting that timing right is the difficult part,” he added.

Morse did not sway many people. Some of his clients did not merely disagree; they literally shouted at him that he was wrong. The price continued its sharp ascent. Ever more money flooded into the market on the premise that prices would climb still higher. “Perhaps the biggest ramification of current oil prices is the stoking of fears over ‘peak oil,’ ” said one petroleum industry publication. “This mindset has spurred investors to buy.”17



“IT NEEDS TO STOP”

There seemed no respite. High gasoline prices—combined with the imminence of Memorial Day and the opening of driving season—infected the entire nation with a virulent case of road rage. That made it an “ideal time,” said the New York Times, “for Congress to show its solidarity with angry American motorists.” At one hearing, a congressman bluntly told oil company executives, “You are gouging the American public and it needs to stop.” Another announced that the industry should be nationalized outright.

At a hearing on the other side of Capitol Hill, a senator asked the empaneled oil company executives, “Does it trouble any of you when you see what you are doing to us?” One executive tried to frame a reply: “I feel very proud of the fact that we are investing all of our earnings. We invest in future supplies for the world, so I am proud of that.”

“You,” snapped another senator, “have no ethical compass about the price of gasoline.”18



CHINA IN 2014

In other parts of the world, high prices were seen as a boon. Every year in June in St. Petersburg, during the white nights, when it is light even at midnight, the Russian government hosts its own version of Davos—the St. Petersburg Economic Forum. The setting is the sprawling, modernistic Lenexpo congress center, which juts out into the Gulf of Finland and looks toward the Baltic Sea. In June 2008 Russia was booming from the high oil and natural gas prices, and that was reflected in the buoyant atmosphere of the forum. Wall Street may have been showing signs of growing distress. But, seen from St. Petersburg, that was only further reason for the global financial markets to become more anchored in Russia and the other BRICs.

Over coffee between sessions, the head of a very large commodities-trading firm was asked why he thought prices were still going up. He had a very clear explanation: as markets generally do, he replied, the oil market was anticipating what would happen in the future. In this case, it had pulled forward into 2008 the prices that would be associated with China’s huge oil demand in the year 2014. It just seemed so obvious.

A few days later, the head of one of the world’s largest state-owned energy companies declared that oil would hit $250 a barrel in the “foreseeable future.” Were that to happen, a leader of the travel industry said in reply, the airline industry would collapse and would have to be nationalized. Otherwise there would be no planes in the air.

On June 15, oil prices reached $139.89. The airline industry certainly had its back against the wall. In earlier years, fuel prices had been about 20 percent of operating costs; now they were up around 45 percent, bigger even than labor costs. Bankruptcies seemed inevitable—the only way out.19



JEDDAH VERSUS BONGA

On Sunday, June 22, a hastily organized conference involving 36 countries convened in Jeddah, Saudi Arabia, at the invitation of King Abdullah. The Saudis, among others, were acutely concerned with what oil prices would do to the demand for oil and to the world economy, in which they had a very significant stake.

To open the conference, King Abdullah and British Prime Minister Gordon Brown entered, side by side, to the music of a military band. But there was little harmony. The producers blamed the prices on “speculators” and said that there was no shortage of crude oil. The consuming countries blamed the prices on a shortage of crude oil. The Saudis announced that they would put another 200,000 barrels a day into the market if they could find buyers. But that would take time. The next morning the price in Singapore opened higher than it had closed in New York the previous Friday.

Within hours of the Jeddah meeting, a dramatic reminder of the physical risks to supply shook the market and added to the widespread anxiety. One third of Nigeria’s production was already shut in by violence and criminal attacks. But surely, it was thought, the new multibillion-dollar offshore projects were secure from assault, insulated from violence by their distance from land. That sense of security was misplaced.

Members of MEND, the Movement for the Emancipation of the Niger Delta, moving fast in heavily armed speed boats, evaded such security as there was and launched an attack on Bonga, the most prominent of all the platforms, 70 miles from the shore. They managed to climb onto the platform, but they were repelled before they could blow up the computerized control room. It was a close call, and a very scary one. The Bonga attack sent new shockwaves through the market. In an e-mail to journalists, a spokesman for MEND warned, “The location for today’s attack was deliberately chosen to remove any notion that offshore oil production is far from our reach.” Bonga trumped Jeddah and prices continued to go up.20

The physical market had turned. Although hardly recognized, the demand shock was over. World oil demand was going down and supply was increasing. Spare capacity—the gap between world capacity and world demand—was beginning to widen. But none of that seemed to matter. Prices continued to rise. “I kept staring at my Bloomberg, looking at the prices all the time,” recalled the CFO of Northwest Airlines. “It was unbelievable.”

And it was all happening very fast. “This is like a highway with no cops and no speed limits, and everybody is going 120 miles per hour,” lamented one senator, citing a Wall Street analyst, at a hearing on June 25. By the beginning of July, prices exceeded $140. Prediction after prediction reinforced the conviction that they would go higher still, as the crescendo of incantations reverberated around the world.21



BREAK POINT

In truth, the gears had already started to grind in the other direction. The break point was at hand. Prices did matter after all. They mattered economically—and, as the public’s ire and fear rose, they mattered politically.

The most immediate evidence of the break point would be in the decisions by energy users—whether large industrial firms, which found new ways to reduce energy use; or airlines, which cut back on the number of planes in the air; or individual consumers, who could change their behavior.

And that consumers were doing. They were driving less. In June 2008 California motorists used 7.5 percent less gasoline than in June 2007. Consumers were also voting with their feet. They were no longer walking into auto showrooms, and when they were, they were steering clear of SUVs. They wanted fuel-efficient vehicles, if they wanted anything at all. That left Detroit, which had focused on the popular SUVs, scrambling to try to gear up to produce the cars that consumers now desired and that would meet the new fuel-efficiency targets—something that would take billions of dollars and several years to implement. The torrid romance with the SUV had suddenly gone cold. The oversize Hummers were becoming targets of vandalism.22

Meanwhile, oil companies were dramatically increasing their spending to develop new supplies, although they had to contend with the big increase in costs. The market was no longer tight. World oil supply in the first quarter of 2008 was more than a million barrels higher than it had been in the first quarter of 2007. In June 2008 U.S. oil demand was a million barrels less than it had been in June 2007. These prices were providing both a political and commercial stimulus to the longer-term development of renewables and alternatives.



CHANGING THE CAR FLEET

The turmoil in the market had a major impact on public policy and on the politics of energy, and nowhere more significantly than in regard to the American automobile.

The United States has the world’s largest auto fleet—about 250 million vehicles out of a global total of 1 billion. Despite growth in emerging markets, one out of every nine barrels of oil used in the world every day is burned as motor fuel on American roads. In 1975, in the aftermath of the first oil crisis, fuel-efficiency standards were introduced, requiring a doubling from the then-average of 13.5 miles per gallon to 27.5 miles per gallon over ten years. And there the standards sat for more than three decades, with only minor tinkering around the edges.23

But circumstances were changing. In his 2006 State of the Union Address, President George W. Bush denounced what he called the nation’s “addiction to oil.” And new players became engaged. The most notable was the Energy Security Leadership Council, an affiliate of another group, SAFE—Securing America’s Future Energy. The council was chaired by P. X. Kelley, a former commandant of the Marine Corps, and Frederick Smith, the founder and CEO of FedEx. The members were retired military officers and corporate leaders, not the environmentalists and liberals who had traditionally campaigned for higher fuel-efficiency standards.

In December 2006 the council issued a report advocating a balanced energy policy. Raising auto-fuel standards was the first chapter. Five weeks later, to Detroit’s shock and notwithstanding opposition from within his own administration, Bush used his 2007 State of the Union to endorse a fuel-efficiency increase. A week later, Bush met with some of the council’s members. The president made clear the geopolitical thinking behind his energy policies. He wanted, he said, to get Iranian president Mahmoud Ahmadinejad and Venezuelan president Hugo Chávez “out of the Oval Office.”

The council took its campaign to the Senate. At one hearing, council member Dennis Blair, a retired admiral and former commander of the Pacific Fleet (and later director of National Intelligence in the Obama administration), argued that excessive dependence on oil for transportation was “inconsistent with national security” and that nothing would do more to reduce that dependence than to “strengthen fuel economy standards.”24

Fuel-efficiency standards were no longer a left-right issue. Now they were a national security issue and a broad economic issue. New standards flew through both houses of Congress. In December 2007, almost exactly one year to the day after the Energy Security Leadership Council’s report, Bush signed legislation raising fuel-efficiency standards—the first such increase in 32 years.

Of course, the new fuel-efficiency standards would take years to make a sizable impact. Automakers would have to retool, and then, in normal years, only about 8 percent of the vehicle fleet turns over annually. But when their impact was felt, it would be very large.
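The slow arithmetic of fleet turnover can be sketched directly. The 8 percent annual turnover comes from the text; the mileage figures are illustrative assumptions, and averaging miles per gallon directly is itself a simplification (weighting by fuel consumed would be more precise).

```python
# A back-of-the-envelope sketch of how slowly new fuel-economy standards
# reach the road when only ~8% of the fleet turns over each year.
# The mpg figures are illustrative; simple mpg averaging is a simplification.

def fleet_mpg(old_mpg: float, new_mpg: float, turnover: float, years: int) -> float:
    """Average fleet fuel economy after `years` of replacing a fixed
    fraction of the fleet annually with vehicles meeting the new standard."""
    share_old = (1 - turnover) ** years  # fraction of the original fleet remaining
    return share_old * old_mpg + (1 - share_old) * new_mpg

for years in (1, 5, 10):
    print(years, round(fleet_mpg(25.0, 35.0, 0.08, years), 1))
# 1 -> 25.8, 5 -> 28.4, 10 -> 30.7: the full impact takes a decade or more
```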



THE GREAT RECESSION

What was happening in the economy would also lower the demand for oil. The Great Recession, at least in the United States, is now reckoned to have begun in December 2007, well before almost anyone had recognized it. It was primarily a credit recession, the result of too much debt, too much leverage, too many derivatives, too much cheap money, too much overconfidence—all of which engendered real estate and other asset bubbles in the United States and other parts of the world.

But the surge in oil prices was an important contributing factor to the downturn. Between June 2007 and June 2008, oil prices doubled—an increase of $66, in absolute terms a far bigger jump than had ever been seen in any of the previous oil shocks, going back to the 1970s. “The surge in oil prices was an important factor that contributed to the economic recession,” observed Professor James Hamilton, one of the leading students of the relation between energy and the economy. The oil price shock interacted with the housing slowdown to tip the economy into recession. The sudden increase of prices at the pump took purchasing power away from lower-income groups, making it more difficult for many of them to make payments on their subprime mortgages and their other debts. The higher cost of the gasoline they needed to get to work meant trade-offs in what they could spend elsewhere. The effects also showed up, as Hamilton has noted, in “a deterioration in consumer sentiment and an overall slowdown in consumer spending.”

As gasoline prices rose, car sales nosedived. Discounting and rebates by auto dealers did little good. June 2008 was the auto industry’s worst sales month in seventeen years.

“The auto industry was under siege,” said Rick Wagoner, the former CEO of General Motors. “While we had a comprehensive scenario planning process at GM, we had no scenarios in which oil prices went up so much, so fast. People weren’t coming into showrooms as oil prices skyrocketed in part because their disposable incomes were going down. The rate and size of the decline in auto sales was unprecedented. Demand was collapsing.” Wagoner continued, “The only question was how high oil prices would go and when they came down, to what level. Our view of the future was that it was either going to be difficult or very, very difficult.”25

The effects of the downturn in the automobile industry reverberated through its supply chains and among dealers across America. Many hundreds of thousands of jobs were abruptly lost across the economy.

The direct impact was felt less in other developed countries because so much of the price at the pump is actually tax. Many European governments use gasoline stations as subbranches of their treasuries. While government tax on gasoline averages about 40 cents a gallon in the United States, it is more like $4.60 a gallon in Germany. A doubling in the price of crude oil would therefore raise the retail price in Germany by only a fraction of what it would in the United States.
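The arithmetic can be made concrete. In the sketch below, the per-gallon figures for crude cost and the refining-and-distribution margin are hypothetical round numbers chosen only for illustration; the tax levels echo the figures above.

```python
# Why high fuel taxes damp the pass-through of a crude-price doubling.
# CRUDE and OTHER are hypothetical per-gallon figures for illustration;
# the tax levels are the approximate ones cited in the text.
CRUDE = 1.75   # crude-oil cost per gallon of gasoline, USD (assumed)
OTHER = 0.60   # refining and distribution margin, USD (assumed)

for country, tax in (("United States", 0.40), ("Germany", 4.60)):
    before = CRUDE + OTHER + tax
    after = 2 * CRUDE + OTHER + tax   # crude doubles; tax stays fixed
    print(f"{country}: pump price rises about {after / before - 1:.0%}")
# United States: pump price rises about 64%
# Germany: pump price rises about 25%
```

On these assumed numbers, the same crude shock that lifts American pump prices by nearly two thirds lifts German ones by only about a quarter.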

Many developing countries subsidize retail fuel prices; oil-exporting countries, generously so. To allow prices to rise would mean social turmoil and perhaps strikes and riots. Thus these governments had to absorb the growing gap between the world price for oil and the prices that their citizens paid. Subsidies cost India’s government about $21 billion in 2009.26



SOVEREIGN WEALTH

When it was all added up, these high prices transferred a great deal of income from consuming countries to producing countries. The total oil revenues of the OPEC countries rose from $243 billion in 2004 to $693 billion in 2007. Halfway through 2008, it looked as though it could reach $1.3 trillion.

What were they going to do with all this money? Part of the answer was embodied in the initials SWF, shorthand for “sovereign wealth funds.” These were essentially government bank accounts and investment accounts set up to receive oil and gas revenues that would be kept separate from the national budget. For some countries, they were cast as stabilization funds to be held for “rainy days.” Some funds were explicitly created to prevent inflation and the Dutch disease that can result from a resource boom. These funds transformed oil and gas earnings into diversified portfolios of stocks, bonds, real estate, and direct investment.

But with oil prices rising to such heights, they had become truly giant pools of capital, swollen with tens of billions of dollars of unanticipated inflows, and now with tremendous financial capacity that would have far-reaching impact on the global economy. They faced their own particular quandary—how to invest all these additional revenues in a timely and prudent fashion. But the flip side was that their expansion meant a very large reduction of spending power in the oil-importing countries, which contributed to the downturn.



THE PEAK

Still the spell held. On July 11, 2008, oil reached its historic peak of $147.27—many times higher than the $22-to-$28 band that had been assumed to be the “natural price” for oil only four years earlier. The headlines told of more economic troubles ahead. Then something did happen. “Shortly after 10 a.m., as Mr. Bernanke was speaking to Congress,” the New York Times reported on July 16, “investors did a double-take as oil prices, previously trading at record highs, suddenly plunged.” But, said the oil bulls, it was “only a minor meltdown.”27

And then the fever broke. Demand for oil was going down in response to the higher prices. And now it was going down for another reason too. The world economy was clearly beginning to slow. The United States was already in a recession. In China’s Guangdong Province, the new workshop of the world, orders were drying up, exports were declining, and workers were being laid off. Even electricity demand in that formerly booming province was declining. That was a message with global implications, for it meant that world trade was contracting. And the world’s financial system was beginning to shudder and shake, the spasms of a coming cataclysm. Financial investors began ditching “risky” assets such as equities and oil and other commodities.

In September 2008 came the decisive event. The venerable Lehman Brothers, the fourth-largest U.S. investment bank, 158 years old, failed. No one came riding to its rescue. The insurance behemoth AIG looked as though it might go down the very next day; the Federal Reserve stepped in to save it at the last moment.



“A COLD WIND FROM NOWHERE”

In the aftermath of the Lehman collapse, the world’s financial system simply froze up. Finance stopped flowing, whether to fund the daily operations of major companies or to provide the lubricant for trade. The Great Depression of the early 1930s, which had seemed to belong distantly in history, something that happened a very long time ago, now seemed to have happened only yesterday. History books and economic texts were hurriedly scoured for immediate and urgent lessons on how to rescue a failing banking system. The crisis was turning into a global panic of the sort that had not been seen for many decades. The impact on what had been healthy economies, including the BRICs, was, as Federal Reserve Chairman Ben Bernanke said, like “a cold wind from nowhere.”


[Figure: Crude Oil Prices. Source: IHS CERA]

[Figure: U.S. Gasoline Prices. Source: IHS CERA]


In the midst of what would become known as the Great Recession, demand for oil continued to decline while supplies continued to build up. Yet even in the week that Lehman collapsed, a prediction of “$500 per barrel oil” managed to make its way prominently onto the cover of a leading business magazine. At that moment, however, oil was heading down, and precipitously so. Before the year was out, as the tanks at Cushing, Oklahoma, ran out of storage space and crude backed up in the system, the price of WTI fell to as low as $32 a barrel.

Even though prices subsequently recovered, the spell had been shattered.


For some, prudence paid off when prices came tumbling down. Indeed, there is no better example of the value to an oil producer of hedging its production forward than the sovereign nation of Mexico. Its government is very vulnerable to the price of oil, as about 35 percent of its total revenues are generated by Pemex, the state oil company. A sudden fall in the price of oil can create budgetary and social turmoil. For years, Mexico had been hedging part of its oil output. In 2008 Mexico went all out, hedging its entire oil exports and locking in a price. It was not cheap; the cost of this insurance was $1.5 billion. But when the price plummeted, Mexico made an $8 billion profit on its hedge, preserving $8 billion for its budget that would otherwise have disappeared. It could only have done that huge trade over the counter. If it had tried to do it on the futures market itself, the scale would have set off a scramble by other market participants before Mexico could even begin to get all of its hedges in place.

That transaction was, on Mexico’s part, an act of prudence but also audacity. On the basis of the transaction’s success, Mexico’s finance minister received a unique honor—he was dubbed the “world’s most successful, but worst paid, oil manager.”28
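The mechanics of such a hedge are simple enough to sketch. The snippet below is a minimal illustration, not Mexico’s actual trade: the $1.5 billion premium is the figure cited above, while the floor price and hedged volume are hypothetical round numbers chosen so that the arithmetic lands near the roughly $8 billion gain described.

```python
# Illustrative put-option hedge for an oil producer, in the spirit of
# Mexico's 2008 program. PREMIUM echoes the text; STRIKE and VOLUME are
# hypothetical round numbers chosen only to make the arithmetic visible.
PREMIUM = 1.5e9   # cost of the put options (the "insurance"), USD
STRIKE = 70.0     # assumed floor price locked in, USD per barrel
VOLUME = 330e6    # assumed barrels of exports hedged

def hedge_result(market_price: float) -> float:
    """Net gain or loss on the hedge at a given market price.

    A put pays the gap between the strike and the market price when
    prices fall below the strike, and expires worthless otherwise.
    """
    payoff_per_barrel = max(STRIKE - market_price, 0.0)
    return payoff_per_barrel * VOLUME - PREMIUM

print(f"price $100: {hedge_result(100.0) / 1e9:+.1f} bn")  # -1.5 bn (premium lost)
print(f"price  $40: {hedge_result(40.0) / 1e9:+.1f} bn")   # +8.4 bn (budget saved)
```

The asymmetry is the point: had prices stayed high, Mexico would have been out only the premium, like any insurance buyer; when they collapsed, the puts replaced the lost revenue.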


How much of what happened in the oil market can be ascribed to the fundamentals, to what was happening in the physical market, and how much to financialization and what was happening in the financial markets? In truth, there is no sharp dividing line. Price is shaped by what happens both in the physical and financial markets.29

A couple of years later, Robert Shiller, who had become prominent for calling the Internet stock bubble and then the real estate bubble, was having breakfast in the restaurant of the Study, a new hotel on Chapel Street in New Haven, before walking over to lecture in his famous Yale class on financial markets. By then, with recovery well along in the global economy, the price of oil had more than doubled from its lows, back to a range of $70 to $80 a barrel. Handed a piece of paper, Shiller looked carefully at what it showed—a plot of the movement of oil prices since 2000, culminating in the sharp ascent to the mid-2008 peak and then the precipitous fall. It was superimposed on a plot of stock prices culminating in the market boom that went bust in 2000. The fit was very tight; the two curves looked remarkably similar. But the steep, bell shape of the curve instantaneously reminded Shiller of something else as well.

“That looks very much like what happened with real estate prices,” he said. A bubble.30

The rise in oil prices had not begun as a bubble. For the price had been driven by powerful fundamentals of supply and demand; by the demand shock arising from unexpectedly strong global growth and major changes in the world economy, led by China and India; and by geopolitics and the aggregate disruption. But it was a bubble before it was over.


9

CHINA’S RISE

It was one of those sharp, cold nights in Beijing when the smell of burning, crisp and a little sweet, wafted through the dark. This was the very end of the 1990s, when the swelling hordes of cars were beginning to fill the new eight-lane highways and push the bicycles to the side. The burning still mainly came not from the cars but rather from the many hundreds of thousands of old-fashioned coal ovens throughout the city that people were still using to cook and heat their homes.

The dinner had gone on for a long time in the China Club, once the home of a merchant, and then a favorite restaurant of Deng Xiaoping, who had launched China’s great reforms at the end of the 1970s. Coal may have been in the air that night, but oil was on the agenda. With the dinner over, the CEO of one of China’s state-owned oil companies had stepped out into the enclosed courtyard with the other guests. Everybody’s overcoats were buttoned to the top against the cold. He and his management team were facing something he would never have anticipated when he started as a geologist in western China more than three decades earlier. For now they were charged with taking a significant part of China’s oil and gas industry—built to serve the command-and-control centrally planned economy of Mao Tse-tung—and turning it into a competitive company that would meet the listing requirements for an IPO on the New York Stock Exchange.

The reasons for this sharp break with the past were clear—the specter of China’s future oil requirement and the challenge of how to meet it—although that evening they could not visualize how rapidly consumption would grow. As the group paused in the courtyard outside the restaurant, the CEO was asked a pertinent question: Why go to all the trouble of becoming a public company? For then the management would be responsible not only to the senior authorities in Beijing but also to young analysts and money managers in New York City and London, and in Singapore and Hong Kong, all of whom would scrutinize and pass judgment on strategies, expenses, and profitability—and on the job they all were doing.

It wasn’t at all obvious that the CEO relished such an “opportunity.” But he replied, “We have no choice. If we are going to reform, we have to benchmark ourselves against the world economy.”

That was still the time when China was moving from being a minor player in the world oil market to something more, although how much more was not at all clear. What was clear, however, was that China was fast integrating with the world economy and beginning to transition to a new and far larger role in it.

Over the years that followed, these changes would transform calculations about the world economy and the global balance of power. Would all of this mean a more interdependent world? Or, people would ask in the years to come, would it lead to intensified commercial competition, petro-rivalry, and a growing risk of a clash of nations over access to resources and over the sea-lanes through which those resources are borne?



“CHINA RISK”

None of these questions were much in the air that night, on the very eve of the new century, at least in terms of energy. Indeed, at that moment, the prospects for the IPOs of the three state-owned companies looked problematic at best, even somewhat dubious.

The IPO for PetroChina, the new subsidiary of China National Petroleum Corporation (CNPC), the largest of the companies, would be the first one successfully out of the gate. But getting ready for the IPO was proving harder than might have been imagined. Financial accounts that could satisfy the requirements of the U.S. Securities and Exchange Commission had to be carved out and formulated from the undigested, confusing, and poorly organized data of a vast Chinese state organization that had never had to pay attention to any such metrics—and certainly never had any reason to heed the U.S. agency that regulated the New York Stock Exchange. Management knew that a whole new set of values and norms had to be inculcated into the organization. Add to this the fact that some of the company’s overseas investments were generating protests, and the picture became exceedingly unclear. It took a long prospectus—384 pages—to spell out all the risks.1

For their part, the international investors in the United States and Britain, and even those closer to China, in Singapore and Hong Kong, were skeptical. They worried about the China risk—uncertainty about the political stability and economic growth of the country. Also, this was an oil company at a time when the new economy—the Internet and Internet stocks—was booming. By contrast, the oil business was seen as quintessential old economy—stagnant, uninteresting, and stuck in what was thought to be the doldrums of permanent overcapacity and low prices.

As 2000 began, the appetite of global investors appeared tepid. The IPO was scaled back, substantially. But, finally, in April 2000 it went forward, though just barely, and PetroChina was launched as a public entity, partly owned by international investors but still majority owned by CNPC.

Over the next year, it was followed by the IPOs of the other two companies also cut from the once-monolithic ministries—Sinopec (the China Petroleum and Chemical Corporation) and CNOOC (China National Offshore Oil Corporation). They received the same tepid welcome. But as the years went on, the skepticism among investors disappeared, and with good reason. A decade after its IPO, PetroChina’s market capitalization had increased almost seventy times over. Its market value by that point was greater than that of Royal Dutch Shell, a company a century older; greater than Walmart’s; and second only to ExxonMobil’s.

That increase in value calibrates the growing importance of the People’s Republic of China (PRC) in the balances of world energy and the rise of China itself. Since reforms began in 1979, more than 600 million Chinese people have been lifted out of grinding poverty, with as many as 300 million people reaching the middle-income level. Over that same time, China’s economy has grown more than fifteenfold. By 2010 it had overtaken Japan’s to become the second-largest economy in the world.2



“THE BUILD-OUT OF CHINA”

This great economic expansion has changed China’s oil position. Two decades ago China was not only self-sufficient in oil but an actual exporter of petroleum. Today it imports about half of its oil, and that share will go up as demand increases. The People’s Republic of China is now the second-largest oil consumer in the world, behind only the United States. Between 2000 and 2010, its petroleum consumption more than doubled. All this reflects what happens when the economy of a nation of 1.3 billion expands at 9 or 10 or 11 percent a year—year after year after year.

As China continues to grow, so will its oil demand. Sometime around 2020 it could pull ahead of the United States as the world’s largest oil consumer. It is an almost inevitable result of what can be described as the “great build-out of China”—urbanization at a speed and scale the world has never seen, massive investment in new infrastructure, and mass construction of buildings, power plants, roads, and high-speed rail lines—all of it reshaping China’s economy and society.

This build-out of China over the next two or three decades will be one of the defining forces not just for China but for the world economy. It is certainly one of the main explanations for a long-lasting boom in commodities. China’s urban population is growing very fast. In 1978 the country was only 18 percent urbanized. Today it is almost 50 percent urbanized, with more than 170 cities of more than a million people, and a number of megacities with populations exceeding 10 million. Every year another 20 million or so Chinese move from the countryside looking for work and housing and a higher standard of living. Asked by George W. Bush what worry kept him up at night, President Hu Jintao said that his biggest concern was “creating 25 million new jobs a year.” That was the basic requirement for both development and social stability.3

As a result of this build-out, the country has become a vast construction site for homes and factories and offices and public services, requiring not only more energy but also more commodities of all kinds—a seemingly endless demand for concrete, steel, and copper wiring. An expansion on this scale will likely mean real estate booms and bubbles and busts. Only when the build-out is largely finished and China is mainly urbanized, sometime in the 2030s or 2040s, will the tempo of demand slow.

All this growth, all this new construction, all these new factories, all these new apartments and their new appliances, and all the transportation that comes with this—all of it depends upon energy. This is on top of the huge energy requirements of all the factories that make China the world’s leading manufacturing country and supplier of goods to the global economy. It all adds up—more coal, more oil, more natural gas, more nuclear power, more renewables. Today coal remains the backbone of China’s energy. But in terms of the relationship with international markets and the world economy, the dominating factor is oil.



GROWTH AND ANXIETY

China’s rapid growth in oil demand generates great anxiety, both for China and for the rest of the world. For Chinese oil companies, and the government, assuring sufficient oil supplies is a national imperative. It is crucial to Beijing’s vision of energy security—guaranteeing that shortages of energy do not constrain the economic growth that is required to reduce poverty and tamp down the social and political turbulence that could otherwise ensue in such a fast-changing society. At the same time, a sharp awareness has developed that rising energy demand must be balanced with greater environmental protections.

In other countries, some fear that the Chinese companies, in their quest for oil, could preempt future supplies around the world—and deny access to other countries. Some also worry that the inevitable growth in Chinese demand, along with that of other fast-growing emerging markets, will put unbearable and unsustainable pressure on world oil supplies—leading to global shortage.

These anxieties suddenly burst into view in 2004—the year of the global demand shock, when world oil consumption grew in a single year by what normally would have been the growth over two and a half years. The surge in Chinese consumption was one of the central elements in the jump in demand.

The demand shock forced perceptions to catch up with a fundamental reality. Until then, many had seen China mainly as a low-cost competitor, a manufacturer of cheap goods, a challenge to wages in industrial countries, and the supplier for the shelves in Walmart and Target and other discount stores around the world. China, with its low costs, had become the Great Inflation Lid, giving central bankers the comfort to allow faster economic growth than they otherwise would have felt safe permitting.

But now one also had to look at China as a market of decisive importance, with the heft to significantly affect the supply and demand—and, therefore, the price—of oil, along with other commodities and all sorts of other goods. Until 2004 it would never have occurred to motorists in the United States or Europe that the prices they paid at the pump could be so strongly influenced by bottlenecks in coal supplies and shortages of electricity in China that would force a sudden switch to oil. And it certainly would never have occurred to the management of General Motors, the prototypical American car company, that within just a few years it would be selling more new cars in China than in the United States. But such is the new reality of today’s global economy. This is also true for trade in general. China is the biggest export market for countries like Brazil and Chile—not necessarily surprising for countries that export commodities. For countries like Germany, China is now also a key export market.

For the oil market, there is only one meaningful analogy for China’s rapidly growing importance. It was the massive growth in petroleum demand—and imports—in Europe and Japan in the 1950s and 1960s that resulted from the rapid economic growth during the years of their economic miracles. That growth in demand certainly had a transformative impact on the world energy scene and on global politics.

But there is a risk around this change in the balance in the world oil market: that commercial competition could turn into a national rivalry that gets cast in terms of “threats” and “security,” disrupting the working relationships that the world economy requires. As always in international relations, the danger is that miscalculation and miscommunication can in turn escalate security “risks” into something more serious—confrontation and conflict.

This emphasizes the importance of not recasting commercial competition into petro-rivalry and a contest of nation-states. After all, change is inevitable as a result of China’s rapidly growing economy and of the new balance that will follow. Moreover, the global oil and gas markets do not exist in a vacuum. They are part of a much larger and ever more dense network of economic linkages and connections, including huge trade, financial, and investment flows—and, indeed, flows of people. These connections, of course, generate their own tensions, particularly around trade and currencies. Yet overall, the mutual benefits and common interests far outweigh the points of conflict.

Whatever the tensions today, this degree of integration and collaboration would have been inconceivable in the earlier era of confrontation, when Mao proclaimed that “the east is red” and the Bamboo Curtain closed off China from the rest of the world.



“POOR IN OIL”

On a Sunday night, from the top floor of the China World Hotel, one looks down at endless streams of headlights gliding from the four lanes in each direction of Chang’an Avenue, Beijing’s most important road, onto the elevated Third Ring Road expressway, which is constantly at capacity. This is the new China. Satisfying these streams of demand is part of China’s preoccupation when it comes to oil.

There was no way that Zhou Qingzu, the venerable chief economist of the China National Petroleum Corporation, could have imagined the panorama he was watching, twenty floors down, when he joined the oil industry as a geologist in 1952. At that time, China’s entire production was less than 3,500 barrels a day. As his first assignment, he was sent to China’s far west to join an early exploration effort. He was one of just a small handful of geologists going into an industry whose prospects were hardly promising. Decades earlier, after World War I, a Stanford University professor had delivered what had been taken as the definitive verdict: “China will never produce large quantities of oil.” The meager experience of the succeeding decades seemed to bear out that conclusion.

Yet after the Second World War, no one could doubt that oil was essential for a modern economy—and for military might and political power. But China had virtually no oil of its own and had to depend on imports to meet its needs. Following the victory of Mao Tse-tung’s communist revolution in 1949, the United States sought to limit Western oil exports to China and then, after the outbreak of the Korean War, to cut them off altogether, which constrained Chinese military operations during the war. “Self-reliance” became an urgent imperative, and Mao’s five-year plans made the development of the oil industry a very high priority. Despite disappointing results from exploration, the Chinese leadership simply refused to accept that China was “poor in oil.”

The Chinese Revolution did have one asset on which to draw in the search for oil—its fraternal relations with its communist brethren, the Soviet Union, which was a large oil producer. “We were just getting started,” recalled Zhou. “Our major teachers were the Russians. We called the Russians ‘our big brothers.’ ” The Soviets sent experts, equipment, technology, and financial aid to China, and a whole generation of young Chinese went off in the other direction, to Moscow, to be trained in petroleum.4

Some new fields were developed in the remote west, with Soviet help, but the overall results, as Zhou found from personal experience, were almost negligible. Pessimism was so rife that some Chinese experts thought the country should turn to synthetic oil, making petroleum from its abundant coal resources, as the Germans had done during the Second World War.



DAQING: THE “GREAT CELEBRATION”

But then, unexpectedly, in the grasslands of the northeast, in Manchuria, a vast new oil field was found. It was called Daqing—which means “Great Celebration.”

The development of the field, arduous as it was, became even more difficult when the “brotherhood” with the Soviet Union splintered and the two countries became bitter rivals for leadership of the communist world. Moscow abruptly pulled out its people and equipment, and demanded repayment of debts. Mao repaid the Soviets in vituperation, denouncing them as “renegades and scabs . . . slaves and accomplices of imperialism, false friends and double-dealers.”

The Chinese were now on their own for Daqing. No modern technology. No nearby urban areas. No housing. Thousands and thousands of oil field workers were hastily dispatched like troops in a military campaign. Despite the harsh cold, they slept in tents or huts or holes in the ground or just out in the open; they used candles and bonfires for light and heat; they scrounged the countryside for wild vegetables. Operations were headquartered in cattle sheds. And they worked terribly hard. To make matters worse, the Soviets reduced their oil exports to China. “Once imports are cut off, airplanes could be forced to stop flying,” warned one senior official, “certain combat vehicles could be forced to stop operating.” He added, “We should not rely on imports again.” From then on, self-sufficiency and the determination represented by the “Spirit of Daqing” became the guiding principles of China’s oil development.5



“IRON MAN” WANG

The embodiment of the Spirit of Daqing became a driller named Wang Jinxi. He achieved fame across China as the “Iron Man of Daqing oilfield” and was celebrated as the “national model worker.” According to legend, when Wang had once visited Beijing, he had seen buses with large units on top that burned coal to make gas to power the vehicles. To Wang, this clear evidence of China’s shortage of oil was an outrage. “I simply want to now open the earth with my fist,” he declared, “to let the black oil gush out and dump our backwardness in petroleum into the Pacific.”

Wang’s team drilled at a furious rate. Wang himself would not be stayed. After one injury, it is said, he crept out of the hospital and went back to the drilling site, where he directed operations from his crutches. In his most famous exploit, in order to prevent a blowout that would have destroyed the drilling rig, he ordered bags of cement to be poured into a pit. Since there was no mixer, Wang jumped in and mixed the cement with his legs, forestalling the blowout and further injuring himself. Following the success of Daqing, Premier Zhou En Lai welcomed Iron Man Wang and his fellow Daqing workers to Beijing as national heroes. Mao himself declared that Chinese industry should “learn from the Daqing oil field.”

Many other fields followed, the pace pushed by a famous oil minister and later vice premier, Kang Shien. China succeeded in becoming self-sufficient in petroleum, which, the People’s Daily announced, had “blown the theory of oil scarcity in China sky high.” Another publication declared that “the so-called theory that China is poor in oil only serves the U.S. imperialist policy of aggression and plunder.” The United States was not the only antagonist. The victory in the oil campaign was also hailed as a fusillade against “the Soviet revisionist renegade clique.”6



RED GUARDS

In the mid-1960s, Mao recognized that he was being pushed aside because of the dismal failure of his disastrous economic policy, the Great Leap Forward, which had caused an estimated 30 million people to die from starvation. In 1966 he counterattacked and declared war on the Communist Party itself, charging that it had been captured by renegades with “bourgeois mentality.” To carry out his “Cultural Revolution,” Mao mobilized youthful zealots, the Red Guards, who waged a vicious battle against all the institutions of society, whether enterprises, government bureaus, universities, or the party itself. Prominent figures were humiliated, paraded around with donkey heads, beaten up, sent to do manual labor, or killed. Universities closed, and young people were dispatched to factories or the countryside to toil with the masses. The nation was in turmoil.7

But because of the oil industry’s importance to national security, Premier Zhou En Lai took it under his personal protection, using the army to insulate the industry and ensure that it kept working. This led to notable incongruities. “During the day, I organized production as usual,” recalled Zhou Qingzu, the chief economist at CNPC. “At night, I would sit in front of the students and workers and say I was wrong and apologize and write out my errors and apologies. I would listen very attentively to their criticism and write notes. During the day, I was a boss. At night, I was a nobody.”8

Eventually the Cultural Revolution went too far even for Mao, in terms of the chaos it had created, and he used the army to throttle back the Red Guards.



“EXPORT AS MUCH OIL AS WE CAN”

Henry Kissinger, President Nixon’s special assistant for national security, fell ill during a dinner in his honor in Pakistan in July 1971. Pakistan’s president, the dinner’s host, strenuously suggested that Kissinger, in order to escape the heat and thus speed his recovery, should recuperate at an estate up in the much cooler hills. This was very definitely a diplomatic illness. The supposed trip to the hills was a ruse, to provide cover for Kissinger’s real purpose. Meanwhile, Kissinger himself—now code-named “Principal Traveller”—was given a hat and sunglasses to disguise himself at the airport prior to taking off for his actual destination, although the disguise might have seemed a little excessive since it was 4 a.m.9

Only a week later did the sensational news break. From Pakistan, Kissinger had flown secretly over the Himalayas to Beijing, creating an opening in the Bamboo Curtain that had surrounded China since the communist victory in 1949. Half a year later, President Richard Nixon went through that opening. In the course of his historic visit to Beijing, Nixon supped with Mao, clinked glasses with Zhou En Lai, and dramatically reset the table of international relations.

For both sides it was a matter of realpolitik. The United States, looking for a way out of the stalemated Vietnam War, wanted to create a balance against the Soviet Union. For China, this was a means to strengthen its strategic position against the Soviet Union and reduce the risk of a “two-front war” with the Soviet Union and the United States. This was no mere theoretical matter, for Soviet and Chinese military forces had already clashed on the border along the Amur and Ussuri rivers.

The Chinese had a second set of reasons as well. The most virulent phase of the Cultural Revolution was over. Vice Premier Deng Xiaoping and others were trying to get the country working again. They knew that self-reliance could not work. China needed access to international technology and equipment to modernize the economy and restore economic growth. But a very big obstacle stood in the way: How to pay for such imports?

“Petroleum export–led growth”—that was Deng’s answer. “To import, we must export,” he said in 1975. “The first to my mind is oil.” The country must “export as much oil as we can. We may obtain in return many good things.”

By this time, Deng was already becoming the manager-in-chief of the new strategy of opening toward the world. A stalwart communist since his student and worker days in France after World War I, he had emerged as one of the top leaders after the communists came to power. He then became one of the foremost targets of the Cultural Revolution and of his leftist rivals. His family had suffered much; his son had been pushed out of an upper-floor window and left paralyzed. Deng himself had spent those years variously working in a tractor repair shop and living by himself, in solitary confinement. He had spent many hours pacing his courtyard, asking himself what had gone so wrong under Mao and how China’s economy could be restored. In some ways, he had always been a pragmatist. (Even while organizing underground communist activities in France, he had also started and run a successful Chinese restaurant.) The traumas of the Cultural Revolution—national and personal—only reinforced his pragmatism and realism. His fundamental mottos were about being practical—“crossing the river by touching the stones”—and the most famous maxim of all: that he didn’t care whether a cat was black or white so long as it caught mice.10



Following Mao’s death and after a brief struggle with the radical “Gang of Four,” Deng secured his position as paramount leader. He could now initiate the great transformation that would lead to China’s integration with the global economy—which the 11th Congress of the Communist Party, in 1978, would proclaim as the historic policy of “reform and opening.”

The oil industry was central to the opening. By that time, China—no longer “poor in oil”—was producing petroleum in excess of its own needs and could start exporting it. There was a waiting market nearby—Japan—which wanted to reduce its reliance on the Middle East and, at the same time, develop export markets in China for its own manufactures. Buying Chinese oil would help on both counts.

As the door began to open to the outside world, the Chinese oil industry discovered, to its shock, how wide was the technology gap that separated it from the international industry. But now, bolstered by its oil-export earnings, it could buy from abroad the drilling rigs, seismic capabilities, and other equipment that would lift its technical abilities.

While Mao’s death and Deng’s ascension were critical to the opening of China, those events did not put an end to the turmoil. Inflation, corruption, and inequality emboldened opponents of reform. So did the bloody 1989 confrontation with students in Tiananmen Square. In the aftermath, amid the indecision of the leadership, the efforts to continue market reform stagnated. Seeking to jump-start the faltering reforms, Deng, in January 1992, launched his last great campaign—the nanxun, or “southern journey.” This trip showcased the booming Special Economic Zone of Shenzhen, which was becoming a manufacturing center for exports, and sought, fundamentally, to erase the stigma from making money. His message was that “the only thing that mattered is developing the economy.” It was during this tour that Deng also made a stunning revelation—he had never actually read the bible of communism, Karl Marx’s Das Kapital. He never had the time, he said. He had been too busy.11



WORKSHOP OF THE WORLD

In the years after Deng’s “southern journey,” China consolidated its course of reform and moved toward integration with the global economy. The 1990s was a decade of a new, much more interconnected economy. On January 1, 1995, the World Trade Organization was established to bring down barriers and facilitate global trade and investment. World trade was growing much faster than the global economy itself. American and European companies were setting up supply chains that gathered components from different parts of the world, assembled them in still other parts, and then packed the finished goods into containers and shipped them across oceans to customers anywhere in the world. Although China did not formally join the WTO until 2001, it had by then already become the linchpin in this new system of global supply chains.

As factories went up all along the coastal region, the inscription “Made in China” became ubiquitous on all sorts of products shipped all over the world. China had now become what was said of Britain two centuries earlier—“the workshop of the world.” In due course, these new trade and investment linkages would have much greater impact on world energy than anyone might have imagined. For any workshop needs energy on which to run, and this new workshop of the world would run on fossil fuels.



THE END OF SELF-SUFFICIENCY

Already, however, a few years earlier, China had crossed a great divide in terms of energy. By 1993 petroleum production could no longer keep up with the rising domestic demand of the rapidly growing economy. As a result, China went from being an oil exporter to an oil importer. Though not at first noticed by the rest of the world, it was for China an immediate shock. “The government thought it was a disaster,” remembered one Chinese oil expert. “It was very negatively received. From an industry point of view, we felt very shamed. It was a loss of face. We couldn’t supply our own economy. But some scholars and experts told us, ‘You can’t be self-sufficient in everything. You import some things, and export others.’ ”12

This added greatly to the urgency to further modernize the structure of the oil industry—to move from the all-encompassing ministries of the petroleum and chemical industries, based on rigid central planning, to a system based on companies and rooted in the marketplace. The foundation for this shift had already been laid in the 1980s. The three state-owned companies had emerged from the ministries: the China National Petroleum Corporation, CNPC; Sinopec, the China Petrochemical Corporation; and CNOOC, the China National Offshore Oil Corporation. The next move, beginning in the late 1990s, was to dramatically restructure the three companies into more modern, technologically advanced companies—and more independent enterprises. “They would need to earn a living,” said Zhou Qingzu. It was at this point that they would go through IPOs, opening partial ownership to shareholders around the world. CNPC’s subsidiary was given a new name—PetroChina—while Sinopec and CNOOC used their existing names for their listed subsidiaries. There was also an enormous cultural change. “Now you’d have to be competitive,” said Zhou. “You never had to be competitive before.”13



THE “GO OUT” STRATEGY: USING TWO LEGS TO WALK

China has become a growing presence in the global oil and natural gas industry. This new role goes by the name of the “go out” strategy. It was enunciated as policy around 2000, though the policy’s roots extend back to the original reforms of Deng Xiaoping.

The first steps abroad were very small ones, beginning in Canada, then Thailand, Papua New Guinea, and Indonesia. In the mid-1990s, CNPC acquired a virtually abandoned oil field in Peru. By applying the kind of intense recovery techniques it had honed to coax more oil out of complex older oil fields in China, it took the field from 600 barrels a day to 7,000. But these projects were small and did not get much attention. It took time and experience for the confidence to build for significant international activities. “We knew that, from its beginning in the mid-nineteenth century, the oil industry was always an international industry,” said Zhou Jiping, the president of PetroChina. “If you wanted to become an international oil company in the real sense, you had to go out.” By the beginning of the new century, a policy consensus had formed around the idea of international expansion, along with confidence in the capabilities of the Chinese companies to implement it.14

In general, the “go out” phase meant the internationalization of Chinese firms—that they should become competitive international companies with access both to the raw materials required by the rapidly growing economy and to the markets into which to sell their manufactures. For energy companies more specifically, it meant that the partly state-owned, partly privatized oil companies should own, develop, control, or invest in foreign sources of oil and natural gas. For the oil industry, this was complemented by another slogan—“using two legs to walk”—one, to further development of the domestic industry; the other, for international expansion.

Today the impact of the “go out” strategy is evident worldwide. Chinese oil companies are active throughout Africa and Latin America (as are Chinese companies from other sectors). Closer to home they have acquired significant petroleum assets in neighboring Kazakhstan and have achieved some positions in Russia after repeated tries. They are developing natural gas in Turkmenistan. As latecomers to the international industry, the Chinese come equipped not only with oil field skills but also with a willingness, and the financial resources, to pay a premium to get into the game. Also, particularly in Africa, they make themselves partners of choice with very significant “value added.” That is, they bring government-funded development packages—helping to build railroads, harbors, and roads—something that is not in the tool kit of traditional Western companies. This has engendered controversy. Critics charge that China is colonizing Africa and using Chinese rather than local labor. The Chinese reply that they are doing much to create markets for African commodity exports, and that export earnings are better than foreign aid and do more to stimulate lasting economic growth. (Some of these packages have fallen apart.) Chinese banks, in coordination with the Chinese oil companies, have also made multibillion-dollar loans to a number of countries that will be paid back in the form of oil or gas over a number of years. (One such deal took fifteen years to work out.)15

The energy security strategy is also taking an obvious route—building pipelines to diversify supply, reduce dependence on sea-lanes, and strengthen connections with supplier countries. A new set of pipelines, built in record time, brings oil and gas from Turkmenistan and Kazakhstan to China. Russia’s $22 billion East Siberia–Pacific Ocean Pipeline will, in addition to supplying oil to the Pacific (Japan and Korea primarily), also deliver Russian oil to China—guaranteed by a $25 billion loan that China advanced to Russia. In September 2010 Chinese president Hu Jintao and Russian president Dmitry Medvedev jointly pushed the button to start the flow of oil over the Russian-Chinese border. The potential for a large trade in natural gas was also hailed. At the ceremony, Hu proclaimed a “new start” in Chinese-Russian relations. A relationship that was once based upon Marx and Lenin was now rooted in oil and possibly gas.16



“LIKE THROWING A MATCH”

But the greatest controversy over the “go out” strategy came not in Africa but in the United States. In 2005 Chevron and CNOOC—China National Offshore Oil Corporation—were locked in a battle royal to acquire the large U.S. independent company Unocal, which had significant oil and gas production in Thailand and Indonesia but also had some in the Gulf of Mexico. The competition between the two companies was very tough, with sharp arguments about the financial terms and the role of Chinese financial institutions, as well as the timing of the respective offers. For some in Beijing, a global takeover battle was not only unfamiliar but disconcerting. The price that CNOOC put on the table was greater than the entire cost of the huge Three Gorges Dam project, which had taken decades to build. After months of battle, Chevron emerged victorious with a $17.3 billion bid.

But in the course of the takeover battle, a fiery political controversy erupted in Washington that was out of all proportion to the issues at stake. After all, Unocal’s entire production in the United States amounted to just 1 percent of total U.S. output. Much of it was in the Gulf of Mexico, in joint ventures with other companies, and the only market for that output was the United States. Yet when the contest got to Washington, as one of the American participants said, it was “like throwing a match into a room filled with gasoline.” For it became the focus of a firestorm of anti-Chinese sentiment on Capitol Hill that was already supercharged by the contentious hot-button issues of trade, currencies, and jobs. The heated rhetoric showed the intensity, at least in some quarters, of suspicions of China’s motives and methods. One critic told a congressional committee that CNOOC’s bid fit “into a pattern” of “activity around the globe” that is “ominous in its implication.” Another charged that CNOOC’s bid was part of China’s strategy for “domination of energy markets and of the Western Pacific.” Whatever the specifics of the takeover battle, the takeaway for the Chinese at the end of the political fight was that the United States itself was less hospitable to the openness toward foreign investment that it preached to others, and that the Chinese companies should redouble their investment efforts—but elsewhere. “The world was shocked that a Chinese company could make this kind of bid,” said Fu Chengyu, at the time the CEO of CNOOC. “The West was saying that China is changing in terms of such things as building highways. But it was not paying attention to China itself and how China had changed.”

In the years that followed, the changes became much more evident. China’s president made highly visible state visits to a number of oil- and commodity-exporting countries in the Middle East and Africa, beginning with Saudi Arabia. And when China convened a summit of African presidents to discuss economic cooperation, 48 of the presidents made the trip. “China should buy from Africa and Africa should buy from China,” said Ghana’s president. “I’m talking about the win-win.”

The world moves on. In 2010, five years after the fiery battle over Unocal, Chevron and CNOOC announced that they were teaming up to explore for oil not in the Gulf of Mexico but in the waters off China. “We welcome the opportunity to partner with CNOOC,” said a senior Chevron executive.17



“INOCs”

In the decade-plus since the shaky days of the original IPOs, the Chinese companies have become highly visible players in the world oil market.

Their international roles have instigated a vigorous debate outside China as to what drives them. One agenda is established by the government (which remains the majority shareholder) and the party, both of which maintain oversight. They are to meet national objectives in terms of energy, economic development, and foreign policy. The CEOs of the major companies also hold vice ministerial government rank—and many also hold senior party rank.

At the same time, the companies are driven by strong commercial, competitive objectives that are similar to those of other international oil companies, and, increasingly, their commercial identities define them. They are indeed benchmarked against the world economy and other international oil companies by the investors in their listed subsidiaries, and they have to be responsive to their investors’ interests. In addition, they are subject to international regulation and international governance standards. And they manage large and complex businesses that, increasingly, are operating on a global scale.

As Zhou Jiping put it, “As a national oil company, we have to meet the responsibilities of guaranteeing oil and gas supply to the domestic market. As a public company listed in New York, Hong Kong, and Shanghai, we must be responsible to our shareholders and strive to maximize shareholder value. And, of course, we have a responsibility to the 1.6 million employees of our company.”

In short, Chinese oil companies are hybrids, somewhere between the familiar “international oil companies,” IOCs, and the state-owned national oil companies, NOCs. They have become a prime example of a new category called INOCs—the international national oil companies. “There has been a great change in people’s overall psychology and philosophy since the IPO,” said the CEO of one of the companies. “We used to focus on how much we produced. Now it’s the value of what we do.”

Walk into the headquarters of some of the companies in Beijing today, and what one sees is not exhorting slogans but the epitome of international benchmarking—flashing displays of the stock price in New York, Hong Kong, and Shanghai. Yet in the lobby of CNPC, one is also greeted by a very strong reminder of how the industry was built—a massive statue of Iron Man Wang.

What is the balance in these INOCs? The Chinese companies are sometimes portrayed mainly as “instruments” of the state. A new study from the International Energy Agency concludes otherwise—that “commercial incentive is the main driver” and that they operate with “a high degree of independence” from the government. As the IEA puts it, they are “majority-owned by the government” but “they are not government-run.” As they become increasingly internationalized, they operate more like other international companies.

For all concerned, the development of the Chinese companies has been an evolution. Fu Chengyu, now the chairman of Sinopec, summed up the changes this way: “Evolving so completely from full state-ownership to join the ranks of major international corporations is a huge transformation—one that, back when I started in the oil business in the oil fields of Daqing, we never thought could be possible. Back in those days, China’s largest source of foreign exchange was not manufacturing, but in fact sales of oil to Japan! Today everything around us has changed. But so have we.”18



PROPORTION

Chinese companies will likely become bigger, more prominent players; they will certainly compete; but they will also be sharing the stage with established American, European, Middle Eastern, Russian, Asian, and Latin American companies—and often in partnerships.

For all the talk about China “preempting” world supplies, its entire overseas production is less than that of just one of the supermajor companies. It is very hard to conceive of China ever being in a position to preempt world supplies. Moreover, while some of China’s overseas production is shipped to China, most of it is sold into world markets at the same prices as similar grades of petroleum. Destination is determined by the best price, local and international, taking into account transportation costs. And that is all the more true of oil from joint ventures, in which much of China’s international oil is produced.

There is a further critical consideration. Chinese investment and effort in bringing more barrels to the markets contribute to stability in the global market. For were those barrels not forthcoming, the growing demand from China (and elsewhere) would add more pressure and lead to higher prices. Additional investment means more supply and adds to energy security. The Chinese oil companies are committing more capital and resources to expanding Iraq’s oil output, and taking more risk, than the companies of any other nation.

Indeed, it would be quite surprising if a country in China’s position—rapidly rising demand, rapidly growing imports, a well-established domestic industry, huge holdings of dollars—did not venture out into the rest of the world to develop new resources. In fact, were the Chinese companies not doing so, they would likely be roundly criticized around the world for not investing.

Moreover, “go out” is not the sole strategy of the Chinese companies. About 75 percent of the companies’ output is within China. Altogether, China’s domestic oil production makes it the fifth-largest in the world—ahead of such large producers as Canada, Mexico, Venezuela, Kuwait, and Nigeria. Within the Chinese industry itself there is talk about the “second age of Chinese oil.” This means the application of new technologies and new approaches to the discovery and development of domestic petroleum resources, as well as a much greater focus on what are increasingly seen as abundant but undeveloped domestic resources of natural gas, including shale gas.

These are the new commercial realities—China as a growing consumer of oil, China as an increasingly important participant in the world oil industry. But there is also a security dimension, which arises from growing import dependence in a country for which “self-reliance” had been such a strong imperative for so many years.


10

CHINA IN THE FAST LANE

In the late 1990s, when energy security proposals were presented to the Chinese government, they were tabled. “They said there was no energy security issue,” said a senior adviser, “and that was partly right. It was a benign market.”

But that changed as oil consumption surged, increasing the reliance on imports, and prices started their upward trek. A country that had been self-sufficient in oil as a matter of policy found itself increasingly dependent upon the global market—something that was anathema in its earlier and very different stage of development. This dependence made energy security a central concern in Beijing. As one of the country’s top officials put it, “China’s energy security issue is oil supply security.”

By 2003 a new factor had further increased the anxiety about energy security—the war in Iraq. For Beijing, it was hard to believe that the promotion of democracy in the Middle East was what propelled the United States into Iraq in March 2003. If not that, it had to be something more concrete, more urgent, more critical, more threatening. In short, it had to be oil. And if the United States was worried enough about oil to launch a full-scale invasion, then, in the view of many Chinese, energy security was clearly much more important—and urgent.1

Part of the new insecurity arose from apprehension about the sea-lanes, the economic highways of world commerce that were increasingly important as the lifelines for Chinese oil imports—and indeed for Chinese trade in general. Half of the country’s GDP depends on sea-lanes. In November 2003, seven months after the invasion of Iraq, President Hu Jintao reportedly told a Communist Party conference that the country had to solve what became known as the Malacca Dilemma. This referred to China’s reliance on the Malacca Strait, the narrow waterway connecting the Indian Ocean and the South China Sea, through which passes more than 75 percent of China’s oil imports. “Certain powers have all along encroached on and tried to control navigation through the strait,” Hu is said to have declared. “Certain powers” was an obvious euphemism for the United States.2

The growing attention to risk was reinforced by what happened in 2004: the unanticipated jump in both Chinese and global demand for oil and the consequent rapidly rising prices. An energy problem had already become evident in China from late 2002. But initially it was a coal and electricity problem, not an oil problem. China depends on coal for 70 percent of its total energy and about 80 percent of its electricity. The economy was growing so fast that tight supplies of coal turned into outright shortages. At the same time, electric power plants and the transmission network could not keep up with the demand for power. The country simply ran out of electricity. As brownouts and blackouts hit most of the provinces, a sense of crisis gripped the country. Factories were working half days or even shutting down because of shortages of energy, while sales of diesel generators soared as desperate industrial enterprises resorted to making their own electricity. Power was so short in some parts of the country that traffic lights weren’t working, and children were back to doing their homework by candlelight. Hotels in Beijing were requested to keep room thermostats above 79 degrees Fahrenheit, and their staffs ordered to use the stairs rather than the elevators.3

Only one short-term alternative to coal was available for satisfying the accelerating energy demand—oil. That is why China’s oil demand in 2004 grew not by the anticipated 7 or 8 percent but by a much higher 16 percent, requiring a rapid rise in petroleum imports. The Chinese oil companies hurriedly stepped up their efforts both to increase domestic production and to access additional supplies internationally.

Around this time, the theses about peak oil and limitations on future supply were permeating discussions in Beijing, as elsewhere in the world. The overlay of a fear of imminent and permanent shortage, which was so common in this period, added to a pervasive sense of crisis about the adequacy and availability of future supplies and whether a new rivalry would emerge.



PETRO-RIVALRY?

But what would a “new energy security strategy” look like? That question became part of a continuing debate about the possibility of a petro-rivalry between the United States and China. Some strategists in Beijing worry about China’s depending on a world oil market that they assert is unreliable, rigged against them, and in which the United States has, in their view, excessive influence. Some of them even argue that the United States has a strategy to interdict sea-borne Chinese oil imports—cut off China’s overseas “oil lifeline”—in the event of a confrontation over what has been for decades the most critical issue between the two nations, the self-governing island of Taiwan and its relationship to mainland China. They criticize the presence of the U.S. Navy in the regional seas and U.S. support for Taiwan—even as economic links between Taiwan and the People’s Republic continue to grow. Some of the military leaders denounce the United States, in the words of one admiral, as a “hegemon.”

The reverse of such fears can be found among some strategists in the United States. There are those who argue that China, driven by a voracious appetite for resources and control, has a grand strategy to project its dominance over Asia while also seeking to preempt substantial world oil supplies. China is said to be pursuing this strategy with a single-minded mercantilism, backed up by growing military power. They point, for evidence, to double-digit increases in Chinese defense spending, a rapid naval buildup, China’s pursuit of naval and aviation technology, and its potential for developing a “blue water navy” that would project naval power far beyond China’s neighborhood. Moreover, China has established a network of strategic ports, bases, and listening posts along the Indian Ocean. These critics specifically cite the development of new missiles that seem aimed directly at U.S. sea power—specifically aircraft carriers—and at upsetting the security of sea-lanes that U.S. sea power protects—security from which China, as much as any nation, directly benefits.

All this could stir up the specter of a naval arms race reminiscent of the Anglo-German naval race that did so much to inflame the tensions that ignited the First World War. Despite an extensive and growing economic relationship in the years that led up to August 1914, Britain and Germany were driven apart by rivalry and the suspicions aroused by their naval race, by anxiety over control of sea-lanes and access to resources, by competition over who would have what place in the sun—and by growing nationalistic fervor. Echoes of the Anglo-German naval race can be heard in today’s arguments.

Controversy over the South China Sea has already created some tension between the United States and China. That sea’s 1.3 million square miles are bounded on the west by China, Vietnam (which calls the region the East Sea), and Malaysia, down to Singapore and the Strait of Malacca; and then, coming up on the east, by Indonesia, Brunei, the Philippines, and at the top, by Taiwan. Through its waters pass most of the trade between East Asia and the Middle East, Africa, and Europe—including most of the energy resources shipped to China, Japan, and South Korea. “It’s really a lifeline of our commerce, of our transport, for all of us, China, Japan, Korea, and Southeast Asia, and the countries beyond to the west,” said the secretary general of ASEAN, the association of ten Southeast Asian countries.4

In 2002 China and the ASEAN countries signed an agreement that seemed to settle rival claims. But later some Chinese military officials began to speak of China’s “undisputed sovereignty” over the South China Sea, control of which they elevated to what they called hexin liyi, a “core interest.” Others in the Chinese foreign policy community have subsequently described the assertion of “core interest” as “reckless” and “made with no official authorization.” If China were successfully to assert such an interest, it would control the critically important merchant shipping lanes and be in a position to deny freedom of passage to the U.S. Navy. Not surprisingly, the ASEAN countries, as well as the United States, have rejected China’s claims. Still, to underline those claims, a Chinese submarine went down to the deepest part of the sea, where its crew planted a Chinese flag.5

Energy resources are an increasingly important part of the argument. Substantial oil and gas resources are produced around the South China Sea, notably in Indonesia, Brunei, and Malaysia. Estimates of the undiscovered oil in the South China Sea range between 150 billion and 200 billion barrels, which is more than enough to stir competition, although far from proven. Although China and Vietnam have worked out some joint-production agreements, they are at odds over ownership of other exploration areas. Particularly contentious are the Spratly Islands, whose waters are thought to be rich in resources and are claimed in whole or in part by several countries. Meanwhile, in the East China Sea, Japan and China have had a long dispute, which recurrently flares up, over sovereignty and drilling rights.

It is exactly these kinds of tensions that can fester, blow up into incidents, and lead to much more serious and disruptive consequences. That explains the urgency for finding frameworks that can meet the interests of the various nations involved.



“RESPONSIBLE STAKEHOLDERS”

While these tensions persist, China’s direct anxiety over energy security appears to have eased. Hu Jintao offered his own answer to the Malacca Dilemma when he presented, at a G8 meeting in 2006, a definition of what he called global energy security in which importing countries like the United States and China are interdependent. Energy insecurity for China, he has said, also means energy insecurity for the United States—and vice versa. Thus collaboration is one of the main answers to the dilemmas of energy security.

Part of this shift is based on China’s growing realization that it can obtain the additional energy it needs by participating in the same global economy from which it has benefited so considerably. In simple words, China can buy the energy it needs. That was not so obvious a few years ago, but experience since has shown that it is eminently feasible. This applies not only to oil but also to natural gas, the imports of which are growing. “There’s no other solution but to rely on the marketplace,” said an energy strategist in Beijing. “What’s different about exporting to America and importing energy from elsewhere? China is part of world markets.”

Moreover, China has very large coal reserves. Adding in domestically produced oil and hydropower, China is more than 80 percent self-sufficient in terms of overall energy. A sign of greater confidence is the change in the discussion about making synthetic oil from coal. This was a very high priority when oil prices were spiking and some people were predicting permanent shortage, but now the Chinese talk about its development more as an insurance policy against disruption than as a large-scale substitution.6

An effort to reduce the tensions is evident within the larger framework of relations. It is built on the recognition of the new reality—China’s prominent place in the global economy and the world community. The administration of George W. Bush initially described China as a “strategic competitor,” with all the implications that went with that. But as the years passed, a more cooperative approach emerged, based upon a mutual understanding of the interdependence. “Rising power” and “peaceful rise” were the ways that senior Chinese officials came to describe their country’s new role and position. Some Chinese strategists have emphasized the need to manage and ameliorate the inevitable tensions that would arise between a latecomer and an established power. For its part, the United States proffered the concept of “responsible stakeholder,” an idea first proposed by Robert Zoellick, at the time deputy secretary of state and subsequently president of the World Bank. The argument was that China could play a larger constructive role in diverse international arenas, commensurate with its new stature. The Chinese came to interpret “responsible stakeholder” as meaning shared “international responsibilities” for the international system from which they are benefiting—and which they are helping to shape.

This new orientation has become embodied in a set of arrangements for addressing issues, defusing tensions, and fundamentally providing strategic reassurance. These include a “strategic and economic dialogue” between the two countries and an “energy and environment cooperation framework,” which was launched at the end of the Bush administration and continued by the Obama administration. China’s collaboration with the International Energy Agency and its participation in the International Energy Forum enable greater alignment and less tension on energy-security issues. On a global basis, the G7 and G8 clubs of the major industrial countries now share the stage with the G20, which expands the table to include the major developing countries, with China obviously in a very prominent position. The relevance of the G20 was made clear when it became an essential forum for coordination during the financial crisis of 2008–9.

All of this does not guarantee that competition over energy, and tensions about access and security, will not flare up and become more threatening. But it does mean that an established framework exists to handle such issues and to help prevent their escalation into something more serious. One Chinese decision maker summed up the evolution of thinking this way: “The government considers energy security very important, a first priority. But there is a change of understanding. Now we recognize that we have lots of options and choices to solve energy security issues.”7

This is all the more important as China’s oil consumption is destined to rise as it moves at record speed into the auto age.



THE FAST LANE

China is making the transition to a mass automobile culture as other countries have already done, but it is doing so at an extraordinary rate and on a scale never seen before. In the United States, oil accounts for about 40 percent of total energy consumption. In China, despite rapid demand growth, oil is only 20 percent of total energy use, and the largest part of that oil is used as fuel in industry or as diesel in trucks and farm equipment. But that is changing swiftly. As the Chinese automobile industry moves into the fast lane, the impact will be felt not only across the nation but globally.

In 1924 Henry Ford, already known worldwide for his Model T, received an unexpected letter. “I have . . . read of your remarkable work in America,” wrote China’s president Sun Yat-sen. “And I think you can do similar work in China on a much vaster and more significant scale.” He continued: “In China you have an opportunity to express your mind and ideals in the enduring form of a new industrial system.” The invitation was all the more gracious as Sun himself was highly partial to Buick, made by Ford’s great rival, General Motors. By the late 1920s, Ford Motor was already shipping cars to China and had opened a sales and service branch in that country. But Sun Yat-sen’s dream was not to be realized.

In the “new industrial system” that the triumphant Mao imposed after 1949, the automobile had virtually no place. Even as late as 1983, China produced fewer than 10,000 cars. By then, however, Mao was gone, and the creation of an automobile industry had been identified as necessary to the reforms that Deng Xiaoping was introducing. It was part of a modern society, one of the “pillars” of economic development, critical to technical advance and to creating jobs for those moving from farms into cities.

But how to do it? China was so far behind the United States and Japan in terms of technology and industrial capability, and had been so isolated, that there was no point in trying to start from scratch.

And so the answer turned out to be joint ventures. The first one, however, Beijing Jeep, never fulfilled the original hopes. Volkswagen scored the first successful joint ventures when it teamed up, beginning in the mid-1980s, with Shanghai Automotive Industry Corporation and China’s First Auto Works. Yet by 1990 China was still producing only 42,000 cars a year, and the roads still belonged to the great swarms of bicyclists. But General Motors, Toyota, and Hyundai were also establishing joint ventures, to be joined by Nissan and Honda, among others.

China’s accession to the WTO in 2001 really ignited the growth of the auto industry—fueled by the emergence of distinctly local companies with such names as Chery, Geely, Great Wall, Lifan, Chang’an, and Brilliance. As the Chinese sales grew, the other international automakers realized that they could not afford to be left out of the most dynamic automobile market in the world, and they too signed up for joint ventures.

Indeed, auto executives could now see a point on the horizon when China might actually overtake the United States as the world’s largest automobile market. It was inevitable, they said. It was just a matter of time. In 2004 General Motors predicted that it could happen as early as 2025. Some went further and said it could happen as early as 2020. Maybe even 2018. But, they would add, that would be a real stretch.

As things turned out, it happened much sooner—in 2009, amid the Great Recession. That year China, accelerating in the fast lane, not only overtook the United States but pulled into a clear lead. The massive and swift Chinese economic stimulus program targeted the automobile industry as one of the “core pillars of growth,” with tax cuts on new vehicles, cash subsidies, and price reductions on some vehicles. Car sales increased 46 percent over the previous year, while in that same year U.S. sales plummeted to the lowest level since 1982. Seen in perspective, the shift in relative positions was staggering. In 2000, 17.3 million new cars were sold in the United States, compared with 1.9 million in China. By 2010 only 11.5 million were sold in the United States, while China had reached 17 million. By 2020 sales in China could reach 30 million—and keep going.

[Chart: AUTO NATIONS: U.S. AND CHINA. Source: IHS Global Insight]


American automakers may be struggling at home but not in the booming Chinese market. General Motors now sells more automobiles in China than in the United States. The name Buick may no longer exude class to American or European ears, but the black Buick Xin Shi Ji (“New Century”) luxury sedan has had a powerful allure for Chinese buyers. Not only had Buick been Sun Yat-sen’s favorite car, but it was also much favored by Zhou Enlai. Indeed, Buick was so dominant a brand that by the early 1930s, one out of every six cars on China’s streets was an imported Buick. When GM first started manufacturing cars in the country, the Chinese insisted that Buick be the brand name, and for several years Buick led as a luxury car. Audi, Mercedes, and BMW may have since overtaken it in the luxury segment, but Buick remains a stalwart in the market.8



GOING OUT—ON WHEELS

Some of the Chinese companies are already producing inexpensive automobiles that are being sold in increasing numbers into developing countries. Chinese companies, like Indian manufacturers, also have their eye on a new, potentially very big market—cars priced from $2,500 to $7,500 and aimed at the hundreds of millions of people climbing up the rungs of the income ladder.

But the specter that haunts Detroit and Tokyo and Stuttgart and the other auto cities is whether—and when—China’s auto companies (supported by local components suppliers) will reach a level of sophistication at which they can directly compete in the United States and Europe against the likes of GM, Ford, Toyota, and Daimler. Price will likely not be enough. Assuring quality and safety will also be essential. Fuel efficiency will be a criterion. They will also have to build dealer networks.

One Chinese company that has partly solved that problem is Geely, which got started in 1986 making components for refrigerators and only produced its first car in 1998. Within a decade it was one of the top domestic Chinese manufacturers. In 2010 Geely purchased Volvo from cash-strapped Ford, giving it an instant global sales and dealer network. It is not clear whether that means Geelys will eventually go into American and European showrooms. But by producing Volvos in China, Geely would have a potentially upmarket brand with which to challenge BMW and Mercedes at home.



The rapid expansion in China’s auto industry is adding many jobs and stimulating domestic consumption—two steps that China’s trading partners have, for years, been calling for. At the same time, this is causing worry among China’s leadership about adding to future oil imports as well as about the quality of life. China’s major cities are already clogged with traffic for which they were not built, and the delays and congestion—and growing pollution—embody the costs of such success. Some predict that if Beijing continues to add cars at its current rate of 2,000 vehicles a day, average speeds in the city could drop to nine miles an hour.9



THE PRICE OF SUCCESS

The abstract GDP and energy consumption numbers tell an extraordinary story. Never has the world seen so many people moving so quickly out of poverty into a world of economic growth and expanding opportunities. The scourges of hunger and malnourishment are receding rapidly. But there is an environmental price. Water is a great problem, both because of potential shortages and because of pollution from untreated waste. But it is the air that carries the burden of the rapidly growing energy consumption. Individual Chinese feel the pollution in their lungs and in their health.

[Chart: CHINA’S RISE: GDP AND TOTAL ENERGY DEMAND. Source: IHS CERA, IHS Global Insight, International Energy Agency, China National Bureau of Statistics]


The major source of air pollution is coal, whether burned in individual homes for cooking and heating or used to generate electricity or burned in factories. Electricity demand is growing at about 10 percent a year. The rapidly growing automobile fleet is adding to the pollution in major cities. Regulations are seeking to push new cars to European levels of pollution control, but with mixed results.

Meanwhile, in recent years China has become less energy efficient, reversing a long trend. Between 1980 and 2000, China’s economy quadrupled, and its energy use only doubled. Such a record in energy efficiency was a considerable achievement. With the new century, however, the relationship suddenly reversed. Energy consumption started growing much more rapidly than the economy. From 2001 onward, a huge wave of investment stimulated enormous expansion in industry, particularly heavy industry. Many of the factories—old and new—were quite inefficient in how they used energy. As China became the workshop of the world, its energy-intensive heavy industries were operating on double time to supply the world market. China, for instance, became the largest producer of steel—almost half of the world’s entire output—and the world’s biggest steel exporter. Thus it would be correct, at least in part, to say that as Chinese production has supplanted energy-intensive output in the United States and Europe, some share of energy consumption that used to take place in the United States and Europe has in effect migrated to China. Or to put it more sharply, the United States and Europe have outsourced part of their energy consumption to China. As a result of the rapid rise in energy use, Beijing has put conservation—energy efficiency—at the very top of its priorities.10
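The reversal is easiest to see as energy intensity, the ratio of energy use to GDP. The following is an illustrative calculation, not from the original text, using only the figures cited above:

\[
I = \frac{E}{\mathrm{GDP}}, \qquad
\frac{I_{2000}}{I_{1980}} = \frac{E_{2000}/E_{1980}}{\mathrm{GDP}_{2000}/\mathrm{GDP}_{1980}} = \frac{2}{4} = \frac{1}{2}
\]

That is, energy use per unit of output fell by half between 1980 and 2000. After 2000, energy consumption grew faster than GDP, and so the same ratio began to climb.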

As in other countries, climate change and emissions are becoming an increasingly important factor in reshaping China’s energy policies. But climate change policy is also a mechanism to tackle other more immediate and, from the Chinese point of view, much more urgent problems—environmental degradation, rising energy demand, and energy security. To reduce carbon is also to reduce air pollution and contain energy use, and thus modulate imports of energy.



POWER SURGE

In the second decade of the twenty-first century, one of China’s great challenges is to ensure that it has the electricity its rapidly growing economy needs and at the same time protect the economy against the environmental consequences of fast economic growth. For a number of years, China was adding on an annual basis the equivalent of the entire installed capacity of a France or a Britain. This averaged out to another new, full-sized coal-fired plant going into service every week or two. The tempo has slowed down somewhat, but enormous capacity is still being added on an annual basis.

It is hard to comprehend the scale and pace of growth. A dozen years ago, China’s generating capacity was not much more than a third that of the United States. Today it exceeds that of the United States. Between 2005 and 2010, China’s total electricity capacity doubled. It is as though the country built in just half a decade a new electrical system of identical size to the system in place in 2005! About 22 percent of new capacity added in 2009 was hydropower, and about 11 percent wind. Natural gas accounted for just 2 percent. Still, the bulk of the new capacity—65 percent—was coal (down from 77 percent in 2005). But this also means that new, highly efficient, supercritical and ultra-supercritical coal plants, with more pollution controls, are being brought on line, while older, more-polluting, less-efficient coal plants are being retired early.

Coal will continue to be the mainstay of the electric power industry. As a result of growing demand for coal, China is no longer self-sufficient in that resource either. Once a significant coal exporter, China is the world’s second largest importer of coal.

But greater diversification among fuel sources will still be sought. A substantial part of the country’s target for non-fossil-fuel energy will be met by large hydropower plants. The Three Gorges Dam, which began producing electricity in 2003, has an installed power-generation capacity equivalent to about twenty nuclear plants. About 80 nuclear power plants are either under construction or in planning.

State Grid, the largest utility in the world, is spending about $50 billion a year to build what some consider the most technologically advanced grid system in the world. This is another way to promote efficiency. China needs what State Grid chairman Liu Zhenya calls a “strong and smart grid” to transport power thousands of miles from the west and the north across the country to the load centers on the east coast and in the center of the country. This would also reduce the heavy burden of coal transport by truck or rail. The huge wind potential of the sparsely populated Northwest is seen as particularly desirable. It is not only clean energy but also an accessible domestic source that can be harnessed to meet China’s future needs. But it is only accessible with a vast expansion of long-distance transmission.11

In its 12th Five Year Plan, adopted in 2011, China put further emphasis on what it called its emerging-energy policy—to disproportionately push for alternatives to coal and oil, which means renewables (including hydropower), nuclear, natural gas, electric vehicles—and efficiency.



ENERGY AND FOREIGN POLICY

When it comes to oil, there are risks of a clash of interests between China and other countries, notably with the countries of Southeast Asia and Japan. How real these risks become will depend upon how the nations involved define and adjust their maritime positions.

In terms of relations with the United States, the real risks would come not from competition in the marketplace but from oil and gas development becoming embroiled in geopolitical concerns, foreign policy, and human rights issues. One of those issues was Sudan, where a Chinese-led consortium produces substantial amounts of oil. Venezuela could become an issue, as Hugo Chávez is deliberately trying to play a “China card”—bringing in Chinese investment and promoting China as an alternative market in his campaign against the United States. But that does not seem all that strong a hand.

But currently there is only one country where the risk of energy and foreign policy interests colliding is high. That country is Iran, in light of its nuclear program and pursuit of nuclear weapons. As a result, Iran presents the most complex, contentious, and potentially most difficult issue. Western and Japanese oil and natural gas companies have withdrawn or are in the process of withdrawing from Iran owing to its standoff with the United Nations over nuclear weapons and the growing body of sanctions. This creates a vacuum, and thus an opportunity for China to secure a significant position for its “go out” strategy in one of the major Middle East oil and gas producers. Chinese companies have negotiated, at least on paper, tens of billions of dollars of contracts for investment in the Iranian oil and gas industry that would provide access to substantial oil and gas resources, but they are not moving fast. At the same time, China has a larger interest in the stability of the entire Gulf region, on which it depends for a significant amount of its imports. Chinese companies have prominent roles in Iraq.

China has generally gone along with U.N. sanctions but has opposed extending them to the energy sector. As tensions mount, and votes come up in the U.N. Security Council, China’s economic links with Iran, and its willingness or unwillingness to restrict its own dealings with Iran, could become a critical focal point in its relations with the United States and Europe. That could engender, if not managed carefully, much wider tensions, affecting the structure of overall collaboration in the world community. In the words of the International Energy Agency, “what will happen to the largest investment” to which the Chinese companies “have committed remains unclear.”12



THE OVERLAP OF INTERESTS

So much has happened since the discussion that night at the end of the 1990s, in the chilly courtyard of the China Club restaurant in Beijing, about China’s need to benchmark itself against the global oil industry. Then China was only a minor part of a global industry. Today it is the single most dynamic, rapidly changing element in the global oil market. Yet the fast growth of Chinese energy consumption and surging oil imports brings uncertainty, both for China and for the other major importers. The potential for conflict gets most of the attention.

Yet there are also common interests between China and other oil consumers, particularly the United States. These two countries are bound together—much more connected perhaps than many recognize—in the global networks of trade and finance that fuel economic growth. More specifically, they have shared interests as the world’s two largest petroleum consumers. The United States and China each import about half of their oil requirements. In the case of China, that share is likely to increase. Altogether, between them, they account for 35 percent of world petroleum consumption. Both benefit from stable markets, open to trade and investment, and improved energy security. But Chinese confidence in the reliability of the global market, and in the institutions that maintain its security, needs to be strengthened. In turn, greater transparency about energy use and inventories in China would build confidence and create greater clarity for other importers. Both countries share common interests in encouraging greater energy efficiency, promoting innovation in renewables and alternative energy as well as conventional energy, and in managing carbon to reduce the threat of climate change. They have defined a common clean-energy agenda. Moreover, as holders of the world’s largest and second-largest coal reserves, they depend upon coal for substantial parts of their electricity generation, and thus share interests in finding a pathway to commercial clean coal.

When all this is added up, there is much room for cooperation. Such collaboration would improve the energy and economic positions of both countries. And that, in turn, would contribute to the security and well-being of both countries as well as that of the global community.


PART TWO

Securing the Supply


11

IS THE WORLD RUNNING OUT OF OIL?

Since the beginning of the twenty-first century, a fear has come to pervade the prospects for oil and also feeds anxieties about overall global stability. This fear, that the world is running out of oil, comes with a name: peak oil. It argues that the world is near or at the point of maximum output, and that an inexorable decline has already begun, or is soon to set in. The consequences, it is said, will be grim: “An unprecedented crisis is just over the horizon,” writes one advocate of the peak oil theory. “There will be chaos in the oil industry, in governments and in national economies.” Another warns of consequences including “war, starvation, economic recession, possibly even the extinction of homo sapiens.” The date of the peak has tended to move forward. It was supposed to arrive by Thanksgiving 2005. Then the “unbridgeable supply demand gap” was expected to open up “after 2007.” Then it would arrive in 2011. Now some say “there is a significant risk of a peak before 2020.”1

The peak oil theory embodies an “end of technology/end of opportunity” perspective, that there will be no more significant innovation in oil production, nor significant new resources that can be developed.

The peak may be the best-known image of future supply. But there is another, more appropriate, way to visualize the course of supply: as a plateau. The world has decades of further production growth before flattening out into a plateau—perhaps sometime around midcentury—at which time a more gradual decline will begin.



ABOVEGROUND RISKS

To be sure, there is no shortage of risks in the years ahead. Developing the resources to meet the requirements of a growing world is a very big and expensive challenge. The International Energy Agency estimates that new development will require as much as $8 trillion over the next quarter century. Projects will grow larger and more complex, and the geological challenges are formidable.2

But many of the most decisive risks will be what are called “above ground.” The list is long, and they are economic, political, and military: What policies do governments make, what terms do they require, how do they implement their choices, and what is the quality and timeliness of decision making? Do countries provide companies with access to develop resources and do companies gain a license to operate? What is happening to costs in the oil field? What is the relationship between state-owned national oil companies and the traditional international oil companies, and between importing and exporting countries? How stable is a country, and how big are threats from civil war, corruption, and crime? What are the relations between central governments and regions and provinces? What are the threats of war and turmoil in different parts of the world? How vulnerable is the supply system to terrorism?

All of these are significant and sober questions. How they play out—and interact—will do much to determine future levels of production. These, however, are not issues of physical resources but of what happens above ground.

Moreover, decision making on the basis of a peak oil view can create risks of its own. Ali Larijani, the speaker of Iran’s parliament, declared that Iran needs its nuclear program because “fossil fuels are coming to an end. We know the expiration date of our reserves.” Such an expectation is surprising coming from a country with the world’s second-largest conventional natural gas reserves and among the world’s largest oil reserves.3

This peak oil theory may seem new. In fact, it has been around for a long time. This is not the first time that the world has run out of oil. It is the fifth. And this time too, as with the previous episodes, the peak presumes limited technological innovation and that economics does not really matter.



RUNNING OUT AGAIN—AND AGAIN

The modern oil industry was born in 1859 when “Colonel” Edwin Drake hit oil near the small timber town of Titusville in northwest Pennsylvania. It grew up in the hills and ravines surrounding Titusville in what has become known as the Oil Region. Other production centers also emerged in the late nineteenth century—in the Russian Empire, around Baku, on the Caspian Sea and in the Caucasus; in the Dutch East Indies; and in Galicia, in the Austro-Hungarian Empire. But Pennsylvania was the Saudi Arabia of the day—and then some—supplying Europe and Asia, as well as North America. The primary market for oil in its first 40 years was illumination, to provide lighting, replacing whale oil and other fluids used in oil lamps. Petroleum quickly became a global business. John D. Rockefeller became the richest man in the world not because of transportation but because of illumination.

Yet oil flowing up from the earth’s interior was mysterious. Wells might send oil shooting up into the sky and then run dry for reasons no one knew. People began to fear that the oil would run out. The State Geologist of Pennsylvania warned in 1885 that “the amazing exhibition of oil” was only a “temporary and vanishing phenomenon—one which young men will live to see come to its natural end.” That same year, John Archbold, Rockefeller’s partner in Standard Oil, was told that the decline in American production was almost inevitable. Alarmed, he sold some of his Standard Oil shares at a discount. Later, hearing that there might be oil in Oklahoma, he replied, “Why, I’ll drink every gallon produced west of the Mississippi.” Yet not long after, new fields were discovered—in Ohio, Kansas, and then the huge fields of Oklahoma and Texas.4

Those new supplies appeared just in time, for an entirely new source of demand—the automobile—was rapidly replacing the traditional illumination market, which in any event was being crushed by electricity. The arrival of the motor car turned oil from an illuminant into the fuel of mobility.

In 1914 the European nations went to war thinking it would be a short conflict. But World War I turned into the long, arduous, and bloody battle of trench warfare. It also became a mechanized war. The new innovations from the late nineteenth and early twentieth centuries—cars, trucks, and planes—were, more rapidly than anyone had anticipated, pressed into large-scale military service. One of the most important innovations first appeared on the battlefield in 1916. It was initially code-named the “cistern” but was soon better known as the “tank.” As oil went to Europe to support the mobility of Allied forces, a gasoline famine gripped the United States. In fact, 1918 saw the highest gasoline prices, in inflation-adjusted terms, ever recorded in the United States. In order to help relieve the shortage, a national appeal went out for “Gasolineless Sundays,” on which people would abstain from driving. In response, President Wilson ruefully announced, “I suppose I must walk to church.”

By the time the war ended, no one could doubt oil’s strategic importance. Lord Curzon, soon to become Britain’s foreign secretary, summed it up: “The Allied cause had floated to victory upon a wave of oil.” But for the second time, the fear took hold that the world was running out of oil—partly driven by the surging demand growth from the internal combustion engine. Between 1914 and 1920, the number of registered motor vehicles in the United States grew fivefold. “Within the next two to five years,” declared the director of the United States Bureau of Mines, “the oil fields of this country will reach their maximum production, and from that time on we will face an ever-increasing decline.” President Wilson lamented, “There seemed to be no method by which we could assure ourselves of the necessary supply at home and abroad.”5

Securing new supplies became a strategic objective. That is one of the major reasons that, after World War I, the three easternmost oil-prospective provinces of the now-defunct Ottoman Turkish Empire—one Kurdish, one Sunni Arab, and one Shia Arab—were cobbled together to create the new state of Iraq.

The permanent shortage did not last very long. New areas opened up and new technologies emerged, the most noteworthy being seismic technology. Dynamite explosions set off sonic waves, enabling explorers to identify prospective underground formations and map geological features that might have trapped oil and gas. Major new discoveries were made in the United States and other countries. By the end of the 1920s, instead of permanent shortage, the market was beginning to swim in oil. The discovery of the East Texas oil field in 1931 turned the surplus into an enormous glut: oil plunged temporarily to as little as ten cents a barrel; during the Great Depression some gasoline stations gave away whole chickens as premiums to lure in customers.

The outbreak of World War II turned that glut into an enormous and immensely valuable strategic reserve. Out of seven billion barrels used by the Allies, six billion came from the United States. Oil proved to be of key importance in so many different aspects of the struggle. Japan’s fear of lack of access to oil—which, in the words of the chief of its Naval General Staff, would turn its battleships into “nothing more than scarecrows”—was one of the critical factors in Japan’s decision to go to war. Hitler made his fateful decision to invade the Soviet Union not only because he hated the Slavs and the communists, but also so that he could get his hands on the oil resources of the Caucasus. The German U-boat campaign twice came close to cutting the oil line from North America to Europe. The Allies, in turn, were determined to disrupt the oil supplies of both Germany and Japan. Inadequate supplies of fuel put the brakes on both General Erwin Rommel’s campaign in North Africa (“Shortage of petrol,” he wrote his wife; “It’s enough to make one weep”) and General George Patton’s sweep across France after the D Day landing.6

World War II ended, like World War I, with a profound recognition of the strategic significance of oil—and, for the third time, widespread fear about running out of oil. Those fears were heightened by the fact that, immediately after the war, the United States crossed a great strategic divide. No longer self-sufficient in petroleum, it became a net importer. But for a number of years, quotas limited imports to about 10 percent of total consumption.

Once again, the specter of global shortage receded, as the opening up of the vast fields of the Middle East and the development of new technologies led to oversupply and falling prices. This downward trend culminated in cuts in the world oil price in 1959 and 1960 by the major oil companies that brought five oil-exporting countries together in Baghdad in 1960 to found the Organization of Petroleum Exporting Countries—OPEC—in order to defend their revenues. Oil remained cheap, convenient, and abundant, and it became the fuel for the postwar economic miracles in France, Germany, Italy, and Japan.

But by the beginning of the 1970s, surging petroleum consumption, driven by a booming world economy, was running up against the limits of available production capacity. At the same time, nationalism was rising among exporting countries, and tensions were mounting in the Middle East. The specter of resource shortage was in the air, prominently promoted by the Club of Rome study The Limits to Growth on “the predicament of mankind.” To wide acclaim, it warned that current trends not only would mean rapid resource depletion but also portended the unsustainability of industrial civilization.7

In October 1973 Arab countries launched their surprise attack on Israel, initiating the October War. In response to U.S. resupply of armaments to a beleaguered Israel, Arab exporters embargoed oil shipments. The oil market went into a hyperpanic, and within months petroleum prices quadrupled. They doubled again between 1978 and 1981 when the Iranian Revolution toppled the pro-Western shah and disrupted oil flows. All this seemed to be proof of the Club of Rome thesis of looming shortages. One of the most prominent scientists of the day, a former chairman of the Atomic Energy Commission, warned: “We are living in the twilight of the petroleum age.” The CEO of a major oil company put it differently. The world, he said, had reached the tip of “the oil mountain,” the high point of supply, and was about to fall down the other side. This was the fourth time the world was said to be running out of oil.8

The fear of permanent shortage ignited a frantic search for new supplies and the double-time development of new resources. Major new provinces were discovered and brought on stream from Alaska’s North Slope and from the North Sea. At the same time, government policies in the industrial countries promoted greater fuel efficiency in automobiles and encouraged electric utilities to switch away from oil to increased use of coal and nuclear power.

The impact was enormous—and surprisingly swift. Within half a decade, what was supposed to be the permanent shortage turned into a huge glut. In 1986 the price of oil collapsed. Instead of the predicted $100 a barrel, it fell as low as $10 a barrel. Prices recovered in the late 1980s, spiked with the Gulf crisis in 1990, and then seemed to stabilize again. But, in the late 1990s, the Asian financial crisis precipitated yet another price collapse.



THE FIFTH TIME

By the beginning of the twenty-first century, oil prices were once again rebounding. It was around that time that fear about running out of oil began to gain prominence again, for the fifth time. But it was no longer “the oil mountain.” It was now something loftier—“the peak.” Accelerated growth of oil consumption in China and other emerging economies—and the sheer scale of prospective demand—understandably reinforced the anxiety about the adequacy of future supplies. Peak oil also became entwined with the rising concerns about climate change, and the specter of impending shortage provided further impetus to move away from carbon-based fuels.

The peak theory, in its present formulation, is pretty straightforward. It argues that world oil output is currently at or near the highest level it will ever reach, that about half the world’s resources have been produced, and that the point of imminent decline is nearing. “It’s quite a simple theory and one that any beer drinker understands,” one of the leaders of the current movement put it. “The glass starts full and ends empty and the faster you drink it the quicker it’s gone.” (Of course, that assumes one knows how big the glass is.) The theory owes its inspiration and structure, and indeed its articulation, to a geologist who, though long since passed from the scene, continues to shape the debate, M. King Hubbert. Indeed, his name is inextricably linked to that perspective—immortalized in “Hubbert’s Peak.”9
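For readers who want to see the mechanics, Hubbert-style analyses are conventionally formalized with a logistic curve. The notation below is an illustrative sketch, not a quotation from Hubbert’s own papers:

\[
Q(t) = \frac{Q_{\infty}}{1 + e^{-k\,(t - t_{m})}}, \qquad
P(t) = \frac{dQ}{dt} = k\,Q(t)\!\left(1 - \frac{Q(t)}{Q_{\infty}}\right)
\]

Here \(Q(t)\) is cumulative production, \(Q_{\infty}\) the ultimate recoverable resource, and \(t_{m}\) the peak year. The production rate \(P(t)\) traces a symmetric bell curve whose maximum falls at \(t = t_{m}\), exactly when \(Q = Q_{\infty}/2\). Hence the claim that the peak arrives when half the oil is gone, and hence, too, the model’s acute sensitivity to the assumed size of \(Q_{\infty}\): the size of the beer glass.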



M. KING HUBBERT

Marion King Hubbert was one of the eminent earth scientists of his time and one of the most controversial. Born in Texas, he did all his university education, including his Ph.D., at the University of Chicago, where he folded physics and mathematics into geology. In the 1930s, while teaching at Columbia University in New York City, he became active in a movement called Technocracy. Holding politicians and economists responsible for the debacle of the Great Depression, Technocracy promoted the idea that democracy was a sham and that scientists and engineers should take over the reins of government and impose rationality on the economy. The head of Technocracy was called the Great Engineer. Members wore uniforms and saluted when the Great Engineer walked into the room. Hubbert served as its educational director for 15 years and wrote the manual by which it operated. “I had a box seat at the Depression,” he later said. “We had manpower and raw materials. Yet we shut the country down.” Technocracy envisioned a no-growth society and the elimination of the price system, to be replaced by the wise administration of the Technocrats. Hubbert wanted to promote a social structure that was based on “physical relations, thermodynamics” rather than a monetary system. He believed that a “pecuniary” system, misinformed by the “hieroglyphics” of economists, was the road to ruin.

Although cantankerous and combative, Hubbert was, as a teacher, demanding and compelling. “I found him to be arrogant, egotistical, dogmatic, and intolerant of work he perceived to be incorrect,” recalled one admiring former student. “But above all, I judged him to be a great scientist dedicated to solving problems based on simple physical and mathematical principles. He told me that he had a limited lifetime in which to train and pass on what he knew, and that he couldn’t waste his time with people that couldn’t comprehend.”

Hubbert did not have an easy relationship with his Columbia colleagues. When Columbia failed to give him tenure, he packed up and went to work as a geologist for Shell Oil.10

Collegiality was not one of his virtues. Coworkers found him abrasive, overly confident in his own opinions, dismissive of those who disagreed with him, and barely able to disguise his contempt for those with different points of view.

“A gifted scientist, but with deep-seated insecurities,” in the words of one scholar, Hubbert was so overbearing that it was almost painful for others to work with him. At Shell, the young geologists assigned to him never managed to last more than a year. Finally, the first female geologist to graduate from Rice University, Martha Lou Broussard, was sent to him. “Overpopulation” was one of Hubbert’s favorite themes. During her job interview, he asked Broussard if she intended to have children. Then, in order to convince her not to, he told her to go to the blackboard to calculate at exactly what point the world would reach one person per square meter.

From Shell he moved to the U.S. Geological Survey, where he was in a permanent battle with some of his colleagues. “He was the most difficult person I ever worked with,” said Peter Rose, his boss at the USGS.

Yet Hubbert also became recognized as one of the leading figures in the field and made a variety of major contributions, including a seminal paper in 1957, “The Mechanics of Hydraulic Fracturing.” One of his fundamental objectives was to move geology from what he called its “natural-history phase” to a “physical science phase,” firmly based in physics, chemistry, and, in particular, rigorous mathematics. “King Hubbert, mathematician that he is,” said the chief geophysicist of one of the oil companies, “based his look ahead on facts, logically and analytically analyzed.” Four decades after turning him down for tenure, Columbia implicitly apologized by awarding him the Vetlesen Prize, one of the highest honors in American geology.11



AT THE PEAK

In the late 1940s, Hubbert’s interest was piqued when he heard another geologist say that 500 years of oil supply remained in the ground. This couldn’t possibly be true, he thought. He started doing his own analysis. In 1956 at a meeting in San Antonio, he unveiled the theory that would forever be linked to his name. He declared that U.S. oil production was likely to hit its peak somewhere between 1965 and 1970. This was what became Hubbert’s Peak.

His prediction was greeted with much controversy. “I wasn’t sure they weren’t going to hang me from the nearest light post,” he said years later. But when U.S. production did hit its peak in 1970, followed by the shock of the 1973 embargo, Hubbert appeared more than vindicated. He was a prophet. He became famous.12

The peaking of U.S. output pointed to a major geopolitical rearrangement. The United States could no longer largely go it alone. All through the 1960s, even with imports, domestic production had supplied 90 percent of demand. No longer. To meet its own growing needs, the United States went from being a minor importer to a major importer, deeply enmeshed in the world oil market. The rapid growth of U.S. oil imports, in turn, was one of the key factors that led to the very tight oil market that set the stage for the 1973 crisis.

Hubbert was very pessimistic on the prospects for future supply. In tones reminiscent of the State Geologist of Pennsylvania in 1885, he warned that the era of oil would be only a brief blip in mankind’s history. In 1978 he predicted that children born in 1965 would see all the world’s oil used up in their lifetimes. Humanity, he said, was about to embark upon “a period of non-growth.”13



WHY SUPPLIES CONTINUE TO GROW

Hubbert used a statistical approach to project the kind of decline curve that one might encounter in some—but not all—oil fields, and then assumed that the United States was one giant oil field. Hubbert’s followers have adopted that approach for global supplies. Hubbert’s original projection for U.S. production was bold and, at least superficially, accurate. His modern-day adherents insist that U.S. output has “continued to follow Hubbert curves with only minor deviations.” But it all comes down to how one defines “minor.” Hubbert got the date right, but his projection on supply was far off. He greatly underestimated the amount of oil that would be found—and produced—in the United States.

By 2010, U.S. production was roughly four times higher than Hubbert had estimated: 5.9 million barrels per day, versus Hubbert’s 1971 estimate of no more than 1.5 million barrels per day, which turned out to be only a quarter of the actual number.14

Critics point out that Hubbert left two key elements out of his analysis—technological progress and price. “Hubbert was imaginative and innovative in his use of mathematics in his projection,” recalled Peter Rose. “But there was no concept of technological change, economics, or how new resource plays evolve. It was a very static view of the world.” Hubbert also assumed that there was an accurate estimate of ultimately recoverable resources, when in fact it is a constantly moving target.

Although he seemed a stubborn iconoclast, even a contrarian, Hubbert was actually a man of his times. He made his key projections during the 1950s, an era of relatively low, and flat, prices and a period of technological stagnation. He claimed that he had fully assumed innovation, including innovation that had not yet occurred. Yet the impact of technological change was missing from his projections. The mid-1960s marked the beginning of a new era in technological advance and capabilities.15

Hubbert also insisted that price did not matter. Economics—the forces of supply and demand—were, Hubbert maintained, irrelevant to the finite physical cache of oil that can be extracted from the earth. Indeed, in the same spirit, those today who question the imminence of decline are often dismissed by peak adherents as “economists”—even if they are in fact geologists. Yet it is not clear why price—with all the messages it sends to people about allocating resources and making choices and developing new technologies—would apply in so many other realms but not in terms of oil. Activity goes up when prices go up; activity goes down when prices go down. Higher prices stimulate innovation and encourage people to figure out ingenious new ways to increase supply. The often-cited “proved reserves” are not just a physical concept, accounting for a fixed amount in the “storehouse.” They are also an economic concept—how much can be recovered at prevailing prices—and they are booked only when investment is made. And they are a technological concept, for advances in technology will take resources that were not physically accessible or economically viable and turn them into recoverable reserves.

The general history of the oil and gas industry, as with virtually all industries, is one of technological advance. New technologies are developed to identify new resources and to produce more from existing fields. For instance, in a typical oil field, only about 35 to 40 percent of the oil in place is produced using traditional methods. Much technology is being developed and applied to raising that recovery rate. That includes the introduction of the digital oil field of the future. Sensors are deployed in all parts of the field, including in the wells. This dramatically improves the clarity and comprehensiveness of data and the communication between the field and a company’s technology centers, and allows operators to utilize more powerful computing resources to process incoming data. If widely adopted, the “digital oil field” could also make it possible to recover, worldwide, an enormous amount of additional oil—by one estimate, an extra 125 billion barrels of oil—almost equivalent to Iraq’s reserves.16
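A simple worked example shows why recovery rates carry so much weight; the field size here is hypothetical, chosen only for round numbers:

\[
\text{recoverable oil} = \text{oil in place} \times R
\]

For a field holding 1 billion barrels of oil in place, a traditional recovery factor of \(R = 0.35\) yields 350 million barrels. Raising \(R\) to 0.45 adds another 100 million barrels from the same rock, with no new discovery required. Applied across thousands of fields, modest gains in \(R\) are the source of estimates like the 125 billion barrels cited above.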



THE SUPERGIANT

In the 2000s, the imminent decline of output from Saudi Arabia became a central tenet of peak oil theory. The argument focused on the supergiant Ghawar field, the largest oil field in the world. The first well was drilled in Ghawar in 1948, ten years after the original discovery of oil in Saudi Arabia. It took decades to really understand the extent of this extraordinary field, made more complicated by the fact that it is actually a network of five fields, which have been developed over decades owing to Ghawar’s colossal size. The latest segment went into development only in 2006.17

The contention that Saudi Arabia’s overall production is in decline is somewhat odd, for Saudi capacity has increased in recent years. After more than sixty years, Ghawar is still, in the words of Saudi Aramco President Khalid Al-Falih, “robust in middle age.” Investment requirements are going up. But at a production rate of over 5 million barrels per day, Ghawar continues to be highly productive. The application of new technologies continues to unlock resources and open up new horizons.18



DISCOVERIES VERSUS ADDITIONS

As proof for peak oil, its advocates argue that the discovery rate for new oil fields is declining. But this obscures a crucial point. Most of the world’s supply is not the result of discoveries but of revisions and additions. When a field is first discovered, very little is known about it, and initial estimates are limited and generally conservative. As the field is developed, better knowledge emerges about its reserves and production. More wells are drilled, and with better knowledge, proven reserves are very often increased.

The difference in the balance between discoveries and revisions and additions is dramatic. According to one study by the United States Geological Survey, 86 percent of oil reserves in the United States are the result not of what is estimated at time of discovery but of the revisions and additions that come with further development. The difference was summed up by Mark Moody-Stuart, the former chairman of Royal Dutch Shell, recalling his own days as an exploration geologist out in the field: “We used to joke all the time that much more oil was discovered by the petroleum engineers, developing and expanding the fields, than by us explorers, who actually found the fields.”

The examples provided by many fields and basins point to another fundamental weakness of Hubbert’s argument and its application to the entire world. In 1956 Hubbert drew a bell-shaped curve; the decline side would be the mirror image of the ascending side. Indeed, he made it so sharp on both sides that for some years it was called “Hubbert’s Pimple.” Some oil fields do decline in this symmetrical fashion. Most do not. They eventually do reach a physical peak of production and then often plateau and more gradually decline, rather than falling sharply in output. As one student of resource endowments has observed, “There is no inherent reason why a curve that plots the history of production of a type of fossil energy should have a symmetrical bell-shaped curve.”19

The plateau is less dramatic. But, based on current knowledge, it is a more appropriate image for what is ahead than the peak. And the world is still, it would seem, many years away from ascending to that plateau.



HOW MUCH OIL?

At the end of 2009, after a year’s worth of production, the world’s proved oil reserves were 1.5 trillion barrels, slightly more than at the beginning of that year. That means that the discoveries and revisions and additions were sufficient to replace all the oil that was produced in 2009—a pattern common to many years. Replacing that production is one of the fundamental jobs of the worldwide oil industry. It is challenging and requires enormous investment—and a long time horizon. Work on a field whose reserves were judged proved in 2009 might have begun more than a decade earlier. Replacing reserves is made even more challenging by the natural decline rate of oil fields—on a worldwide basis, about 3 percent a year.
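A rough arithmetic illustration, using as an assumption the roughly 93 million barrels per day of world liquids capacity cited a few paragraphs below, shows what a 3 percent annual decline rate implies:

\[ 0.03 \times 93 \text{ mbd} \approx 2.8 \text{ mbd} \]

That is, nearly 3 million barrels per day of new capacity must be brought on line each year simply to offset decline, before meeting any growth in demand.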

What are the prospects for the future? One answer is drawn from an analysis using a database that includes 70,000 oil fields and 4.7 million individual wells, combined with existing production and 350 new projects. The conclusion is that the world is clearly not running out of oil. Far from it. The estimates for the world’s total stock of oil keep growing.

The world has produced about 1 trillion barrels of oil since the start of the industry in the nineteenth century. Currently, it is thought that there are at least 5 trillion barrels of petroleum resources, of which 1.4 trillion are sufficiently developed and technically and economically accessible to count as proved plus probable reserves. Based upon current and prospective plans, it appears that world liquids production capacity should grow from about 93 million barrels per day in 2010 to about 110 mbd by 2030—an increase of about 20 percent.20

But there are many buts, beginning with all the political and other aboveground risks that have been enumerated earlier. Moreover, attaining such a level in 2030 will require further development of current and new projects, which in turn requires access to the resources. Without access, the future supply picture becomes more problematic.

[Chart: World Liquids Production, 1946–2011, in millions of barrels per day. Source: IHS CERA, EIA]


Achieving that level also requires the development of more challenging resources and a widening of the definition of oil to include what are called non-traditional or unconventional oils. But things do not stand still. With the passage of time, the unconventionals become, in all of their variety, one of the pillars of the world’s future petroleum supply. And they help explain why the plateau continues to recede into the horizon.


12

UNCONVENTIONAL

H. L. Williams was both a spiritualist and a shrewd businessman. In the 1880s he began to organize séances on a ranch he had bought south of Santa Barbara, California, which he had named Summerland. He also went into real estate. He wrote other spiritualists, promising that Summerland could be “a beacon light to the world” and that there they could “better both the spiritual and material condition of mankind.” To make it easy for prospective members to gather for séances and summer camps, he sold them lots to build their own cottages for $25 each. But soon the lots were being feverishly resold for up to $7,500 each. Oil had been discovered beneath the lots.

Williams jumped into the oil business. The most productive wells were the ones closest to the beach. Why not go right out into the ocean? Williams built a series of piers and began drilling into the seabed.

Unfortunately, the offshore drilling did not work out that well, and production petered out within a decade or so. The piers were left derelict for many decades until they were finally washed away in a fierce storm. Yet while Summerland never fulfilled Williams’s great vision, he had achieved something else. He had pioneered offshore drilling.1

Today about 30 percent of total world oil production—26 million barrels per day—is produced offshore, in both shallow and deep waters. Total global deepwater output in 2010 was almost 6 million barrels per day—larger than the production of any single country except Saudi Arabia, Russia, and the United States. Altogether, deepwater production could reach 10 million barrels per day by 2020.

Deepwater production is one of the building blocks of what is known as unconventional supply. These unconventionals are a varied lot. What joins them is that their development depends on the advance of technology. The unconventionals are an important part of today’s petroleum supply and will become even more important in the future.



LIQUIDS WITH GAS

The biggest source of unconventional oil is something that has been part of the energy business for a long time, though it is not very well known. These are the liquids that accompany the production of natural gas. Condensates are captured from gas when it comes out of the well. Natural gas liquids are separated out when the gas is processed for injection into a pipeline. Both are similar to high-quality light oils.

Their output is increasing very fast, owing to the growth of natural-gas production worldwide and the building of new facilities in the Middle East. In 2010 these gas-related liquids added up to almost 10 million barrels per day. By 2030 they could be over 18 million barrels per day, roughly 15 percent of total world oil—or liquids—production.2



OUT OF SIGHT OF LAND

In the first decades of the twentieth century, following the early efforts of H. L. Williams and other pioneers, oil had continued to move offshore, but offshore had been limited to platforms in lakes in Texas and Louisiana and in Venezuela’s oil-rich Lake Maracaibo.

Drilling out in the ocean on freestanding platforms, subject to wave pressures and the tides, was an altogether different matter. After World War II, an independent company named Kerr-McGee decided to go out to sea because it figured that its best shot at “real class-one” acreage was offshore—mainly because the larger companies thought drilling offshore, out of sight of land, was probably impossible.

On a bright Sunday morning in October 1947, working ten and a half miles offshore with a cobbled-together little flotilla of surplus World War II ships and barges, Kerr-McGee struck oil. “Spectacular Gulf of Mexico Discovery,” headlined Oil and Gas Journal. “Revolutionary” was its judgment.3

An extended legal battle between the federal government and the coastal states, which went all the way up to the Supreme Court, slowed the development of the offshore industry in the United States. The fight was over turf—that is, over to whom the waters “belonged” and thus to whom the royalties and tax revenues would go. One result was the invention of the concept of the outer continental shelf—the OCS—which was deemed the exclusive province of the federal government. The coastal waters of the states extended out just three miles—except in the cases of Florida and Texas, both of which had the heft to wrest nine miles from their struggle with Washington. By the end of the 1960s, the shallow waters of the offshore were starting to become a significant source of oil.

In January 1969 drillers at work on a well off the coast of Santa Barbara, not far from the original Summerland play, lost control. The well suffered a blowout, an uncontrolled release of oil. The well itself was capped. But then oil started to leak through a nearby fissure, creating an oil spill that blackened local beaches, put a halt to new drilling off the coast of California, and increased offshore regulation. The ooze on the beaches—and on oil-soaked birds—became one of the emblematic images in the nation’s new environmental consciousness. Santa Barbara also marked the beginning of a never-ending battle over offshore drilling that pitted environmental activists against oil and gas companies.



THE NORTH SEA AND THE BIRTH OF NON-OPEC

Yet nine months after Santa Barbara, toward the end of 1969, a new era opened in waters much harsher and more challenging than those off Santa Barbara—the stormy North Sea, between Norway and Britain. By then, oil companies had drilled 32 expensive wells in the Norwegian sector of the North Sea. All had come up dry. One of the companies, Phillips Petroleum, after drilling yet another dry hole, was about to give up and go back home to Bartlesville, Oklahoma. But then it decided to drill one more well—since it had already prepaid for the drilling rig. At the end of October 1969, it struck the Ekofisk oil field. It turned out to be a giant.

The offshore industry developed with remarkable speed—spurred by the 1973 oil embargo and the quadrupling of price, and by the push by Western governments for the development of secure, new sources of oil. Giant platforms, really mini-industrial cities, were built, some of them hundreds of miles out at sea. These structures, and the infrastructure that supported them, had to be designed to withstand winds up to 130 miles per hour and the terrifyingly destructive “100 Year Wave.” The North Sea came on line extraordinarily fast. By 1985 the North Sea—British and Norwegian sectors combined—was producing 3.5 million barrels per day, and it had become one of the pillars of what had already become known as “non-OPEC.”



TO THE FRONTIER

The North Sea was still in relatively shallow waters. In the United States, it seemed as though the “offshore” had gone about as far as it could—into depths of 600 feet of water, at the edge of the continental shelf. Beyond that the seabed falls away sharply, to depths of thousands of feet, which seemed well beyond the reach of any technology. Despondent about what seemed bleak future prospects, oilmen began to refer to the Gulf of Mexico as the “dead sea.”

But a few companies were trying to find a way to push beyond the shallow waters—both in the Gulf of Mexico and elsewhere, most notably the Campos Basin off the southeast coast of Brazil. Petrobras, Brazil’s state-owned oil company, was charged with reducing the nation’s heavy dependence on petroleum imports. In 1992, after years of work, Petrobras broke the deepwater barrier by successfully placing the Marlim platform in 2,562 feet of water.

Meanwhile, Shell Oil was using new seismic technologies to identify promising prospects in the deeper waters of the Gulf of Mexico. In 1994 its Auger platform—which towered twenty-six stories above the sea—went into production in 2,864 feet of water. It had taken nine years from the acquisition of the leases and an expenditure of $1.2 billion, and even within Shell it had been regarded as a huge gamble. Yet the resource proved much richer than anticipated, and eventually the complex was producing over 100,000 barrels a day. Auger opened up the deepwater frontier in the Gulf of Mexico and turned it into a global hot spot of activity and technological advance. The federal government’s lease sales for the deep waters of the Gulf of Mexico led to intense competition among companies for prospects. The bonus payments and royalties made it a major revenue source for the government.4



The growth of the deepwater sector worldwide was extraordinary—from 1.5 million barrels a day in 2000 to 5 million by 2009. By that point, some 14,000 exploratory and production wells had been drilled in the deep waters around the world. It became customary to describe deepwater production as the great new frontier for the world oil industry. Among the most prospective areas were the corners of what was called the Golden Triangle—the waters off Brazil and West Africa and the Gulf of Mexico. By 2009 the shallow and deep waters of the Gulf of Mexico together were supplying 30 percent of U.S. domestic oil production. That year, for the first time since 1991, U.S. oil production increased, instead of declining, and the deepwater was the largest single source of growth. In fact, in 2009 the Gulf of Mexico was the fastest-growing oil province in the world.5



DEEPWATER HORIZON

On the morning of April 20, 2010, a helicopter took off from the Louisiana coast and headed out over waters so smooth as to be almost glassy. Its destination was the Deepwater Horizon, a drilling platform operating 48 miles off the Louisiana coast. A fifth-generation semisubmersible drilling rig, the Deepwater Horizon was a marvel of scale and sophisticated engineering. The passengers that morning included executives from Transocean, which owned the drilling rig, and BP, which had had the rig under contract since it was launched nine years earlier. They were flying out to honor the Deepwater Horizon and its team for its outstanding safety record.

The location was Mississippi Canyon Block 252, on a prospect known as Macondo. The Deepwater Horizon had been on site for eighty days. The well had descended through almost five thousand feet of water and then had pushed on through more than another 13,000 feet of dense rock under the seabed, where it had made another major Gulf of Mexico discovery. It was now almost at the end of the job. All that was left to do was plug the well with cement, and then the rig would move on to another site. At some later date, when a permanent production platform was in place, the Macondo well would be unplugged and would begin producing. The crew had encountered some frustrating problems along the way, notably what were called gas kicks from pockets of natural gas. At times Macondo had been called the “well from hell.” But now all that seemed behind them.

A decade earlier, Macondo would have been at the very edge of the frontier, but by 2010 the frontier in the Gulf of Mexico had moved beyond Macondo to discoveries as deep as 35,000 feet—twice that of Macondo.

Now, on board the Deepwater Horizon, it was a matter of wrapping up over the next few days—highly exacting and technically complex work, but also familiar in terms of what needed to be done. The night before, April 19, it had been decided to dispense with a cement bond log, which would have provided critical data to determine whether the well was sealed in a secure way. It was deemed unnecessary. Overall, things seemed to be proceeding normally.

At 7:55 p.m. the evening of April 20, final tests were concluded on the pressure in the well. After some discussion, the results were judged satisfactory. That was a misinterpretation. For deep down in the earth, many thousands of feet below the seabed, something insidious, undetected, was beginning to happen. Oil and, even more dangerous, gas were seeping through the cement that was meant to keep the well sealed.

At 9:41 p.m., the captain of a neighboring ship, the Damon Bankston, saw mud shooting up above the drilling rig with extraordinary force. He hurriedly called the Deepwater Horizon. The officer on the bridge told him there was “trouble” with the well and to pull away as fast as possible. Then the line went dead.



“WE HAVE A SITUATION”

On the rig itself, one of the drillers called a superior in a panic. “We have a situation. The well has blown out.” People began to scramble, but the response in those critical minutes was hampered by confusion, poor communication, unclear information, and lack of training for that kind of extreme situation.

Yet there was still one last wall of defense—the 450-ton, 5-story-tall blowout preventer sitting on the ocean floor. Equipped with powerful pincerlike devices called shear rams, it was meant to slice into the pipe and seal the well, containing any potential blowout of surging oil and gas. It was the final line of protection, the fail-safe device if all else failed. The blowout preventer was activated. The unimaginable happened. The pincers failed to cut fully through the pipe—missing by 1.4 inches.

At about 9:47 p.m. there was a terrifying hissing sound. It was the worst sound the crew could possibly hear. It meant that gas was escaping up from the well. The gas encountered a spark. At 9:49 a thundering explosion rocked the rig, then a second blast, and then a series of others. The rig lost all its power. It heaved and shook violently. Whole parts of the structure were blown to pieces; stairways crumbled and disappeared altogether. Workers were tossed this way and that. The entire rig was engulfed in fierce flames.

Some crew members dove directly into the sea. Many piled into the two lifeboats, some dreadfully injured and in awful pain, and eventually made it to the Damon Bankston. Others were pulled from the sea. The Coast Guard arrived just before midnight and began a search-and-rescue mission. On April 22, two days after the accident, the Deepwater Horizon, gutted and deformed, sank. The next day the search for additional survivors was called off. Eleven of the 126 crew members had perished.6



THE RACE TO CONTAIN

At the time of the accident, no established methods existed for staunching the flow from a deepwater blowout, other than the proper operation of the blowout preventer. If it failed, the only option was to drill a relief well that would intercept the damaged well so that it could be sealed. But that would take three months or more. Both industry and government seem, in retrospect, to have assumed that a catastrophe of such dimensions was impossible. It was an accident, said BP’s then chief executive Tony Hayward, that “all our corporate deliberations told us simply could not happen.”7

Over recent decades, a handful of serious accidents and major blowouts had occurred. The worst in terms of loss of life was a fire on the Piper Alpha platform in 1988, off the coast of Scotland, that took 167 lives. That disaster led to major reforms in North Sea regulation and safety practices. The last big blowout in the Gulf of Mexico had been at a Mexican well in the Gulf of Campeche, off the Yucatán, in 1979. In August 2009, a well in the Timor Sea between Australia and Indonesia spilled up to 2,000 barrels a day for ten weeks. But no noteworthy blowouts had occurred in U.S. waters since Santa Barbara in 1969. Between 1971 and 2009, according to the U.S. Department of the Interior, the total amount of oil spilled in federal waters as the result of blowouts was a minuscule 1,800 barrels—an average of 45 barrels a year.8

But now the unthinkable had happened, and the flow had to be stopped. The result was an overdrive process of high-tech engineering improvisation by BP, its contractors, other companies, outside specialists, government experts, and government scientists who knew little about oil to begin with but quickly became experts.

A whole host of approaches for stemming the flow were tried. They all failed. Finally, in mid-July, eighty-eight days after the accident, a newly designed capping stack was put in place. That ended the spill. No more oil was leaking out of the Macondo well. Two months later, on September 19, after the relief well connected with the original well, the government pronounced Macondo “effectively dead.”9



“FIGHTING THE SPILL”

In the Gulf itself, the fishing industry, whose boats could not go out, was hardest hit economically, along with tourism at beach resorts. The marshy coastal waters of Louisiana were among the areas worst affected.

As with the blowout itself, both government and industry were unprepared to deal with the environmental consequences. The Oil Pollution Act and the Oil Spill Liability Trust Fund had been established two decades earlier, in the aftermath of the Exxon Valdez accident in Alaska, to respond to an accident involving a tanker. But the loss of oil from a tanker, however serious, was a finite affair. A tanker only held so much oil.

The response to a blowout on this scale had to be invented. A vast navy of ships of all sorts, 6,700 in all, was deployed to intercept and capture the oil; onshore, a small army was similarly raised to clean up the beaches. Altogether, the cleanup campaign enlisted 45,000 people.

Some said that it would take decades for the Gulf to recover and that some parts of it might never recover. But in August 2010, the National Academy of Sciences estimated that three quarters of the spilled oil had already evaporated, dissolved, or been captured. It was becoming clear that the consequences of Macondo would not be as severe as had first been feared.10

The sea itself provided a major solution. The natural seepage of oil from fissures in the bottom of the Gulf—estimated to be as much as a million barrels of oil a year—combined with the warm waters, had nurtured microbes known as hydrocarbonoclastic bacteria, whose specialty is feasting on oil. For them, Macondo oil was an unexpected bonanza, and they went to work on it. As a result, the oil biodegraded and disappeared much faster than had been expected. On September 20, 2010, the day after the official announcement that the well had been killed, the New York Times reported that the environmental consequences were proving far less long-lasting than feared. “As the weeks pass, evidence is increasing,” said the Times, that “the gulf region appears to have escaped the direst predictions of the spring.”11 Over the next several months, further research confirmed that the microbes had eliminated much of the oil and gas that had leaked from the well. As one scientist put it, “The bacteria kicked on more effectively than we expected.”12

Many uncertainties about the longer-term consequences remain—as to whether a damaging carpet of Macondo oil has settled over the Gulf’s floor around the well, the impact on the delicate marshes and wetlands along the coast, and the long-term effect on aquatic life and wildlife. Only time will tell.



THE GOVERNMENT AND THE COMPANY

For many years, 85 percent of the U.S. outer continental shelf had been closed to drilling. On March 31, 2010, three weeks prior to the accident, President Barack Obama had begun the process of opening areas off the coast of Virginia and in the eastern Gulf for future exploration. The opposition from his own political base was intense. After the accident, these areas were quickly withdrawn and once again put off-limits.13

The Obama administration placed a moratorium on all drilling in the Gulf of Mexico. In due course, the moratorium was officially lifted. But it seemed clear that a de facto slow pace was going to prevail for some time, the result of more thorough reviews and re-reviews, more complex and time-consuming regulation, and a slowing down, possibly an outright immobilization, of decision making. The administration also reorganized the regulatory apparatus for the offshore to avoid any hint of “coziness” between regulators and industry. Safety officials now had to carry their own lunches when they flew a couple of hundred miles out to inspect platforms, and they were prohibited from accepting anything once there, even a bottle of cold water on a hot day.



The accident and its consequences demonstrated that the abilities to explore and produce in the deep water had run ahead of the capacity to deal with a failure of all the safety systems. Under extreme duress, the learning about what to do had been compressed from years into months. Several companies came together in the aftermath to establish, with an initial billion dollars, a nonprofit Marine Well Containment Company that would have the skills and tools, in the event of a major accident, to close a well quickly and clean up the spill. Two dozen other companies formed the Helix Well Containment Group, a deepwater containment consortium that can rapidly provide expertise and equipment in the event of an accident. Helix is the company whose equipment was actually used to shut the Macondo well.

As to the cause of the accident, the conclusion (as is so often the case in a postmortem on a major accident) is that the cause was not one thing but rather a series of errors, omissions, and coincidences—in human judgment, engineering design, mechanics, and operations—all interacting to build to a crescendo of disaster. Had any single one of them not occurred, there might not have been a disaster.14

That was certainly the conclusion of the national commission appointed by President Obama. “The well blew out because a number of separate risk factors, oversights, and outright mistakes combined to overwhelm the safeguards meant to prevent just such an event from happening,” it said. The commission continued, “But most of the mistakes and oversights at Macondo can be traced back to a single overarching failure—a failure of management.” It added, “A blowout in deep water was not a statistical inevitability.” The diagnoses and debate about what had gone wrong—and what could be learned from the experience—will go on for years.15

The resource-rich deep waters of the Gulf of Mexico will likely remain one of the main pillars of domestic U.S. energy supply. The offshore oil industry has considerable economic as well as energy significance. In 2010 about 400,000 jobs depended upon the offshore industry just in the four Gulf states of Texas, Louisiana, Mississippi, and Alabama. Moreover, the offshore oil and gas industry could generate as much as a third of a trillion dollars of government revenues in taxes and royalties over a ten-year period.16

But the Gulf of Mexico was clearly going to be quieter and less active, at least for a few years ahead. In response, some of the drilling rigs, the workhorses of exploration, began to leave the Gulf and migrate to other parts of the world that still saw the deep water as one of the great frontiers of world energy.



THE PRESALT: THE NEXT FRONTIER

The most obvious destinations were the other corners of the Golden Triangle—West Africa and, more than anything else, Brazil. By this time, Brazil had already leapfrogged ahead of the United States to become the world’s largest deepwater producer. “We had to find oil,” said José Sergio Gabrielli, the president of Petrobras. “We didn’t find any onshore and so we had to go offshore.” Today Brazil is on track to become one of the world’s major oil producers, exceeding Venezuela, which for almost a century has been the dominant producer in Latin America. The reason is a major advance in capabilities that has opened up a massive new horizon.

The offshore Santos Basin stretches 500 miles, paralleling the southern coast of Brazil. Beneath the seabed is a layer of salt, averaging more than a mile thick. Oil had been produced beneath salt in other areas, including the Gulf of Mexico, but never through so large a section. It was thought that there might be oil below the salt layer in the Santos Basin, but it seemed impossible to do the seismic work—mapping the underground structures—because the salt dispersed the seismic signals so much that they could not be interpreted. “The breakthrough was pure mathematics,” said Gabrielli. “We developed the algorithms that enabled us to take out the disturbances and look right through the salt layer.”

The first discovery was the Parati field. Petrobras was also drilling with its partners BG and Galp at the Tupi prospect—the most difficult well the company had ever undertaken. It cost $250 million and went through 6,000 feet of water and then another 15,000 feet under the seabed. It required significant new technologies to cope with the peculiarities of the salt layer, which, like sludge, keeps shifting.

When Guilherme Estrella, Petrobras’s head of exploration, reported to the board on the outcome of the well, he began with a long discussion of what had happened 160 million years ago, when the continents of Africa and South America pulled apart, depositing the salt above the oil reservoirs, which were already in place and thus became known as the presalt.

“As we listened to him,” said Gabrielli, “we thought that Estrella is a great geologist, but that he was dreaming. But then he told us the numbers, and we were thrilled.”

That well had discovered a supergiant field—at least 5 billion to 8 billion barrels of recoverable reserves—the biggest discovery since Kashagan in Kazakhstan in 2000. As other wells have been drilled, it has become clear that the presalt in the Santos Basin could be a huge new source of oil. Brazil’s then president, Luiz Inácio Lula da Silva, described it as “a second independence for Brazil.”17

If development proceeds more or less as planned and there are no major disappointments, Brazil could, within a decade and a half, be producing close to six million barrels per day, which would be twice the current output of Venezuela. The investment would be huge—half a trillion dollars or more—but it would catapult Brazil to the top rank among the world’s oil producers, making it one of the foundations of world supply in the decades ahead.



FROM FRINGE TO MAINSTREAM: CANADIAN OIL SANDS

In April 2003, a few weeks after the start of the Iraq War, a U.S. Senate hearing convened to examine international energy security issues. The chairman of the foreign relations subcommittee was startled by what he heard. “Something very dramatic has happened that people have not much focused on,” said one witness. It was “the first major increase in world oil reserves since the mid-1980s.” But it was not in the Middle East. It was often said that Iraq had the second-largest oil reserves in the world. But that was no longer true. Canada had just made an extraordinary upward adjustment in its proven oil reserves—from 5 billion barrels to 180 billion, putting it in the number two position, right after Saudi Arabia.18

At first, surprise, even skepticism, greeted the Canadian announcement. But it has come to be generally accepted in the years since. This particular unconventional petroleum resource—Canadian oil sands—also happens to be strategically placed on the doorsteps of the United States.

For many years, oil sands—sometimes called tar sands—had seemed, at best, almost beyond the fringe of practicality and were generally dismissed as of little significance. Yet over the last few years, the oil sands have proved to be the fastest-growing source of new supplies in North America. Their expanding output will push Canada up in the rankings to be the fifth-largest oil-producing country in the world. The significance for the United States is great. If the “oil sands” were an independent country, they would be the largest single source of U.S. crude oil imports.19

The oil sands are found primarily in the northern part of the Canadian province of Alberta, including an area known as the Athabasca region. These sands are composed of viscous bitumen embedded in sand and clay. This asphaltlike bitumen, a form of very heavy oil, is a solid that for the most part does not flow like conventional oil. That is what makes its commercial extraction so challenging. But when the weather is warm, a little bit of the bitumen does ooze out of the ground as thick, tarlike liquid. In earlier centuries local Indians would use that seep to caulk their canoes.

In the first decades of the twentieth century, a few scientists intrigued by these seeps, along with promoters lured by the visions of riches, began to make the trek to the Athabasca River in northern Alberta and the isolated outpost of Fort McMurray—a cluster of a dozen log buildings connected to the outside world by mail delivery four times a year, weather permitting. The expeditions found indications that the sprawling swampy lowlands around Fort McMurray were rich in oil sand deposits, but there was no obvious way to extract the resource. In 1925 a chemist at the University of Alberta finally found a solution for separating the bitumen from the sand and clay and getting it to flow—but only in his laboratory. Decades of research failed to overcome the baffling challenge of extracting a liquid oil out of the sands in any commercial way.

But a few refused to give up on the oil sands. One of them was J. Howard Pew, the chairman of Sun Oil, who, as one of his colleagues said, was “enamored of the resource up there.” In 1967 Sun launched the first at-scale oil sands project. “No nation can be long secure in this atomic age unless it is amply supplied with petroleum,” said Pew. “Oil from the Athabasca area must of necessity play an important role.” The sands at what was called the Great Canadian Oil Sands Project were mined, and then treated above ground so as to turn the bitumen into a liquid. But for many years the results from the Great Canadian Oil Sands were anything but great. The venture encountered one engineering problem after another.20

In addition to the great technical challenges, the operating conditions were daunting. In the winter, the temperature dropped to –40°F. The swampy terrain, known as muskeg, freezes so hard that a truck can be driven on it. In the spring, it turns into a bog so soft that a truck can sink into it and be lost.

The business environment was also tough. In the 1970s Canada adopted a highly nationalistic, high-tax national energy policy. It may have reflected the temper of the time, but it was ill suited for a high-risk, multiyear, multibillion-dollar enterprise. Development stalled as companies packed up and went elsewhere to invest.



MEGA-RESOURCE

It was not until the late 1990s that the oil sands finally began to prove themselves as a large-scale commercial resource, facilitated by a crucial tax reform, by less-rigid government intervention, and by major advances in technology. The mining process was modernized, expanded in scale, and made more flexible. Fixed conveyor belts were replaced with huge trucks with the biggest tires in the world, and with giant shovels that gather up the oil sands and carry them to upgraders that separate out the bitumen. Refining processes then upgrade the bitumen into higher-quality synthetic crude oil, akin to light, sweet crude oil, which can be processed in a conventional refinery into gasoline, diesel, jet fuel, and all the other normal products.

At the same time, a breakthrough introduced an alternative way of producing oil sands—not with mining but rather in situ (Latin for “in place”); that is, with the crucial link in the production chain done in place—underground. This was very significant for many reasons, including the fact that 80 percent of the oil sands resource is too deep for surface mining.

The in situ process uses natural gas to create superhot steam that is injected to heat the bitumen underground. The resulting liquid—a combination of bitumen and hot water—is fluid enough to flow into a well and to the surface. The best-known process is SAGD—steam-assisted gravity drainage, pronounced “sag-dee.” It has been described as “the single most important development in oil sands technology” in a half century.21

Altogether, since 1997, over $120 billion of investment has flowed into Alberta’s oil sands, now defined as a “mega-resource.” Oil sands production more than doubled from 600,000 barrels per day in 2000 to almost 1.5 million barrels per day in 2010. By 2020 it could double again to 3 mbd—an output that would be higher than the current oil production of either Venezuela or Kuwait. Adding in its conventional output, Canada could reach almost 4 mbd by 2020.



Yet the development of oil sands brings its own challenges. The projects are large industrial developments in relatively remote areas. In terms of new oil development, they are among the highest in cost, especially when competition heats up for both labor and equipment. The offsetting factors are that there is no exploration risk, that the resource does not deplete in the way a conventional oil well does, and that the projects will have a very long life.

One environmental challenge arises from the local impacts of mining development, which are visually dramatic. But they are also limited. To date, the entire footprint from mining oil sands adds up to about 230 square miles of land in Alberta, a province about the size of Texas. When part of a surface mine is exhausted, the operators are required to restore the land to its original condition. Mining wastes, a sort of yogurtlike sludge, are deposited in tailing ponds. These toxic ponds, like the rest of the industry, are regulated by the province. Recently the regulatory authorities have required new processes to further reduce the impact of these pools. Altogether the tailing ponds cover an area equivalent to about 66 square miles.22

The other significant environmental issue is definitely not local and is also the most controversial. This is greenhouse gas emissions, in particular carbon dioxide (CO2), associated with the in situ production process. These emissions are higher than the emissions released from the production of the average barrel of oil because of the heat that must be generated underground to get the bitumen to flow.

How much greater is the impact compared with conventional oil? The best way to assess the impact is from a “well to wheels” analysis. That measures the total CO2 emitted along the entire chain, from the initial production to what is burned in the auto engine and comes out the tailpipe. A range of studies finds that a barrel of oil sands adds about 5 to 15 percent more CO2 to the atmosphere than an average barrel of oil used in the United States. The reason the difference is so small is that, by far, most of the CO2 is produced by the combustion in an auto engine and comes out of the tailpipe.23
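A stylized calculation illustrates the logic. Suppose, purely for illustration (the 20 percent share here is an assumed round number, not a figure from the studies cited), that production accounts for about 20 percent of a conventional barrel’s well-to-wheels emissions, with refining and combustion accounting for the rest. If oil sands production emits 50 percent more CO2 at the production stage, the well-to-wheels total rises by only

\[ 0.20 \times 0.50 = 0.10 \]

that is, about 10 percent, squarely within the 5 to 15 percent range that the studies report.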

The technologies for producing oil sands continue to evolve, and increasing ingenuity is being applied to shrinking the environmental footprint and reducing the CO2 emissions in the production process. As the industry grows in scale, it will require wider collaboration on the R&D challenges not only among companies and the province of Alberta but also with Canada’s federal government.

Yet the very scale of the resource, and its reliability, puts a premium on the continued evolution of this particular industry. Oil sands are, after all, an enormous resource. The 175 billion barrels of recoverable oil sands are only about 10 percent of the estimated 1.8 trillion barrels of oil sands “in place.” The development of the other 90 percent requires further technological progress.



ABOVEGROUND RISKS

The only other concentration of unconventional oil resources in the entire world that rivals Canada’s oil sands is the Orinoco belt in the interior of Venezuela. There, too, the oil is in the form of bitumen embedded in clays and sands. With new technologies and a good deal of investment, the potential output of the Orinoco is huge. Yet what might have been anticipated in terms of supplies from the Orinoco has been much reduced in recent years—not because of limits of the resource itself but because of what has happened aboveground.

May Day, 2007, began in Venezuela with a show of strength. The army swept in to seize oil facilities in the Faja, the Orinoco Oil Belt. This was a prelude to the moment when President Hugo Chávez, dressed in red fatigues, took to the platform in the industrial complex of José to announce to assembled oil workers what was already obvious—he was taking over this vast industrial enterprise. “This is the true nationalization of our natural resources,” he proclaimed as jets streaked overhead. To underline the point, behind him hung a giant banner that read, “Full Oil Sovereignty. Road to Socialism.” His audience was oil workers who had traded their normal blue helmets for revolutionary-red helmets and had donned red T-shirts celebrating nationalization.

This was one of a long series of steps by Chávez to subordinate the country’s political institutions and economy to his Bolivarian Revolution. But the Orinoco was a unique prize. Covering 54,000 square miles and stretching 370 miles, it contains an estimated 513 billion barrels of technically recoverable oil. But that is far larger than what is currently economically recoverable. And, as in Canada, the overall potential is still that much greater—as much as 1.3 trillion barrels.

The Orinoco’s bitumen is very difficult to produce. Like the oil sands in Canada, the extra heavy oil (EHO) of the Orinoco Belt is so heavy and gunky that it cannot easily flow. Limited production began in the 1970s, but was greatly constrained by costs and technology.

To extract significant amounts of resource and then refine it into flowing oil would require a great deal of investment and advanced technology. In the 1990s Venezuela had neither. The Orinoco was too big and complex for the state oil company, PDVSA, to go it alone. The Orinoco became the most high-profile part of the petroleum opening, or la apertura, under which in the 1990s Venezuela invited international companies back as partners or service providers.

A half dozen international companies partnered there with PDVSA, investing upwards of $20 billion. They also pushed the technology. Within a decade, the joint ventures had gone from nothing to more than 600,000 barrels a day, with the promise of much more to come.

But with Chávez’s Bolivarian Revolution, it was clearly only a matter of time before the Orinoco was taken over. And what better day than May Day to announce, as Chávez did, that the Orinoco had to be nationalized “so we can build Venezuelan socialism.” He declared, “We have buried the ‘petroleum opening.’ ” And for good measure, he thundered, “Down with the U.S. empire.”



Some of the Western companies remained, but in more subordinate roles. New operators—Vietnamese and Russians, among others—came in. The Venezuelan government held out the objective of tripling the Orinoco’s output to 2 million barrels per day by 2013. Others questioned if even current production levels could be maintained, given the financial and technical challenges. After all, oil output elsewhere in Venezuela was already in decline because of lack of investment and loss of managerial talent.

Still, May 1, 2007, was a day of triumph for Chávez. It was a little more uncertain for the workers, who had to listen to his speech for an hour and a half under the hot sun and were unsure about their new owner. “Our bosses made us come,” said one worker. “We didn’t want to get fired.” And, to make sure that everyone showed up, attendance was taken on the buses that ferried them to the speech.

And so there, under that hot sun at the José industrial complex, was both the spectacle of another victory for the Bolivarian Revolution and its leader and, at the same time, a very visible demonstration, amid one of the world’s richest concentrations of resources, of the meaning of aboveground risk—in this case clad in revolutionary red.24



MOTHER NATURE’S PRESSURE COOKER

Despite the diversity of unconventional oils, a common theme ties them together: each is about finding a way to unlock resources whose existence may have long been recognized but whose recovery on a commercial scale had seemed impossible.

Those breakthroughs are yet to happen with what is called oil shale. Oil shale contains high concentrations of the immature precursor to petroleum, kerogen. The kerogen has not yet gone through all the millions of years in Mother Nature’s pressure cooker that would turn it into what would be regarded as oil. The estimates for the oil shale resource are enormous: 8 trillion barrels, of which 6 trillion are in the United States, much of it concentrated in the Rocky Mountains. During the gasoline famine of World War I, National Geographic predicted that “no man who owns a motor-car will fail to rejoice” because this oil would provide the “supplies of gasoline which can meet any demand that even his children’s children for generations to come may make of them. The horseless vehicle’s threatened dethronement has been definitely averted.” But then early hopes for oil shale were completely buried by its high costs, lack of appropriate technology, and an abundance of conventional oil.

At the end of the oil crisis decade of the 1970s, amid the panic and shock of the Iranian Revolution, a vigorous campaign was launched in Washington, D.C., to create a new industry that would provide 5 million barrels per day of synthetic fuels and, in addition, give the nation “a psychological lift of ‘doing something’ instead of just doing without.” The Carter administration instituted an $88 billion program to develop those “synfuels” as the way to ensure energy independence. Oil shale was at the top of the list. Petroleum companies announced major projects. But within a couple of years, the projects were abruptly terminated. The oil shale campaign was done in by the rising surplus of petroleum in the world market, the falling price, and the way the costs of developing oil shale were skyrocketing—even before any commercial production had begun.25

Yet today a few hardy companies, large and small, are at work on oil shale again. They are still trying to find new and more economic approaches for speeding up nature’s time machine and turning kerogen into a commercial fuel without having that several-million-year wait. One line of research parallels the in situ process for oil sands and seeks to heat the kerogen underground.

There are still other types of unconventional oils that may grow in scale and importance over the next few years, notably oil made by processing coal or natural gas. The former is done, notably, in South Africa; and the latter, in Qatar. Both require heavy engineering. But high costs hold back both processes from further significant expansion, at least so far.



TIGHT OIL

The newest breakthrough is opening the prospect of a big new source of oil, something that was not even expected a few years ago. This new resource is often called “shale oil,” a name easily confused with “oil shale,” which it decidedly is not. Thus, both for clarity’s sake and because it is found in other kinds of rock as well, it is becoming better known as tight oil. People have recognized for a long time that additional oil was locked inside shale and other types of rock. But there was no way to get this oil out—at least not in commercial volumes.

The key was found on the fringes of the industry, in a huge oil formation called the Bakken, which sprawls beneath the Williston Basin across North and South Dakota and Montana and into Saskatchewan and Manitoba in Canada. The Bakken was one of those places where smaller operators drilled wells that delivered just a few barrels a day. By the late 1990s, most people had given up on the Bakken, writing it off as “an economically unattractive resource.”26

But then the impact of the technology for liberating shale gas—horizontal drilling and hydraulic fracturing—became evident. “As shale gas began to grow, we asked ourselves, ‘Why not apply it to oil?’” said John Hess, CEO of Hess, one of the leading players in the Bakken. The new technologies worked. Companies rushed to stake out acreage, and a boom in tight oil began to sweep across the Bakken. Production in the Bakken increased dramatically, from less than 10,000 barrels per day in 2005 to more than 400,000 in 2010. In another several years it could be 800,000 barrels per day or even more.27

The technique is spreading. Formations similar to the Bakken, with such names as the Eagle Ford in Texas, Bone Springs in New Mexico, and Three Forks in North Dakota, are becoming hot spots for exploration.

Although tight oil is still in its early days, initial estimates suggest that there might be as much as 20 billion barrels of recoverable tight oil in the United States alone. That is like adding one and a half brand-new Alaska North Slopes, without having to go to work in the Arctic north and without having to build a huge new pipeline. Such reserves could mean as much as 2 million barrels per day of additional U.S. production by 2020, production that was not even anticipated half a decade ago. Although there has been hardly any calculation of the tight oil resources in the rest of the world, the numbers are likely to be substantial.
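A rough check, reading that comparison literally (an arithmetic sketch, not a figure from the estimates themselves): equating 20 billion barrels with one and a half North Slopes implies

\[ \frac{20}{1.5} \approx 13 \text{ billion barrels} \]

per North Slope, roughly in line with commonly cited estimates of the Alaska North Slope’s recoverable oil.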



What all the unconventional resources have in common is that they are not the traditionally produced onshore flowing oil that has been the industry staple since Colonel Drake drilled his well in Titusville in 1859. And they are all expanding the definition of oil to help meet growing global demand. By 2030 these nontraditional liquids could add up to a third of total liquids capacity. By then, however, most of these unconventional oils will have a new name. They will all be called conventional.28


[Map: Unconventionals: The New Geography of Oil and Gas—technology is unlocking what were previously unavailable energy resources. Source: IHS CERA]


13

THE SECURITY OF ENERGY

Energy security may seem like an abstract concern—certainly important, yet vague, a little hard to pin down. But disruption and turmoil—and the evident risks—demonstrate both its tangibility and how fundamental it is to modern life. Without oil there is virtually no mobility, and without electricity—and energy to generate that electricity—there would be no Internet age.

But the dependence on energy systems, and their growing complexity and reach, all underline the need to understand the risks and requirements of energy security in the twenty-first century. Increasingly, energy trade traverses national borders. Moreover, energy security is not just about countering the wide variety of threats; it is also about the relations among nations, how they interact with each other, and how energy impacts their overall national security.

The interdependence of energy has been a fact of international life for centuries. Beginning in the sixteenth century, the boom in the need for wood—used for shipbuilding and construction but, most important, for domestic heating—led to the integration of Norway and Sweden, and then North America to some degree, into the European economy.1

But the point at which energy security became a decisive factor in international relations was a century ago, in the years just preceding the First World War. In 1911 Winston Churchill, then First Lord of the Admiralty, made the historic decision, in his words, to base Britain’s “naval supremacy upon oil—that is, to convert the battleships of the Royal Navy from coal to oil.” Oil would make the ships of the Royal Navy faster and more flexible than those of Germany’s growing navy, giving Britain a critical advantage in the Anglo-German naval race. As Churchill summed it up, switching to oil meant “more gun-power and more speed for less size or cost.”2

But the move to oil created a new challenge: a daunting problem of supply. While the U.S. Navy was behind the Royal Navy in considering the move from coal to oil for its battleships, it at least could call on large domestic supplies. Britain had no such resources. Conversion meant that the Royal Navy would rely not on coal from Wales, safely within Britain’s own borders, but rather on insecure oil supplies that were six thousand miles away by sea—in Persia, now Iran.

Critics argued at the time that it would be dangerous and foolhardy for the Royal Navy to be dependent upon the risky and insecure nation of Persia—what one official called “an old, long-mismanaged estate, ready to be knocked down.” That was hardly a country on which to rely for a nation’s most vital strategic resource.

Churchill responded with what would become a fundamental touchstone of energy security: diversification of supply. “On no one quality, on no one process, on no one country, on no one route, and on no one field must we be dependent,” he told Parliament in July 1913. “Safety and certainty in oil lie in variety and variety alone.” That precept has proved itself again and again.3



THE RETURN OF ENERGY SECURITY

Since the start of the twenty-first century, a periodically tight oil market and volatile prices have fueled new concern about energy security. Other factors also add to the concern: the instability in some oil-exporting nations, jihadist terrorism, the rebirth of resource nationalism, fears of a scramble for supplies, the costs of imported energy, and geopolitical rivalries. The turmoil that swept over much of North Africa and the Middle East in 2011 disrupted supplies and added a fear premium to the oil price. Underlying everything else is the fundamental need of countries—and the world—for reliable energy with which to power economic growth.

Energy security concerns are not limited to oil. Natural gas was formerly a national or regional fuel. But the development of long-distance pipelines and the growth of liquefied natural gas (LNG) have turned natural gas into much more of a global business. Electric power blackouts in North America—such as the one that shut down the northeast of the United States in 2003—and in Europe and Russia, generate worries about the reliability of electricity supply systems.

Hurricanes Katrina and Rita, which struck the Gulf of Mexico’s energy complex in a one-two punch in 2005, created something that the world had not seen, at least in modern times: an integrated energy shock. Everything seemed connected, and everything was down at the same time: oil and natural gas production and undersea pipelines in the Gulf of Mexico, and—onshore—receiving terminals, refineries, natural gas processing plants, long-distance pipelines, and electricity. The storms showed how fundamental was the integrity of the electricity system on which the operation of everything else depended, be it the refineries and communications systems, or the pipelines that take supplies to the rest of the country—or the gas stations, which lacked the electric power to operate their pumps. The huge earthquake and tsunami that struck Japan in 2011 killed more than 15,000 people, devastated a major part of the country, and set off a nuclear accident. It also took down the region’s power system, knocking out services, immobilizing communication and transportation, disrupting the economy and global supply chains, and paralyzing efforts to respond to the tragedy.

In China, India, and other developing countries, chronic shortages of electric power demonstrate the costs of unreliability. The Internet and reliance on complex information-technology systems have created a whole new set of vulnerabilities for energy and electric power infrastructure around the world by creating entry paths for those who wish to disrupt those systems.



THE DIMENSIONS

The usual definition of energy security is pretty straightforward: the availability of sufficient supplies at affordable prices. Yet there are several dimensions. First is physical security—protecting the assets, infrastructure, supply chains, and trade routes, and making provision for quick replacements and substitution, when need be. Second, access to energy is critical. This means the ability to develop and acquire energy supplies—physically, contractually, and commercially. Third, energy security is also a system—composed of the national policies and international institutions that are designed to respond in a coordinated way to disruptions, dislocations, and emergencies, as well as helping to maintain the steady flow of supplies. And, finally and crucially, if longer-term in nature, is investment. Energy security requires policies and a business climate that promote investment and development to ensure that adequate supplies and infrastructure will be available, in a timely way, in the future.

Oil-importing countries think in terms of security of supply. Energy-exporting countries turn the question around. They talk of “security of demand” for their oil and gas exports, on which they depend to generate economic growth and a very large share of government revenues—and to maintain social stability. They want to know that the markets will be there, so that they can plan their budgets and justify future levels of investment.



THE LIMITS OF “ENERGY INDEPENDENCE”

In the United States, the issue of energy security often gets framed in terms of energy independence. That phrase has been a political mantra since first articulated by President Richard Nixon in his November 1973 “Project Independence” energy policy speech. Just three weeks earlier, an unthinkable—and yet also foreseeable—event had occurred. The Arab oil exporters, wielding the “oil weapon,” had embargoed oil supplies to Western countries in response to the United States’ hurried resupply of weapons to a beleaguered Israel, reeling from a surprise attack on Yom Kippur in October 1973. Oil prices were on a trajectory to quadruple. In his speech, Nixon deliberately modeled his Project Independence plan on the goal that his old rival John F. Kennedy had set for the Apollo project in 1961, of “landing a man on the moon and returning him safely to the earth” within ten years. But Nixon sought to outdo Kennedy, pledging in his own speech that the United States would “meet our own energy needs without depending on any foreign energy source”—and do it not in ten years, but in seven.

This bold promise startled his own advisers, for they did not see how it could be achieved. “I cut the reference to ‘independence’ three times from the drafts,” recalled one of his speechwriters, “but it kept being put back in. Finally, I called over, and was told that it came from the Old Man himself.”

The phrase not only stayed in the speech but has remained part of the political vocabulary ever since. Every president after Nixon has invoked energy independence as a prime objective. It resonates powerfully with the American public and comes imbued with a nostalgia for a more manageable time when prices were low and the United States really could go it alone. After all, the United States had once been the world’s number one oil exporter.4

As events have turned out, getting a man on the moon proved easier than making a nation energy independent—or at least oil independent. (In terms of overall energy—including natural gas, coal, nuclear, and renewables—the United States was 78 percent self-sufficient in 2011.) In the almost four decades since Nixon’s speech, the United States has gone from importing a third of its oil to importing, on a net basis, about 60 percent at the peak. By 2011 imports had declined to about 50 percent.

Is energy independence a realistic goal for a country with a $15 trillion economy that is deeply enmeshed in the global economy? Some argue that the term “energy independence” is misconstrued, that it should not be taken as meaning virtually import-free, but rather as connoting “not vulnerable.” Generally, however, it is understood to mean self-sufficiency. Yet its promotion, no matter how compelling, can lead to expectations about quick fixes and easy adjustments that are at odds with the realities of the U.S. energy position and the complexity and scale of its energy system. The result can be disappointment and cynicism that, together, drive cycles of inconsistency in energy policy and leave the United States no less vulnerable. Overemphasizing something that is an aspiration, rather than a goal that can be realized in a reasonable time frame, can corrode the international relations that are critical to energy security in an interdependent world. And it runs the risk of diverting attention from the more complex agenda of energy security. But perhaps the imperatives of political communication require the mantra of energy independence. As one senator put it, “Energy independence really means energy security.”5



STRATEGIC SIGNIFICANCE

The 1973 oil crisis may have provided the proof that the era of energy self-sufficiency for the United States was already over. Yet it seemed that most Americans did not know, at least until the crisis, that the United States imported oil—or they simply did not believe it. Thus, they concluded, the price surge had to be the result of price manipulation by oil companies. Nor did they know that the gas lines in which they waited (and in which they were to wait again in 1979, after the Iranian Revolution) were mainly the result of government price setting and allocations that prevented supplies from getting to the cities where they were needed, and instead sent them to the countryside, where they were not needed. Those gas lines set off a chain reaction of anger, accusations, and rumors of all kinds (“tankers brimming with oil were circling offshore, just beyond the horizon”), multiple congressional hearings, many investigations, acrimonious battles over price controls, and a tumultuous ocean of litigation.

The shock was hardly limited to the United States. The embargo—and the massive disruption that it engendered—created surprise, panic, chaos, shortages, and economic disarray around the world. It generated a mad scramble for oil among companies, traders, and countries. Government ministers climbed on planes and personally scoured the world for petroleum supplies. The shock was further aggravated by what it seemed to portend—a massive shift in the global political and economic balance of power away from the importing countries and the scorned “North” to the exporters and the “South,” to what was then known as the Third World.

Among the Western governments themselves, the embargo created enormous strain and antagonism as they struggled to respond, blamed one another, and sought to outmaneuver each other in securing supplies. Some sought special relationships with the exporting countries that would give them what they thought would be privileged access to supplies. Indeed, this was widely regarded as the worst crisis, and the most fractious, to afflict the Western alliance since its foundation after World War II.

The acid spirit of the times was captured during the hurriedly convened Washington Energy Conference of 1974 when the French foreign minister, angry that the other European countries were cooperating with the United States, greeted his fellow European ministers with “Bonjour, les traîtres”—“Hello, traitors.”6



TOWARD AN INTERNATIONAL REGIME

Yet out of the rancorous Washington Energy Conference emerged the International Energy Treaty of 1974. It outlined a new energy security system that was meant to deal with disruptions, cope with crises, and avert future bruising competitions that could destroy an alliance. It provided for coordination among industrialized countries in the event of supply interruptions, and encouraged parallelism and collaboration among their energy policies. At the same time, it was meant to serve as a deterrent against any future use of an “oil weapon” by exporters. That system—refined, updated, and broadened in the years since—remains the foundation for energy security today and provides the ballast of confidence during times of uncertainty and danger. At its most basic, this system is meant to keep member nations supplied with energy and the global economy functioning, and thus prevent deep recessions—or worse.

The treaty established the International Energy Agency (IEA) as the main mechanism for meeting these objectives. The IEA was also meant to provide a common front for the industrial countries and thus counterbalance OPEC, the Organization of Petroleum Exporting Countries. OPEC had been founded in 1960 after the major oil companies cut the price of oil, the major source of income for the exporting countries. In the first decade after its founding, OPEC had labored in obscurity. Indeed, it even failed to gain diplomatic recognition from the Swiss and ended up having to move its headquarters from Geneva to Vienna. But at the beginning of the 1970s, with the tightening oil market and rising nationalism, the major oil-exporting countries took control of the world market, and OPEC was their mechanism to do so. So dominant did OPEC appear to be in the mid-1970s that some spoke about an “OPEC Imperium.” The IEA was intended to provide a means for the consuming countries to counteract that new imperium.

Now headquartered on the Left Bank in Paris and looking up from its windows toward the Eiffel Tower, the IEA currently numbers 28 industrial countries as members. It provides continued monitoring and analysis of energy markets, policies, technologies, and research. As such it operates as a kind of “energy conscience” for national governments.



EMERGENCY STOCKS

One of the IEA’s core responsibilities is to coordinate the emergency sharing of supplies in the event of a major supply loss. Under the International Energy Treaty, each member is meant to hold strategic oil stockpiles, either in government-owned public stocks, such as the Strategic Petroleum Reserve in the United States, or in government-controlled stocks that private companies are required to hold. These stocks can be released on a coordinated basis in the event of a disruption and can be complemented, in a severe disruption, with measures to help bring down demand temporarily. Of course, it is up to national governments to decide whether to implement any of the measures.

Currently, IEA nations have about 1.5 billion barrels of public stocks, of which about 700 million barrels are in the U.S. Strategic Petroleum Reserve. Were Iranian exports to disappear from the market, the 1.5 billion could compensate for the shortfall for more than two years.
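The arithmetic behind that estimate is simple: coverage time is total stocks divided by the daily shortfall to be replaced. The sketch below is illustrative only; the export rate of roughly two million barrels per day is an assumption made for the purpose of the calculation, not a figure given in the text.

```python
# Back-of-the-envelope coverage calculation for strategic stocks.
# The assumed shortfall is illustrative; the text gives only the
# stock levels, not the size of the lost exports.
public_stocks_bbl = 1_500_000_000    # IEA public stocks, in barrels
assumed_shortfall_bpd = 2_000_000    # assumed lost exports, barrels per day

coverage_days = public_stocks_bbl / assumed_shortfall_bpd
print(f"{coverage_days:.0f} days of cover, about {coverage_days / 365:.1f} years")
# -> 750 days of cover, about 2.1 years: consistent with "more than two years"
```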

The U.S. Strategic Petroleum Reserve (SPR), along with the other IEA stocks, can be thought of as a giant insurance policy. Yet, often enough, when prices rise at the gasoline pumps, so do temptations and calls to “do something”—which means release oil from the SPR in order to bring prices down. That would have the effect of turning the reserve into a de facto tool for price controls. Tempting, for sure, but not the wisest policy.

Releasing oil under those circumstances would prevent price signals from reaching consumers with the message that there is a problem in the marketplace so that they can modulate their consumption. That could make a bad situation worse. It would also drain oil from the reserves that might be needed in a more serious situation in the future. Hasty use of the SPR could well dissuade friendly producing countries from stepping up their own output because petroleum from the SPR is going to flow into the market. Putting SPR oil into the market might temporarily send prices down, but then they might bounce right back, raising the question of whether to drain yet more oil from the reserves. Finally, the whole history of price controls does not provide much confidence about how deft government can be at using strategic stocks as a tool of market management.



OPEC AND THE IEA: THE BALANCING ACT

OPEC represents key oil-exporting countries; the IEA was founded to represent importing countries.

Sources: OPEC; IEA


Decisions about the use of strategic reserves will always require judgment, an evaluation of a wide variety of factors, including the level of commercial inventories, and consultation among consumers and with key producing nations. Ambiguity about their use can help to temper a “sky’s the limit” psychology. But the essential point was made by Lawrence Summers, when he was treasury secretary in the Clinton administration, during a White House debate about using the reserves: “The SPR was created to respond to supply disruptions,” and not as a means “simply to respond to high prices or a tight market.” These stockpiles are an antidote to panic, a source of confidence, and a deterrent to actions that might otherwise interrupt supplies.7



Since the system’s inception nearly four decades ago, IEA members have only three times triggered an actual emergency drawdown of strategic stockpiles. The first time was during the Gulf crisis of 1990–91. In January 1991, just before hostilities commenced, the IEA coordinated a release from strategic stockpiles around the world. The second coordinated release occurred in the summer of 2005, to deal with a different kind of disruption—that of Hurricanes Katrina and Rita. One can be sure that the founders of the IEA never contemplated that the emergency sharing system would be used for a disruption in the United States. The third came in 2011, in response to the persisting loss of supply from the Libyan civil war and concern about the impact of high prices on economic recovery.

Over time, the IEA has evolved, and today one of its missions is to help promote dialogues with non-IEA consuming countries and with energy-exporting countries, OPEC and non-OPEC alike. This reflects a larger shift in relations between oil-importing and oil-exporting countries, away from the confrontation of the 1970s to what has become known as consumer-producer dialogue.8 If the International Energy Treaty was the foundation for the development of a global energy security system, then the producer-consumer dialogue represented the next stage in its evolution.

The first public step toward a producer-consumer dialogue was a seminar at the Hotel Kleber in Paris in the first two days of July 1991. The Gulf War had ended just a few months earlier. As the October War had set the framework for confrontation, so now the Gulf War had reset the framework and opened the door to dialogue. For in coordination with consumers, OPEC countries had ratcheted up production to compensate for the loss of output from Iraq and Kuwait. (Of course, several of them, led by Saudi Arabia, were also members of the coalition, and protecting Saudi Arabia’s oil fields against Iraq was one of the major objectives of the coalition.) This demonstrated what was now perceived as shared interests in energy security and stability in oil markets. After the meeting the French minister of industry reported that the seminar had allowed the delegates to “break certain taboos and even to propose joint projects. The era of confrontation, we hope, is over; dialogue and communication must take its place.” Not everyone was ready to break all the taboos. To maintain a certain distance, the U.S. delegation insisted on not sitting at the main table but rather at a sort of little “children’s table” off to the side.

Efforts at a dialogue gained momentum, although, initially, somewhat furtively. It took a year to arrange, but in 1994 the head of the IEA went to Vienna to meet with the head of OPEC. Still, it was a secret get-together and it was conducted out of the office, over a private out-of-sight lunch at a Viennese restaurant. That was the beginning of a continuing exchange, in a variety of forums, on everything from energy security, investment regimes, and volatility in oil prices, to the aging of the workforce, carbon capture and storage, and—of some importance—improving the transparency and quality of energy data. The exporting countries had come to hold significant stakes in the growth and health of the global economy, which, after all, is the market for their oil and where much of their sovereign wealth funds are invested. For the consuming countries, lingering taboos dissipated with time. By 2009 the G8 industrial countries were calling upon “both producers and consumers to enhance transparency and strengthen their dialogue” and move “toward a more structured dialogue” among “producing, transit and consuming countries.”9

The mechanism for this dialogue became the International Energy Forum. One of its missions is to spearhead JODI—the Joint Oil Data Initiative. The objective is to provide a more complete and transparent view of supply and demand and inventories so that world markets can operate on the basis of better information. The countries participating in the forum represent 90 percent of global oil and natural gas production and demand. Both the IEA and OPEC are members.

The producer-consumer dialogue provides a framework for communication; it responds to the interests that both sides have owing to their interdependence in terms of a vital commodity. But it certainly has its limits. The real test is not how it works during a time of stability but during a time of stress. During the price spike of 2008, it provided a mechanism for trying to restore stability to the market. Without it, the spike might have gone even higher, with greater damage to the global economy. The renewed oil market turmoil of 2011 and the sharp division among OPEC exporters—particularly Saudi Arabia versus Iran and Venezuela—showed those limits. Saudi minister Ali Al-Naimi captured that when he described the June 2011 OPEC meeting as “one of the worst meetings we ever had.” This was a demonstration that any dialogue really depends on the relationships not among blocs but among specific nations and how they see their interests and the degree to which they can act upon those interests.



OPERATING SYSTEMS

Experience in the decades since the creation of the IEA has highlighted broad principles that underpin the emergency system and inform all of the dimensions of energy security.

The starting point is what Winston Churchill urged a century ago—diversification of supply. Multiplying one’s sources of oil, and one’s sources of energy, reduces the impact of a disruption by providing alternatives. This should serve the interests not only of consumers but also of those producers for whom stable markets are a long-term concern.

Resilience should be ingrained in the energy system, ensuring a security margin that provides a buffer against shocks and facilitates flexibility and recovery after disruptions. Resilience can include sufficient spare production capacity in oil-exporting countries and, of course, strategic reserves like the SPR. It extends to adequate storage capacity along the supply chain and backup stockpiling of equipment and critical parts for electric power production and distribution, such as transformers for substations. Hurricanes Katrina and Rita and the 2011 Japanese earthquake and tsunami highlight the need to develop plans for recovery from disruptions that devastate large regions.

Overall, the reality of integration needs to be recognized. Only one oil market exists. This market is a complex, worldwide system that moves and consumes almost 90 million barrels of oil every day. Let there be a disruption in one part of the world, and the effects will reverberate throughout the market. Security resides in the stability of this market. Secession from the global market is not an option, except at very great cost.

Experience has consistently demonstrated the importance of high-quality information and data for well-functioning markets and future investment. The Energy Information Administration, an independent arm of the U.S. Department of Energy, and the International Energy Agency, along with the new International Energy Forum, contribute to meeting that need. Access to reliable and timely information becomes particularly urgent in a crisis, when a mixture of actual disruptions, rumors, media imagery, and outright fear stokes panic among consumers. Accusations, acrimony, outrage, the pressures of the news cycle, the dusting off of familiar scripts, and a fevered hunt for conspiracies—all these can obscure the realities of supply and demand, transforming a difficult situation into something much worse. Particularly at such times, governments and the private sector need to collaborate to counter the tendency toward panic and guesswork with the antidote of high-quality, timely information.

Markets—large, flexible, and well-functioning energy markets—contribute to security by absorbing shocks and allowing supply and demand to respond more quickly and with much greater ingenuity than is possible within a controlled system. Markets can often more efficiently and effectively—and more quickly—resolve shortfalls and disruptions than more centralized direction.

When troubles do arise and the calls “to do something” grow loud, governments do well to be cautious, to the degree they can, in responding to the short-term political pressures and the temptation to micromanage markets. However well meaning, intervention and controls can backfire, slowing or even preventing the redirection of supplies that would mitigate disruptions and speed adjustment.

The gas lines in the 1970s were, as already noted, self-inflicted by rigid government policies—price controls and a heavy-handed federal allocation system that seriously misallocated gasoline. In other words, policy prevented markets from working.

In 2005 the huge disruption to supply resulting from Hurricanes Katrina and Rita seemed destined to create shortages, which—compounded by rumors of price gouging and stations’ running out of supplies—could have swiftly generated gas lines. But that is not what happened. In contrast to the 1970s, steps were taken to help markets shift supplies around more quickly and reduce the impact of the crisis.

Instead of adding new regulatory restrictions, two critical ones were eased. Non-U.S.-flagged tankers were permitted to pick up supplies trapped on the Gulf Coast by the nonoperation of pipelines and carry them around Florida to the East Coast. The “boutique gasoline” regulation, requiring different blends of gasoline for different cities, was temporarily lifted to allow the shifting of supplies from cities that were relatively well supplied to cities where there were potential shortages. Overall, the calls for controls were resisted. The markets moved back into balance much sooner, and prices came down much faster, than had been generally expected.

The concept of energy security also needs to be expanded in response to changes in the infrastructure of information technology, the transformation of the world economy itself, and the need to protect the entire supply chain.



CYBERATTACK: “A BAD NEW WORLD”

The sea-lanes are not the only kind of routes that are vulnerable. The threats to energy security loom large in a different kind of geography—cyberspace. In 2010 the U.S. director of national intelligence identified cybersecurity as one of the top threats to the United States. The “information infrastructure,” warned his Annual Threat Assessment, is “severely threatened.” The assessment added: “We cannot be certain that our cyberspace infrastructure will remain available and reliable during a time of crisis.” Since then, one of the authors of the report has said “the situation has become worse.” Even those entities that are considered to be the most highly protected, such as financial institutions and sophisticated IT companies, have been subject to successful attacks. After Sony suffered a major cyberattack, its CEO summarized the situation this way: “It’s not a brave new world; it’s a bad new world.”

For obvious reasons, the electric power system is ranked among the most critical of all infrastructures. One report described the vastness of the North American power infrastructure this way: “Distributed across thousands of square miles, three countries, and over complex terrain (from the remote plains and Rocky Mountains to major urban areas), the bulk power system is comprised of over 200,000 miles of high-voltage transmission lines, thousands of generation plants, and millions of digital controls.” It is also one of the most complicated to secure. After all, it has been built up over decades. In the 1960s and 1970s, computers were deployed to manage the generation and distribution of electricity and to integrate the grid. In the years since, the system has become more sophisticated and integrated. This makes the system far more efficient, but it also makes it more vulnerable.10

The potential marauders may be recreational hackers, who, despite their benign appellation, can do great damage, as can a disaffected employee. They can be cybercriminals, seeking to steal money or intellectual property, or gain commercial advantage, or create situations from which they can profit. They can be governments engaged in espionage or positioning for, or actually conducting, cyberwarfare. Or they can be terrorists or other non-state actors using digital tools to wreak havoc and disrupt their avowed enemies. For all of these, the electric grid is a very obvious target, for its disruption can immobilize a large segment of a country and do great harm.

The tools available to the cyberattacker are extensive. They can mobilize networks of computers to mount a “bot attack” aimed at denial of service, shutting down systems. They can introduce malware—malicious software—that will cause systems to malfunction. Or they can seek, from remote locations, to take control of and disrupt systems.

One point of entry is through the ubiquitous SCADA systems, the supervisory control and data acquisition computer systems that monitor and control every kind of industrial process. Originally, they were site specific, but now they are connected into larger information networks. Malicious intruders may gain access through a thumb drive and a desktop computer. A multitude of new entry points are provided by the proliferation of wireless devices and possibly by the smart meters that are part of the smart grid and that provide two-way communications between homes and the electrical distribution system.11

A test at a national laboratory in 2007 showed what can happen when a hacker infiltrates an electric system. A SCADA system was used to take control of a diesel generator and cause it to malfunction; it shook and shuddered and banged until it eventually blew itself up in a cloud of smoke. The Stuxnet virus that slipped into the Iranian centrifuges in 2010 caused them to spin out of control until they self-destructed.

It is not just the power system that is at risk. Obviously, other systems—involving energy production, pipelines, and water—share similar vulnerabilities, as do all the major systems across an economy.

Nations are struggling to design policies to meet this threat. The U.S. Department of Defense has created a Cyber Command. It is also developing a new doctrine in which a major attack on critical infrastructure, including energy, could constitute an “act of war” that would justify military retaliation. The Council of Europe has established a convention on cybersecurity to guide national policies. But these need to be matched by efforts by companies and bolstered with considerable investment and focus. New security architectures have to be introduced into systems that were designed without such security in mind. And they need to be coordinated with other countries. After all, it takes only about 135 milliseconds for an attack to hit a server from anywhere in the world.

Can active defense prevent a cyberattack that seriously damages electricity or some other major energy system, with all the dangerous consequences that can flow from it? Will the risks be properly anticipated and acted upon? Or will the analysis have to wait until a national commission goes back after a “cyber Pearl Harbor” and assesses what went wrong and what was missed—and what could have been done? “In the nineteenth century, steamboats regularly blew up,” one study noted, “but Congress waited 40 years until a long series of horrific accidents led to safety regulations.” At a recent meeting of 120 experts on cybersecurity, the question was asked: How long before a destructive cyberattack on the country? The consensus answer was bracing: within three years.12



BRINGING CHINA AND INDIA “INSIDE”

One of the fundamental reasons for establishing the IEA in the 1970s was to prevent a recurrence of the mad scramble for barrels that had sent prices spiraling upward and threatened to rip apart the Western alliance. It worked, establishing a system for more durable and constructive cooperation. That same kind of approach is needed now with China and India to help ensure that commercial competition does not turn into national rivalries, thus preventing future scrambles that inflame or even rupture relations among nations in times of stress or outright danger. Both China and India have moved from the self-sufficiency and isolation of a few decades ago to integration into the global economy. The energy consumption of both is rising rapidly; in 2009 China became the world’s largest energy consumer. Neither China nor India is a member of the IEA, and neither looks likely to become one anytime soon, because of both membership rules and their own interests.

Yet even if they do not join, they can collaborate closely. If they are to engage on energy security, they have to come to the conclusion that their interests can be served and protected in global markets—that the system is not rigged against them and that they will not be disadvantaged compared with other countries in times of stress. And they would have to decide that participation, either formal or informal, in the international energy security system will serve their interests better in the event of turbulence than going it alone would. China, India, and Russia all now have memorandums of understanding with the IEA. Given their growing scale and their importance, their participation is essential for the system to work more effectively.



SECURING THE SUPPLY CHAIN

Energy security needs to be thought of not just in terms of energy supply itself but also in terms of the protection of the entire chain through which supplies move from initial production down to the final consumer. It is an awesome task. For the infrastructure and supply chains were built over many decades without the same emphasis on security as would be the case today. The system is vast—electric power plants, refineries, offshore platforms, terminals, ports, pipelines, high-voltage transmission lines, distribution wires, gas storage fields, storage tanks, substations, etc. The vulnerabilities of such extensive infrastructure take many forms, from outright hostile assaults to the kind of small events that can trigger a massive blackout.



CHOKE POINTS FOR WORLD OIL

The secure passage of tankers through narrow shipping channels is crucial to the global economy.

Sources: EIA; ICC-CCS


As the energy trade becomes more global, crosses more borders, and grows in scale on both land and water, the security of the supply chains becomes more urgent. Ensuring their safety requires increased collaboration between producers and consumers. Critical choke points along the sea routes create particular vulnerabilities for the transport of oil and LNG, whether from accidents, terrorist attacks, or military conflict.

The best known of these choke points is the Strait of Hormuz, which separates the Persian Gulf (with more than a quarter of world oil production) from the Indian Ocean. Another key point is the Malacca Strait—the five-hundred-mile-long, narrow, and constricted passage between Malaysia and the Indonesian island of Sumatra that funnels in from the Indian Ocean, curves up around Singapore, and then widens out again into the open waters of the South China Sea. At its narrowest, it is only 40 miles wide. About 14 million barrels per day pass through this waterway, as does two thirds of internationally traded LNG—and half of all world trade. Some 80 percent of Japan’s and South Korea’s oil and about 40 percent of China’s total supply traverse the strait. Pirates prey upon these waters, and there have been reports of terrorist plans to seize an oil tanker and wreak havoc with it.

Another key choke point is the Bosporus Strait—just 19 miles long, a little over two miles at its widest, and a half mile at its narrowest, connecting the Black Sea to the Sea of Marmara and on into the Mediterranean. More than three million barrels per day of Russian and Central Asian oil pass through it, right down through the middle of Istanbul. Two other critical choke points are both in the Middle East: the Bab el-Mandeb Strait, which provides entrance at the bottom of the Red Sea between Yemen and Djibouti for up to three million barrels per day, and the hundred-mile-long Suez Canal and Sumed Pipeline, which together connect the top of the Red Sea to the Mediterranean and through which pass about two million barrels per day of oil plus major shipments of LNG. There is also the Panama Canal, with 0.6 million barrels per day.13
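Taken together, the flows cited above suggest how much of the single world oil market squeezes through a handful of narrow passages. The tally below is a rough illustration using only the figures quoted in this passage, set against the roughly 90-million-barrel-per-day world market mentioned earlier; the Strait of Hormuz is left out because its flow is given above as a share of production rather than in barrels.

```python
# Rough tally of the choke-point flows cited in the text,
# in million barrels per day (mbd).
flows_mbd = {
    "Malacca Strait": 14.0,
    "Bosporus Strait": 3.0,           # "more than three million barrels per day"
    "Bab el-Mandeb Strait": 3.0,      # "up to three million barrels per day"
    "Suez Canal / Sumed Pipeline": 2.0,
    "Panama Canal": 0.6,
}
world_market_mbd = 90.0               # "almost 90 million barrels ... every day"

total = sum(flows_mbd.values())
print(f"Cited flows excluding Hormuz: {total:.1f} mbd, "
      f"about {100 * total / world_market_mbd:.0f} percent of the world market")
```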

Recent years have revealed a new risk—or really the return of an old one. More open ocean waters—the world’s ungoverned geographical spaces—have become noticeably more dangerous. The area around the Horn of Africa—the Gulf of Aden, which leads to the Bab el-Mandeb Strait, and the western waters of the Indian Ocean, south of the Arabian Peninsula—has become the arena for pirates operating out of Somalia and neighboring countries. With that has come what has been described as a “radicalization of maritime piracy,” as cooperation increases between pirates and terrorist groups. Pirate attacks on shipping, including oil and LNG tankers, seem almost a daily occurrence. Using larger mother ships, the pirates operate as far as a thousand nautical miles from their bases on shore. European, U.S., Russian, Chinese, and Indian naval forces are all now active in those waters seeking to repel and deter pirate attacks.14

Because these waters are the main route for the tankers carrying oil and LNG from the Persian Gulf to Europe and North America, and because of the proximity to the Gulf itself, this surge in piracy adds a further dimension to the security concerns for the region that holds well over half of the world’s proved oil reserves. The energy security of the region known as the Gulf is truly a global question.


14

SHIFTING SANDS IN THE PERSIAN GULF

Even as the dimensions of energy security have become wider, the world’s concerns always seem to circle back to oil, and that means, as it has for so many years, back to the Middle East and the Persian Gulf. The risks today center on terrorism, the stability of societies, and Iran’s nuclear program and its drive to dominate the Gulf.

The Gulf countries produce more than a quarter of total world oil output and hold almost 60 percent of proved reserves, making the region of central importance to the world oil market and the global economy. North Africa produces another 5 percent. But over the decades, out of the Gulf and the larger Middle East have come a series of crises that disrupted global oil supply.

The first was the 1956 Suez crisis. Egypt’s expropriation of the Suez Canal triggered an invasion by Britain and France—along with Israel, which was threatened by Egyptian military pressure. The closure of the Suez Canal created an oil shortage in Europe. It was relieved by a surge in output from the United States, which at that point had surplus capacity. One consequence of the Suez crisis was to spur a technological advance in the development of larger tankers that could sail around Africa instead of using the canal.

In 1967 Arab oil exporters reacted to Israel’s victory in the Six Day War with an oil embargo against the United States, Britain, and West Germany. However, this embargo failed, owing to what was at the time a large surplus in the world petroleum market. Six years later, the 1973 embargo responded to the U.S. resupply of Israel following the Yom Kippur surprise attack. In contrast to 1967, the embargo was highly successful, owing to the tight market. It triggered a fourfold increase in the price of oil. The embargo, combined with the price increases, shook the structure of international relations and sent shock waves through the global economy, followed by several years of poor economic performance. The 1978–79 Iranian Revolution, which toppled the shah and ushered in the theocratic Islamic Republic, also ignited a worldwide panic in the petroleum market and another oil shock that contributed mightily to the difficult economic years of the early 1980s.

Saddam Hussein’s 1990 invasion of Kuwait set off the Gulf crisis, leading to the loss of five million barrels a day of supply from Iraq and Kuwait. Other producers, notably Saudi Arabia, cranked up output and largely replaced the missing barrels over the next several months, even before Operation Desert Storm evicted Saddam’s forces from Kuwait. It was in anticipation of that military operation that the International Energy Agency organized the first-ever coordinated release of strategic stocks.

For more than a decade thereafter, there were no petroleum disruptions in the region. Then the 2003 invasion of Iraq shut down its oil industry. Production resumed, though erratically. The reduced output from Iraq was part of the aggregate disruption that contributed to the price spike of 2008.

All this transpired over the course of a half century in the region that is the breadbasket of world oil production.

The unique energy position of the Gulf is the product of a peculiar geologic history that has made it the most prolific hydrocarbon basin on the planet. Hundreds of millions of years ago, what is now much of the Arabian Peninsula and the Persian Gulf basin was submerged beneath a vast, shallow sea. The recurrent expansion and shrinking of this sea created excellent conditions for the deposit of organic material in successive layers of sediment. During the times when the sea receded, the land was not a desert but a warm and humid jungle. Temperatures much hotter than they are today encouraged lush growth, which added to the organic sediments. Pressure and heat turned this organic material into hydrocarbons—oil and gas. The shifts in the earth’s crust and the clash of tectonic plates, on a geological time scale, created huge structures for trapping these hydrocarbon deposits. And it was in those structures that in the twentieth century the drill bit found the extraordinary accumulations of oil and gas that define the modern Persian Gulf.



“THE CENTER OF GRAVITY OF WORLD OIL”

In 1943, in the middle of World War II, the Roosevelt administration dispatched Everette Lee DeGolyer to the Persian Gulf to assess the petroleum potential of the region. DeGolyer was America’s preeminent geologist; he had made the discovery in 1910 that opened up Mexico as a great oil producer, and in the 1920s he did more than anyone else to promote the introduction of seismic technology.

Oil had originally been discovered in Iran in 1908; then in Iraq, in 1927; then in Bahrain, in 1932. Still, some were skeptical of what might be found in Saudi Arabia. In 1926 the senior management of one petroleum company decided that Saudi Arabia was “devoid of all prospects” of oil and that big reserves would most likely be found in Albania. In the 1930s, after several years of disappointment and dry holes, even the companies exploring in Saudi Arabia debated “whether the venture should be abandoned” and “written off as a total loss.” But then came the transformative discoveries—Anglo-Persian (later BP) and Gulf Oil hit petroleum in Kuwait, at a well called Burgan Number One, in February 1938. The next month, Chevron and Texaco did the same in Saudi Arabia, with Dammam Number Seven. Although many of the wells were capped and operations suspended during World War II, some people, including DeGolyer, suspected that these discoveries might rewrite the geopolitics of world oil. “It is uncertain,” he wrote his wife as he embarked on the trip, “and a little bit hazardous.” Yet “it seemed pretty important,” he added, “for some American to make this trip and size up the situation.”

The survey confirmed DeGolyer’s conviction about the scale of the resource. “The center of gravity of world oil production,” he reported at the end of his mission, “is shifting from the Gulf-Caribbean area to the Middle East—to the Persian Gulf area.” Another member of DeGolyer’s team summed it up more simply: “The oil in this region is the greatest single prize in all history.”1



ONE QUARTER OF WORLD RESERVES

The decades that followed proved these predictions on a massive scale. On the western side of the Gulf, towering over all the other exporters, is Saudi Arabia, with about a fifth of the world’s proved oil reserves. Its output averaged 8.2 million barrels per day in 2010—almost 10 percent of total world production. It has the capacity to produce up to 12.5 million barrels per day. It also has the great advantage of having the lowest production costs in the world. Although Saudi Arabia’s costs for exploration and production have risen in recent years, they are still well below those of most other regions in the world.

As a matter of ongoing policy, Saudi Arabia maintains a cushion of 1.5 to 2 million barrels per day (mbd) of spare capacity that can be brought quickly into production. That extra capacity is meant to be a stabilizer—or what Saudi petroleum minister Ali Al-Naimi calls an “insurance policy”—to counteract “unforeseen supply disruptions” in the global oil market, such as “wars, strikes, and natural disasters.” It is the producer’s analogy to a Strategic Petroleum Reserve.2

Almost all of the country’s oil industry is operated by the state-owned Saudi Aramco, by far the world’s largest oil company. Saudi Aramco, which took over operations from the consortium of U.S. companies that had developed the oil industry prior to nationalization, has established itself at the forefront in terms of its technical capability and in its capacity to execute large-scale, complex projects.

Saudi Aramco still has a substantial portfolio of untapped fields and reservoirs: over 100 fields containing nearly 370 reservoirs. It produces from only 19 of the fields, albeit the largest and most productive among those discovered, foremost among them Ghawar. The development of three new mega-projects—Shaybah, Khurais, and Manifa—is adding over 2.5 million barrels a day of capacity, an increment that by itself would rank as a major OPEC producer. The application of new technologies continues to unlock resources and open up new horizons. The part of Saudi Arabia that is heavily explored is relatively small. The company has committed close to $100 billion for investment in the oil sector for the five-year period 2011–15, including new exploration in the northeast of the country and the Red Sea, aimed at increasing its oil and gas reserves.



THE GULF

Sixty percent of the world’s conventional oil reserves are located in the Gulf.


The other major Arab producers are strung out along the western shore of the Persian Gulf. Kuwait and Abu Dhabi, which is the largest member of the United Arab Emirates, each produce about 2.3 million barrels per day; Qatar pumps 0.8 mbd. Oil and gas have given these countries the wherewithal to play a major role in the world economy well beyond hydrocarbons. Significant amounts of their export earnings go into their sovereign wealth funds, which have become among the largest pools of capital in the world. Lesser amounts of oil are produced by Dubai and Bahrain and, on the southern end of the Arabian Peninsula, Oman and Yemen. Algeria and Libya are the main producers in North Africa.



THE “HINGES” OF THE WORLD ECONOMY

Al Qaeda has targeted what it has called the “hinges” of the world’s economy—its critical infrastructure. However, when Al Qaeda first emerged in the 1990s, energy systems, specifically, were not targets. In his 1996 statement, “Declaration of War Against the Americans Occupying the Land of the Two Holy Places,” Osama bin Laden argued against attacking oil infrastructure in the Middle East, which, he said, embodied “great Islamic wealth” that would be needed “for the soon-to-be-established Islamic state.” The attacks that did take place were aimed at foreign interests.

Then a new jihadist work appeared in 2004 that called for a change in strategy. Titled “The Laws of Targeting Petroleum-Related Interests and a Review of the Laws Pertaining to the Economic Jihad,” it proclaimed the oil industry a legitimate target so long as certain “rules” were followed. Long-term oil production capability should not be damaged. That needed to be preserved for the Islamic caliphate. But it advocated conducting operations that would drive up the price of oil, thus hurting Western countries.

Several months later Bin Laden, embracing this new doctrine, urged attacks on oil targets as part of an economic jihad against the United States. He cited the war in Afghanistan, which had “bled Russia for 10 years until it went bankrupt and was forced to withdraw from Afghanistan in defeat” and called for the same kind of policy “to make the US bleed profusely to the point of bankruptcy.” He later declared that the West sought to dominate the Middle East in order to steal oil and urged his adherents “to give everything you can to stop the greatest theft of oil in history.” He called for terror attacks that would drive oil to $100 a barrel with the aim of bankrupting the United States. In 2005 Ayman al-Zawahiri, Bin Laden’s deputy, declared that the mujahedeen should “focus their attacks on the stolen oil of the Muslims,” in order to “save this resource” for the time when an Al Qaeda caliphate would rule the Arabian Peninsula.

A raid in September 2005 on a safe house near the largest Saudi oil field discovered the practical tools for this new doctrine: charts and maps for the oil infrastructure not only of Saudi Arabia but of the other Gulf Arab oil producers as well. The Saudis were taken aback by how detailed the information was.3



A CRITICAL NODE

On a Friday in February 2006, shortly after afternoon prayers, three vehicles—a Toyota Land Cruiser SUV and two pickup trucks—made their way toward a little-used service gate at the vast Abqaiq processing plant, 60 miles from Saudi Arabia’s largest oil field. Abqaiq is one of the most critical nodes in the global supply system. Up to 7 million barrels of oil—8 percent of total world supply—pass through this sprawling industrial facility every day.

Once at the gate, the gunmen jumped from the Land Cruiser and started shooting, killing the guards, while the two pickups rammed through the fence and into the Abqaiq facility. One of the pickup drivers apparently took a wrong turn and ended up in the dead end of a parking lot. His engine, leaking oil, stalled. At that point, with nowhere to go, the driver detonated his bomb, committing suicide and destroying his vehicle. Meanwhile, the second pickup driver, trying to outrun pursuing security guards, was barreling down the road so fast that, by the time he detonated his bomb, killing himself, he had already driven past his target, and the resulting explosion did no damage to the facilities.

But the shooters escaped in the Land Cruiser and raced back to Riyadh, where they holed up in a small compound in the eastern part of the city. Police kept them under surveillance for a few days and then moved in. In the ensuing shoot-out, the jihadists were killed. One of them, it was discovered, was among the most wanted terrorists in Saudi Arabia. Inside the compound, the authorities found a trove of terrorist tools.

The Abqaiq facility is so big and spread out that even if the suicide drivers had been more adept, the damage would have been localized. Moreover, the Saudis maintain several levels of security at Abqaiq and other sensitive installations. Nevertheless, the attempt demonstrated the intent of the jihadists. In the aftermath of the Abqaiq attack, the Saudi government moved to further enhance security, including the creation of a new 35,000-man force specifically charged with protecting the kingdom’s oil infrastructure. In the years since, the jihadists further codified their doctrine of economic warfare. This was most obvious in the constant attacks on the oil infrastructure in Iraq. In 2008 an Arabian affiliate of Al Qaeda reiterated the call for attacks on the oil infrastructure. In July 2010 a suicide bomber in a small skiff, apparently taking off from an isolated part of Oman’s coast, rammed into a large Japanese oil tanker. Though little damage was done, it was the first such attack inside the Strait of Hormuz itself.

For their part, the Arab oil-exporting countries along the Gulf have, in general, substantially deepened security, hardened targets, and honed their intelligence operations. “The terrorists have begun to focus on disrupting our energy infrastructure,” Petroleum Minister Ali Al-Naimi said after the attempt at Abqaiq. “The threat from terrorists to the world’s energy infrastructure is not limited to any one country or region. We must all be vigilant.”4

In May 2011, Osama bin Laden was killed by U.S. Navy SEALs in a villa in Pakistan. He had lived there, hidden with no Internet connection, for several years, just 35 miles from Islamabad, Pakistan’s capital. His communications with Al Qaeda were by couriers. Among the materials seized in the raid were plans for attacking oil tankers.



THE SOCIAL FOUNDATIONS

In December of 2010, Mohammed Bouazizi, a young fruit vendor in the Tunisian town of Sidi Bouzid, reached the breaking point. For years, the police had been harassing him and stealing his fruit, along with that of the other vendors in the fruit market on the main street. When he tried to stop a policewoman from stealing two baskets of apples, two other policemen held him down while the policewoman slapped him. He went to the city hall to complain but was told to go away. He did leave but returned shortly after and, standing in front of the municipal building, set himself ablaze. He died a few weeks later in the local hospital.5

But footage of protests over his fate and the way he had been treated was quickly posted on Facebook. The government did not know how to block the footage. Bouazizi’s self-immolation set off a blaze that burned across the Middle East, shaking the political order and bringing down part of the geostrategic structure of the region.

Bouazizi’s plight was the match that ignited the kindling that had been accumulating for years: a huge bulge in the number of young people for whom educational options were limited and for whom there were no jobs, no prospects, no economic opportunity; pervasive corruption, lack of political participation, overwhelming and inefficient bureaucracies, and low quality of government services; a “freedom deficit” and a “women’s empowerment deficit”; arbitrary political power, secret police, and permanent “states of emergency”; economic stagnation and enormous obstacles to entrepreneurship and initiative.6

All these were the factors that set in motion what has been called the “Arab Spring” among young people who had also reached the breaking point. It quickly gained momentum. Massive street demonstrations toppled the long-ruling government in Tunisia.

The protest movement spread to Egypt, where, day after day, hundreds of thousands of people packed into Tahrir Square in Cairo to demand the resignation of President Hosni Mubarak, who had ruled Egypt for 30 years. All of this played out on television and the Internet. The Arab world was transfixed, for Egypt plays a unique role in the region. It accounts for a quarter of the total Arab population, and its influence reaches throughout the area. As one Saudi said, “We were all taught by Egyptians.” It had also signed a treaty with Israel, and a kind of cold peace existed between those two former belligerents. Egypt’s size—and the scale of its armed forces—make it the foundation of the geostrategic balance of the region. Finally, on February 11, 2011, Mubarak gave up power. The nature of Egypt’s future government would have great significance for the entire Middle East.

The events in North Africa triggered protests and demonstrations across much of the Middle East. Syria was racked by constant protests against the Assad government, which were met with bullets. Three countries of particular significance to the Gulf were Iran, Bahrain, and Yemen. Iran used whatever force was necessary to put down demonstrations. In Bahrain, the longtime tense relationship between the Sunni elite and the majority Shiite population makes it a proxy for contention between Saudi Arabia and Iran. It is a very small country in terms of population, but it is only a couple of dozen miles by causeway from Saudi Arabia and the world’s largest oil field. It is also the home of the U.S. Fifth Fleet, the mission of which is to maintain freedom of the seas in the Gulf. When protests turned into protracted violence, the Gulf Cooperation Council, led by Saudi Arabia, sent troops into Bahrain to help restore order.

Yemen was particularly vulnerable because of its strong tribal tensions and regional splits, the 33-year rule of the autocratic Ali Abdullah Saleh, its low per capita incomes, and what is thought to be the strongest Al Qaeda affiliate. Adding to the significance of what happens to Yemen is its position on the narrow Bab el-Mandeb choke point, the entrance into the Red Sea, and its rugged 1,100-mile border with Saudi Arabia. The specter of chaos and violence in Yemen leads some Saudis to talk about the threat of having “our Afghanistan” on the kingdom’s frontier.

Altogether, unfolding events throughout the region had demonstrated that social instability had become a critical factor for energy security. In Libya protests turned quickly into a civil war that divided the country between rebels in the east and Qaddafi forces in the west. As Qaddafi’s forces advanced on Benghazi and what seemed likely to be a bloodbath, the Arab League called for a no-fly zone, and U.S. and European forces, operating under U.N. and NATO authorization, intervened on the side of the rebels.

By March of 2011, virtually all of Libya’s oil production was disrupted, removing about 1.5 percent of supplies from the market. That loss, combined with rising demand, once again started to narrow spare capacity. As unrest and turmoil continued in the Middle East, anxiety rose about the potential for further disruptions to supply. Oil prices surged once again, both on the actual disruption and on fear of “what would happen next,” taking the Brent price, at least for a time, toward $130 a barrel. The rising oil prices were now seen as the biggest risk to global economic recovery. And, as long as there was uncertainty about the Middle East, oil prices would reflect the risk premium. Thus, the social foundations and the now uncertain geostrategic balance of the region would prove to be crucial in the formation of world oil prices, which in turn would have much wider impact.

Yet there is no single answer to how the uncertainty will be resolved. The differences among the countries in the region are very great. Egypt, like Iran, has about 80 million people, and per capita income in Egypt is about $5,800 a year. By contrast, many of the key oil producers have small populations; depend on a large number of expatriates to make their economies work; and are, in effect, cradle-to-grave welfare states with high per capita incomes.

What all the countries share, whatever their differences, is an enormous youth bulge. About a third of the population in the region is between the ages of ten and twenty-four. Historians have observed, going back to the European revolutions of 1848, the link between such bulges and turmoil and upheaval. In addition, what these countries lack is jobs, especially for frustrated and educated young people. Unemployment may range as high as 30 percent, and many of those who are not unemployed are underemployed. Beyond disappointed expectations and economic difficulties, the mass lack of employment feeds smoldering resentment against the governing system for all the reasons already noted.7

What made the critical difference was the galvanizing power of new communications technologies, which eroded the control of information that is so essential to authoritarian regimes. The development of Arab satellite networks, beginning in the 1990s, was already bringing both views of the outside world and domestic news that was not censored by the ministries of information. For many, these networks became the most important source for news. But then cell phones and the Internet—in particular e-mail, Facebook, and Twitter—provided a way to share information, mobilize for action, and outwit the traditional instruments of control. Lack of political participation was offset by participation through these new channels, as social networks came to challenge the traditional prerogatives of national sovereignty.8

It has been recognized for years that creating opportunity and jobs is a challenge in much of the Middle East, owing both to rapid population growth and the nature of the economies. This need has now gone from chronic to acute. But industries like oil and gas and petrochemicals are capital intensive; that is, they create good jobs but not a lot of jobs. This is where countries face the risk of the resource curse and the structural problems of the petro-state. That applies even to the wealthy petro-state that can provide cradle-to-grave welfare. These industries are so big and so dominant that an entrepreneurial economy gets squeezed out. Subsidies can ease the tensions, but they are not a substitute for job creation.

THE MIDDLE EAST YOUTH BULGE

Percentage of the population 29 years old or younger in 2011



Source: U.S. Census Bureau


But jobs, on a large scale, cannot be created overnight. That takes both higher economic growth rates and time, along with openness, stimulation of entrepreneurship, reduced regulation and control, and dampening down of corruption. China and the other countries of East Asia have created jobs by intensively integrating with the global economy. Taiwan and South Korea were at the same stage of development as Egypt in the 1960s. Now Taiwan and South Korea export more to the world economy in two days than Egypt does in a year. But opening to the world economy brings with it the forces and values of globalization, which in the Middle East are seen as threatening and are resisted, sometimes fiercely, and often with religious exclusions. This stagnation leaves the young—especially young men—with no jobs, often no spouses, and no homes of their own: alienated, with nowhere to go.9 The potential of political participation brings the possibility of moving beyond stagnation. But the expectations for economic improvement are way ahead of how fast economies can actually change and generate opportunity. So the hopes and optimism of the Arab Awakening will have to contend with the disillusionment that comes with the uncertain pace of economic improvement.



IRAQ’S POTENTIAL

For decades, Iraq’s potential to rank among the very top producers has been recognized—along with the fact that it was producing well below its potential. By 2009, six years after the U.S.-led invasion, and after years of violence and sabotage, output was almost back to the 2001 level of 2.5 million barrels per day. The postwar government realized that it needed enormous investment and technology transfer from outside the country, and starting in 2009 it held bidding rounds for a number of fields. As would have been expected, the winners included oil companies from all over the world. Surprisingly, however, U.S. companies were notably underrepresented. Iraq was demanding some of the stiffest terms of any oil-exporting country, and a number of the U.S. companies could not make the economics work.10

Some of the projections bruited about for Iraqi output are exceedingly optimistic. To make the leap from 2.5 million or 3 million barrels per day to 12 million barrels a day, as one Iraqi minister had suggested, seems almost impossible. Much more reasonable is that by 2020 Iraq could be around 6.5 million barrels per day.

Yet even that lower target faces considerable obstacles and uncertainties: Development on such a scale requires political stability and physical security for the oil fields and pipelines and loading terminals. There needs to be a political consensus about the need for international investment and the fiscal terms so that the whole effort is not undone by subsequent changes in the rules of the game. These risks are further compounded by the sheer logistical complexity of delivering people, services, skills, and equipment—and the building of pipelines and export facilities—in a country that was technologically shut off from the global industry for decades. The companies that are investing recognize these risks. But they also see the potential and have concluded that it would be too risky to find themselves sidetracked from what may be one of the biggest oil opportunities of the twenty-first century.11

One further obstacle could well stand in the way of the steady development of Iraq’s resources: Iran. And that may be the most important of all. Iran regards any substantial expansion in Iraqi output as a threat because that could lead to lower oil prices. From a geopolitical point of view, Iran does not want Iraq to supplant it as the second-largest producer in the Gulf and in OPEC. Tehran made this clear in 2010 when Iraq decided, based upon the bids and new exploration, to raise its estimated oil reserves from 115 billion barrels to 143 billion. Iran waited hardly a week to leapfrog back over Iraq, lifting its own reserve estimates from 138 billion to 150 billion barrels.12

The longer-run question is to what extent Baghdad will come under the lasting sway of Tehran. Although Iraq is at least 75 percent Arab, and Iran is primarily Persian and Azeri, religion and religious authority tie Shia Iran together with the majority Shia population of Iraq. Since 2003 Iran’s deep involvement in Iraq, and its support of various groups, has not been a secret. Moreover, geography is inescapable. As one Iranian official told a U.S. diplomat, “Eventually, you will have to leave Iraq. But we’re not going away.”



SEEKING HEGEMONY

For decades, under the rule of the shah, Iran had competed with Saudi Arabia to be the dominant oil producer in the Gulf. In the 1970s Iran tried to do more—to take on the role of “regional policeman” of the Gulf and fill the security vacuum created by the withdrawal of the British military umbrella from the region in 1971. The ambitions were suspended by the Iranian Revolution of 1978–79 and then by the eight-year Iran-Iraq War.

Iran’s oil production had peaked under the shah at six million barrels per day; it plummeted to as low as 1.3 million barrels per day during the Iran-Iraq War, and in recent years it has fluctuated around four million barrels per day. But given the country’s petroleum reserves, the Iranian industry also produces well below its potential. It has been hamstrung by a host of factors: political battles among the factions ruling the country; lack of investment; the tough and painful way in which Iran negotiates with international companies; and, in more recent years, international sanctions that have sharply reduced its access to technology and finance. All this has hampered the development of the industry. Moreover, it has to import about 25 percent of its gasoline to make up for a shortage of refining capacity at home.

While Iran has the second largest conventional natural gas reserves in the world and is a founding member of the newly formed Gas Exporting Countries Forum, it exports negligible quantities of gas, and only to immediate neighbors. In fact, it actually has to import some gas to make up for its domestic shortfall.



“THE GREAT SATAN”

In the first months of the Iranian Revolution in 1979, it was not clear whether the new regime would be reformist or fundamentalist. But the path was clearly set when militants stormed the U.S. Embassy in November 1979 and took 66 Americans hostage, holding 52 of them until January 1981. The country’s new leader was the stern cleric Ayatollah Ruhollah Khomeini, who had returned to Iran after 15 years of exile. Khomeini and his followers used the seizure of the hostages—and the immediate cleavage it created with the United States—to consolidate power and eliminate effective opposition to the new theocratic fundamentalist regime. At one point, in a “letter to clergy,” Khomeini wrote, “When theology meant no interference in politics, stupidity became a virtue.” In the new Iran, ultimate political power lay in the hands of mullahs and, specifically, the Supreme Leader, Ayatollah Khomeini.13

Khomeini’s hatred for the shah, who had exiled him in 1963, was matched by his hatred for Israel, and for the United States. America as the implacable enemy—the “Great Satan”—became one of the organizing principles of the Islamic Republic and indeed a backbone of its legitimacy, critical to holding together the apparatus of control. The U.S. support for the 1953 coup that toppled the nationalist prime minister Mohammad Mossadegh and brought back the shah was a powerful historical memory that the fundamentalists could manipulate, and that story became part of the catechism of Iranian politics.

In the early 1990s, with the war with Iraq over, Iran resumed its revolutionary campaign. It stepped up its efforts to subvert other regimes along the Persian Gulf, fostered terrorism, targeted U.S. interests, and embarked on a military buildup. The hand of its clandestine Qods forces, the international arm of the Revolutionary Guards, could be seen in terrorism around the world. By 1993 Iran had earned the sobriquet of “the most dangerous sponsor of state terrorism.”14



NORMALIZATION?

Khomeini died in 1989. He was succeeded as Supreme Leader by one of his acolytes, Ali Khamenei, who had been president for eight years and who embraced the hard line of his predecessor.

Yet at various moments, glimmers of normalization appeared. The market-oriented president Hashemi Rafsanjani thought that a reduction in tensions with the United States was in Iranian interests and that commercial relations were the way to begin. That seemed to accord with the Clinton administration’s new policy of using economic engagement to improve relations with adversaries. Tehran sought to communicate its signal through oil. Iran deliberately awarded the first contract to a foreign company since the revolution not to a French oil company, but to an American one—Conoco.

Under U.S. sanctions policy, no Iranian oil could be imported into the United States, but it was legal for an American oil company to do business in Iran. For three years Conoco had been negotiating with Iran for rights to develop two offshore oil and gas fields. The two sides finally signed the deal on March 5, 1995, in the dining room of a government guesthouse that had formerly belonged to a Japanese auto company. In factionalized Iranian politics, a deal with an American company was a considerable victory for Rafsanjani. The contract could not have been signed without the approval of the Supreme Leader, Ayatollah Ali Khamenei. But that approval must have been very reluctantly given. For Khamenei deeply hated what he called the “Great Arrogance”—the United States—which he declared wanted to impose its “global dictatorship” on Iran. In his worldview, as he once said, “enmity with the United States” was essential to the survival of the regime.15

The internal struggle within the Iranian leadership may well be why Conoco did not know, almost to the last moment, whether it would win the contract. The competitor, the French company Total, was told that Iran had chosen an American company to send a “big message.”16

Conoco executives had briefed State Department officials a couple of dozen times over the course of the company’s negotiations with Iran, but those briefings turned out to be insufficient. Members of Congress attacked the deal with fury. Secretary of State Warren Christopher, who years earlier had led the arduous negotiations for the release of the American hostages, now denounced the oil deal as “inconsistent with the containment policy.” He added that in the Mideast, “Wherever you look you find the evil hand of Iran.” The deal did not even survive two weeks. On March 15, 1995, President Clinton signed an executive order forbidding any oil projects with Iran. The deal was seen in Washington not as an opening, an opportunity for economic engagement, but rather in the context of Iran’s support for terrorism, exemplified vividly in the attack on a Jewish center in Buenos Aires several months earlier that had killed 85 people and wounded hundreds of others. Moreover, at that time, the United States was trying to persuade other countries to restrict trade with Iran.17

With Conoco abruptly forced to withdraw, the deal went instead to Total. Subsequently, at an OPEC meeting in Vienna, Gholam Reza Aghazadeh, Iran’s then oil minister and a Rafsanjani man, summoned two American journalists to his suite in the middle of the night. Speaking in a slow, gravelly tone amid the shadowy light, he talked about the now failed deal and asked, “What is it that I don’t understand about America? Tell me what I don’t understand about America.” Why had the United States rejected the opportunity to open a door? The answer was that, whatever the signal, the door could not be opened; terrorism made economic engagement impossible. Soon after, a 1996 terrorist assault in eastern Saudi Arabia, which was apparently engineered by Iran’s own Hezbollah, killed 19 U.S. servicemen and injured another 372. That seemed to seal the door even more tightly shut.18

But then in 1997, unexpectedly, some possibility of normalization emerged with the overwhelming—and totally unanticipated—electoral victory of Mohammad Khatami as president. A cleric, Khatami was a reformist who wanted to move toward what has been called a “proper constitutional government.” He was also an accidental president, having previously been dismissed as minister of culture for being too lenient toward the arts and the film industry, and then relegated to an insignificant position as head of the national library. His presidential victory seemed to represent a rejection of the harsh theocracy by a large majority of the public. After his election, he reached out to the United States with words about a “Dialogue of Civilizations.” After some delay, Washington reciprocated with encouraging words of its own, including a call by President Clinton for an end to “the estrangement of our two nations.”19

It was difficult, however, to assess how to deal with a Tehran in which power was divided between the president and the Supreme Leader. A coalition of hardline clergy, Revolutionary Guards, security services, and judiciary—all under the control of the Supreme Leader—mounted a determined campaign of violence and intimidation to block Khatami’s reforms, neutralize his presidency, limit his flexibility on foreign policy, and undercut his chances for achieving some degree of normalization.20

Thus it was all the more surprising when, in the immediate aftermath of 9/11, Tehran stepped forward to provide limited support for the U.S. campaign in Afghanistan. The Iranians saw the Taliban as an immediate and dangerous enemy that mobilized Sunni religious fervor against Iran’s own Shia religious zeal, and it was an enemy that the United States was prepared to eliminate. Iran provided intelligence about the Taliban, urged the United States to move faster to attack the Taliban, cooperated militarily in some ways, and collaborated in establishing a provisional post-Taliban government. For the first time since the revolution, Iranian and American officials met regularly face-to-face. In the third week of January 2002, at a conference in Tokyo on Afghan economic reconstruction, Iranians approached U.S. Treasury Secretary Paul O’Neill and James Dobbins, the most senior U.S. diplomat at the meeting, and suggested wider negotiation over “other issues.”

But several days earlier, the Karine A, a freighter carrying fifty tons of Iranian arms to Gaza, had been intercepted in the Mediterranean. The message was conveyed that Khatami and his circle did not know about the shipment. But for Washington, the Karine A had a much bigger impact than Tehran’s diplomatic probes. The ship and its cargo further confirmed Iran’s commitment to terrorism. It also came at a critical moment in the definition of policy.

A week after that exchange in Tokyo, President George W. Bush delivered his State of the Union Address. It was the first since 9/11, and it was a call to mobilization in a new struggle, the war on terror. Bush’s defining phrase was the “axis of evil,” which was deliberately meant to echo the 1930s axis of Nazi Germany, fascist Italy, and Japan. This new axis included Iraq and North Korea. Iran, the archenemy of Iraq, was the third. The phrase “axis of evil,” with its clear implication of “regime change,” undercut those in Tehran who wanted some détente with the United States and largely squelched the unusual U.S.-Iranian collaboration on Afghanistan—but not quite. In Geneva, at another Afghan donors’ meeting, a senior Iranian general from the Revolutionary Guards suggested to the Americans that Iran could still work with the United States, including training 20,000 Afghan troops under U.S. leadership. He added that Iran was “still paying the Afghan troops your military is now using to hunt down the Taliban.”21

Moreover, some dialogue was resumed during the early phase of the Iraq War, when the United States removed Saddam Hussein, Iran’s main regional enemy and the biggest obstacle to the expansion of its influence.



RENEWED MILITANCY

Whatever door to dialogue might have existed was firmly closed with the 2005 election of Mahmoud Ahmadinejad as president. The former mayor of Tehran and a civil engineer by training, with a doctorate in traffic management, he had been a Revolutionary Guard and remained closely aligned with the Guards. That he was determined to return to an aggressive and militant path was made clear by his continuing fusillade of rhetoric. The 9/11 attacks, he told the United Nations, were probably “orchestrated” by elements in the U.S. government “to reverse the declining American economy and its grip on the Middle East.” The mission of Iran was “to replace unworthy rulers” and ensure that the whole world embraced Shia Islam. He threatened that Israel would be “wiped off the map”—or, in another translation, “erased from the page of time”—a slogan that also adorned missiles during military parades.22

With Iraq demolished as its regional rival, Iran communicated its ambition to dominate the Gulf. In December 2006, at a meeting in Dubai of a regional group, the Arab Strategy Forum, Ali Larijani, the sometime Iranian nuclear negotiator and later speaker of the Parliament, told his Arab audience that America’s time in the Middle East was finished, it would be leaving, and that Iran would assume the leadership of the region. But, he pledged, Iran would be guided by the principle of “good neighborliness.” The stony-faced Arab audience was clearly not thrilled by the prospect of being under the stewardship of their Iranian neighbor.23



THE STRAIT OF HORMUZ

For many years, both oil-consuming and -exporting countries have been concerned about the security of the Strait of Hormuz, through which ships pass on their way from the Persian Gulf to the high seas and on to world markets. Twenty-one miles across at its narrowest, the strait is the number one choke point for global oil supplies. About 20 tankers pass through it daily, carrying upward of 17.5 million barrels of oil. This is equivalent to 20 percent of world oil demand—and 40 percent of all the oil traded in world commerce. On the northern shore of the strait is Iran. The southern shore belongs to Oman and the United Arab Emirates.24

The strait is also a target for Iranian threats. “Enemies know that we are easily able to block the Strait of Hormuz for an unlimited period,” one Revolutionary Guard general has warned. Strategists argue, however, that Iran’s ability to disrupt the strait is more limited than its rhetoric suggests. The physical characteristics and geography of the strait and its environs would limit the effectiveness of Iran’s arsenal of cruise missiles; mines; submarines; and small, high-speed, explosive-packed boats. Any attacks would be met with overwhelming military force, including from the U.S. Fifth Fleet, which is headquartered in Bahrain and whose primary mission is to maintain freedom of the seas in the region. Moreover, an assault on the flow of oil today would be an attack not just on the West, as might have been the case two decades earlier, but also on the East, including China, which gets about one quarter of its oil from the Gulf. Here is one strategic point where U.S. and Chinese interests as consumers coincide. An effort to disrupt or close the strait would be seen as an assault on the world economy and would likely stimulate a global coalition, as happened in response to Iraq’s invasion of Kuwait in 1990.25

In addition to all this, any effort to stem the flow of oil would be very costly for Iran itself. Iran depends on the strait to export its own oil, which generates about $80 billion in earnings and about 60 percent of its budget. Unlike other Gulf countries, Iran does not have the financial reserves that would enable it to easily withstand any cessation of export earnings.

To be sure, attacks on shipping and efforts to disrupt the flow through the strait would very likely panic markets and cause prices to spike, at least initially. And there are many oil assets that could be targeted within the Gulf. But any effort to block the Strait of Hormuz would probably fall well short of the kind of catastrophe sometimes feared.



THE GAME CHANGER

But what really threatens to upset the balance of power in the Gulf—and thus the security of world oil—is Iran’s pursuit of nuclear weapons. Iran’s initial nuclear program, launched in the 1950s on a minor scale by the shah under America’s Atoms for Peace, was aimed primarily at developing atomic power. It was driven more intensively in the 1970s by the shah’s conviction that Iran’s oil and gas resources would be exhausted within three decades.26

In the mid-1980s, amid the Iran-Iraq War, the Khomeini regime made the decision to seek nuclear weapons capability. It obtained know-how and technology from the Pakistani A. Q. Khan network. In 2002 a dissident Iranian group revealed that Iran was secretly developing the capability to produce enriched uranium. Under pressure from the Europeans, Iran temporarily halted its enrichment program in 2003.

After his election, Ahmadinejad restarted enrichment. Iran’s repeated assertion that its nuclear program is for peaceful purposes is met with total disbelief by its Arab neighbors. Ahmadinejad has also accelerated the development of missiles, some of which could carry nuclear payloads. The nuclear program entered a new phase in 2006 with the activation of a large number of centrifuges to enrich uranium. Enrichment is the process by which the ratio of the U-235 isotope to the far more common U-238 is increased. A 3 percent to 5 percent U-235 concentration is required to provide the fuel for a civilian nuclear reactor. A 20 percent level is needed for medical purposes. An atomic bomb needs 90 percent. Once the 20 percent level has been reached, it is much easier to go from 20 percent to 90 percent than it was to cover the initial distance from 3 percent to 20 percent. In 2010 Iran announced that it had reached the 20 percent level. This was not long after the discovery by Western intelligence of a secret enrichment facility near the holy city of Qom.
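
The lopsided arithmetic behind that last point can be made concrete with the separative work unit (SWU) measure that the enrichment industry itself uses. The short calculation below is a sketch for illustration only; the tails assay and the two-stage comparison are assumptions of the example, not figures from the text.

```python
import math

def value(x):
    """Standard separative-work value function for a U-235 fraction x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu(product_kg, x_product, x_feed, x_tails):
    """Separative work (kg-SWU) to make product_kg at x_product enrichment."""
    # Mass balance: feed = product + tails, with total U-235 conserved.
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg
    return (product_kg * value(x_product)
            + tails_kg * value(x_tails)
            - feed_kg * value(x_feed))

NATURAL = 0.00711   # U-235 fraction in natural uranium
TAILS = 0.003       # an assumed, typical tails assay

total = swu(1.0, 0.90, NATURAL, TAILS)   # natural uranium all the way to 90%
last_leg = swu(1.0, 0.90, 0.20, TAILS)   # starting instead from 20% material
print(f"natural to 90%: {total:.0f} SWU per kg of product")
print(f"20% to 90% only: {last_leg:.0f} SWU per kg "
      f"({100 * last_leg / total:.0f}% of the total)")
```

On these assumptions the script prints roughly 193 SWU for the full journey and about 18 SWU for the final leg: by the time uranium has been enriched to 20 percent, some nine tenths of the separative work needed for weapons-grade material is already done.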

Iran claims that the enriched uranium is exclusively for its civilian nuclear program. Its first large nuclear reactor at Bushehr went online in 2010, with more plants supposed to follow. Iran’s nuclear power program will take many years to develop and will be very costly. Yet Iran is rich in natural gas, and it is to gas that many other countries are turning as one of the most desirable and low-cost fuels for electric power. This mismatch between Iran’s rich hydrocarbon resources and its plans for atomic energy—and the haste to enrich uranium—reinforces the Arab and Western conviction that it is pursuing nuclear weapons.



THE BALANCE OF POWER

An Iran with nuclear weapons would change the balance of power in the Gulf. It would be in a position, to borrow a phrase that Franklin Roosevelt had used prior to World War II, to “overawe” its neighbors. It could assert itself as the dominant regional power. Iran could directly threaten to use the weapons in the region—or actually use them—although the latter would likely trigger a massive and devastating response. But such weapons would also provide it with a license to project its power and influence with what it might regard as impunity throughout the region—both directly and through its proxies. On top of all of that, Iran, as a hegemonic nuclear power, would likely try to more directly assert dominance over the flow and price of oil, displacing the Saudis. In short, Iranian possession of such weapons would, at the very least, create insecurity for the region and for world oil supplies.

Many governments fear that elements in the Iranian government would, if they have not already done so, go into the proliferation business and provide fissile material to other governments, to its proxies like Hezbollah in Lebanon, or to terrorist groups.

When all is added up, the assessment of the impact of a nation’s acquiring nuclear weapons depends not only on the possession of the weapons themselves but also on the intentions of those who hold them. And that is why the rhetoric from Tehran would take on new significance were Iran to have those weapons. Ahmadinejad has said that the ultimate mission of the Islamic Republic is to prepare the way for the return of the Hidden Imam, who disappeared in the ninth century but whose reappearance will necessarily be preceded by a period of violent chaos and fiery war that will culminate in “the end of times”—and that this moment is imminent. When the Mahdi returns, Ahmadinejad has added, he will destroy the unjust “who are not connected to the heavens”—which means the United States, the rest of the West, and Israel—and lead survivors to “the most perfect world.” All this can only increase the deep anxiety about his finger being anywhere close to the nuclear button.

Adding to the danger is the lack of communication with Tehran, which could increase the likelihood of an “accidental” nuclear confrontation. Even during the tensest moments of the Cold War, the United States and the Soviet Union had communication channels, including, after the Cuban Missile Crisis in 1962, the “hotline” between the White House and the Kremlin to assure immediate contact during a crisis. No such channels exist with Iran. Indeed, there is very little understanding of how the regime functions, who makes decisions, and how the factions compete for power. All this adds to the risk. The lack of understanding also extends to the Gulf Arab states. The great worry, observed a leader of one of the Gulf nations, “is not how much we know about Iran, but how much we don’t.”27

The alarm among the other Gulf countries, as well as in Israel, about Iran’s objectives has been rising in direct proportion to Iran’s progress toward nuclear weapons capability. They fear that Iran will become more and more aggressive in seeking to assert its dominion over the region and in trying to destabilize other regimes. As one Saudi put it, “They want to dominate the region, and they express it strongly and clearly.” Many of the Arabs believe that intermittent “negotiations” are a standard Iranian tactic to create a cover while it proceeds with its nuclear program—what one official described as “their usual strategy” of “leading you on with false promises, designed to buy more time.”

Some Gulf Arabs are convinced that Iran is pursuing a strategy of encirclement, from its presence in Iraq and subversion among the Shia populations in Bahrain, eastern Saudi Arabia, and Yemen, to promoting insurgency on Saudi Arabia’s southern border, to financing and supplying weapons to Hezbollah in Lebanon and Hamas in Gaza. This encirclement would pressure the Arab Gulf states and, at the same time, put assets in position that Iran could activate during some future time of tension or crisis.

For years, the Israelis have spoken of a nuclear Iran as an “existential threat” to the very survival of their nation and its people. Now some Arabs also describe Iran as an “existential threat.” As a leader of one of the emirates put it, his country is only “46 seconds from Iran as measured by the flight time of a ballistic missile.”28



INCENTIVES AND SANCTIONS

The United States and Europe have been trying for several years to find a mix of policies sufficient to persuade Iran to stop short of the red line—nuclear weapons capability—and thus avoid a situation where another country concludes that it has no choice but preemptive military action. The offers include expanded trade, membership in the World Trade Organization, and—recognizing the broad public embrace in Iran of a nuclear program—support for the development of peaceful atomic energy in Iran under an acceptable international regime. At the same time, they have mounted an increasing array of sanctions, both under the United Nations and unilaterally, that restrict investment, trade, and the flow of finance. In addition to their general impact on the economy, these sanctions have put pressure on Iran by retarding the modernization of Iran’s conventional military forces and by greatly constraining international investment in Iran’s oil and gas industry and Iran’s access to international finance and capital markets.

Sabotage is another way, short of military action, of slowing Iran’s progress toward the red line. In 2010 a sophisticated computer virus known as Stuxnet was introduced into the software programs running the centrifuges, causing them to speed up, perform erratically, and self-destruct. Israel, the United States, or possibly a European country is considered the most likely author.

After intense negotiation, Russia and China have supported the United Nations sanctions but not the unilateral sanctions. As Western oil companies wound down and backed out of Iran in the face of the unilateral sanctions, Chinese companies—not governed by those sanctions—have signed a variety of large oil and gas deals with Iran that would, if implemented, bring much of the technology and investment that the Iranian industry needs. Yet at the same time, China does have many other interests, including avoidance of a conflict in the Gulf that would disrupt oil and gas supplies coming out of the region. While a number of major contracts have been signed, the Chinese companies have been moving slowly to act on them.

An alternative to conflict is a policy of containment, which would use sanctions and other restrictions to hold Iran in check until such time as Iran concludes that the advantages of real negotiations outweigh the purported benefit of nuclear weapons—or until the political situation in the country changes. That, after all, is what containment meant when George Kennan propounded it in 1947, at the beginning of the Cold War, when he outlined “a policy of firm containment” designed to confront “the Soviet Union with unalterable counterforce at every point” and increase “the strains under which Soviet policy must operate”—until a settlement was possible or until the “seeds of its own decay” brought down the Soviet Union.29

This kind of containment would also involve the extension of guarantees, nuclear shields, and extended deterrence to other nations in the region. The prospect of a nuclear Iran has already ignited a conventional arms buildup in the region. The reality of a nuclear Iran could well provoke a nuclear arms race, which, by the very numbers of countries involved, would increase the chances of such weapons actually being used. The nuclear standoff in the Cold War, despite the grave risks, had a certain stability. It was essentially between two parties, each of whom understood the meaning of deterrence and the second-strike capability of the other side. And neither wanted to risk suicide. The deterrence of the Cold War is not necessarily a good analogy at all for the highly unstable—and not very predictable—situation that a nuclear Iran would create.30



What then might reduce the risk and encourage Iran to stop somewhere short of the red line? It could be a combination of containment and external pressure, economic difficulties within Iran, and widespread domestic discontent that foments political change. The potential for change was vividly demonstrated by the overwhelming victories of the reformist Khatami in 1997 and 2001, and then by the mass “Green” protests after the bitterly contested and much-disputed reelection of Ahmadinejad in 2009. But in all those instances, the tools of violence and repression, wielded by the religious establishment and the powerful Revolutionary Guards and their allies, demonstrated the strength of the resistance and the determination to defend the system now in place. This leaves the unnerving risk that nuclear weapons would be in the hands of those who are bent on overturning the regional and international order and who believe in the necessity of an apocalypse to usher in a “perfect world.”

The whirring of the centrifuges may also be the ticking of a clock. The timing as to when Iran would cross a red line in its nuclear program is uncertain, as is the response of those who feel most threatened by it. Sometimes it is said to be two years away. But containment and other measures may stretch out the time by a few more years. Still, as one senior official from the region put it, “Whatever the time frame, time is running out.”

Here is one of the preeminent risks for regional security and the world’s energy security, and one that inescapably becomes part of the calculations for the energy future.


15

GAS ON WATER

From the moment they left Doha, the capital of Qatar, the cars took just a little over an hour, speeding along a new four-lane highway that crossed the desert with tight curves. This desert motorcade carried members of the Qatari royal family; senior officials from the government and from RasGas and Qatargas, the country’s two gas-exporting companies; along with a range of other dignitaries, including bankers and executives from the international companies that are Qatar’s partners in the greatest concentrated natural gas development the world has ever seen.

The cars slowed as they passed through several gates where identifications were checked again and again. A little distance off, rising, as though a mirage in the desert, was a huge assortment of pipes and machinery, the nearest part half assembled with tall cranes, and the rest arranged in neat lines, stretching down across the sand. Beyond all this, on the other side of the road, was the sea.

Out there, below those waters of the Persian Gulf, was the North Field, one of the world’s major energy assets. But it ends abruptly. For some forty miles off this placid coast is an imaginary demarcation line, invisible except on maps, on the other side of which is Iran and, specifically, its offshore South Pars Field. In political terms they are two separate fields. In geological terms, they are one and the same. But still, the North Field by itself constitutes the largest conventional natural gas field in the world. The median line between the two countries was negotiated before the gas field was discovered, and Iran has never been happy that it does not have a larger share.

Once out of their cars, the group was ushered into a huge tent, filled with chairs. After everyone was seated, there was a stir. The emir, Sheikh Hamad bin Khalifa al-Thani, swept in, a big, husky man in a dishdasha. He paused to shake hands and kiss people. Next to him was Abdullah bin Hamad al-Attiyah, deputy prime minister and at the time minister of petroleum. For many years, al-Attiyah’s true vocation had been natural gas, and he had driven this development. Everyone was there to celebrate an industrial feat: the building of a massive new LNG train—as the facilities for transforming natural gas into a liquid at very cold temperatures are called—ahead of schedule and on budget. Another notch for one of the largest production facilities of any kind anywhere in the world.



Qatar is a mostly flat, sandy, stony peninsula that juts out from Saudi Arabia a hundred miles into the Persian Gulf. Through the nineteenth century, Qatar had been under the overlapping rule of the Ottoman Empire, the neighboring island of Bahrain, and Great Britain, which sought to maintain its influence in the Persian Gulf in order to protect the routes to India. Qatar itself managed to eke out a livelihood from fishing and pearl diving. After a military clash between Bahraini and Qatari tribesmen, a merchant family from Doha, the al-Thanis, emerged as the ruling clan. With the collapse of the Ottoman Empire at the end of the First World War, Qatar became a British protectorate; it did not gain full independence until 1971, when the British withdrew their military presence from east of Suez.

At that time, Qatar was still a poor country. No longer. In recent years, its economy has been growing at a furious pace—some years reaching double digits. Today Qatar has the highest per capita gross domestic product in the world and has become one of the main commercial hubs of the Persian Gulf. At the same time, this small principality of about 1.5 million people (of whom at least three quarters are foreigners with temporary residence status) also rivals Russia to be the Saudi Arabia of world natural gas. For Qatar has emerged as the central player in what is becoming, after oil, the world’s second global energy business—natural gas, specifically liquefied natural gas, or LNG. This corner of desert at the very edge of the Arabian Peninsula, just two decades ago mostly dunes, is now well on its way to being one of the strategic junctures in the world economy.


NORTH FIELD AND SOUTH PARS: QATAR AND IRAN’S OFFSHORE GAS FIELD

[Map] The world’s biggest gas field, shared with Iran, has enabled Qatar to become the largest LNG exporter.

Source: IHS CERA


Qatar is also a key element in the larger mosaic of the world natural gas market. Not so many years ago, there were three distinct gas markets. One was Asia, mainly fed by LNG. The second was Europe, with a mix of domestic gas, long-distance pipeline gas, notably from Russia, plus some LNG. The third was North America, with virtually all gas delivered by pipeline. Each had its own distinctive pricing system. But then the development of LNG, represented most notably by Qatar, appeared to be tearing down the walls. The markets looked like they were coming together and would eventually be integrated into a single global natural gas market in which prices were converging. That seemed irreversible—until a major innovation in the United States made it reversible.



After the inaugural ceremonies, the emir boarded a minibus to tour the new facility. The bus crossed the sand and then turned into the site. It was like driving into a dense forest, but one that was not damp and whose colors were not varieties of green but rather silver and steel glinting under the dry desert sun. For this forest had none of the vagaries of nature but rather was an intricately planned maze of interconnected pipes and towers and turbines and, occasionally, what looked like huge white Thermos bottles. That image was appropriate enough since the liquefaction train was in effect a giant-sized refrigerator, into which was pumped the natural gas from the North Field, after it had been scrubbed and cleaned of impurities. There, through a facility that stretched more than a half mile, the gas would step by step be compressed and refrigerated. It would come out the other end as a liquid that could be pumped into ships and transported around the world. And it was a very expensive forest. Adding up all the trains together, some $60 billion of engineering and hardware has been compressed into this small area in a remarkably short number of years.

This train—70,000 tons of concrete, 440 kilometers of electric cable, 13,000 metric tons of piping—was one stage in the great complex at Ras Laffan, which in its entirety is the single largest node in the expanding global LNG business that involves more and more countries. The growing list of LNG suppliers ranges from Malaysia, Indonesia, and Brunei in Asia; to Australia; to Russia (from the island of Sakhalin); to Qatar, Oman, Abu Dhabi, and Yemen in the Middle East; to Algeria, Libya, and Egypt in North Africa, and Nigeria and Equatorial Guinea in West Africa; to Alaska; to Trinidad and Peru in the Western Hemisphere. Other countries may join the queue, including Israel, after a major new gas discovery offshore that could turn the Eastern Mediterranean into a new frontier for gas development.

This global expansion of LNG is a very big business. Projects today can easily run $5 billion or $10 billion—or even more—and take five to ten years to complete. The Gorgon prospect in Australia is budgeted at $45 billion. Altogether, the price tag for LNG development worldwide could add up to as much as half a trillion dollars over the next fifteen years.

Yet the very possibility of this huge global LNG business derives from a single physical phenomenon—that when natural gas is cooled to a temperature of −260°F, it turns into a liquid and, as such, takes up only 1/600th of the space it occupies in its gaseous state. That means it can be pumped into a specially designed tanker, shipped long distances over water, and then stored or re-gasified and fed into pipelines and sent to consumers.
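
The 1/600 figure can be sanity-checked from the densities of methane, the main component of natural gas. The values below are approximate handbook numbers used purely as a back-of-the-envelope illustration; the exact factor varies with gas composition and reference conditions.

```python
# Volume reduction from liquefying natural gas, estimated from the density
# of methane as a gas at roughly atmospheric conditions versus as a liquid.
# Both figures are approximate and for illustration only.
rho_gas = 0.68     # kg/m^3, methane gas near room temperature and 1 atm
rho_lng = 422.6    # kg/m^3, liquid methane at about -260 F (-162 C)

reduction = rho_lng / rho_gas
print(f"volume reduction factor: ~{reduction:.0f}x")  # on the order of 600
```

The ratio comes out at roughly 620, which is why the industry’s rule of thumb is that LNG occupies about 1/600th of the gas’s original volume.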

But very few of the participants in this business today would know that the industry owes its existence to someone whose fascination with LNG long predated theirs.



CABOT’S CRYOGENICS

Just after World War I, Thomas Cabot, a graduate of both Harvard and the Massachusetts Institute of Technology, had headed down to West Virginia to sort out a natural gas pipeline business owned by his father, Godfrey, who, to Thomas’s distress, had lost all interest in it. Returning to Boston, Thomas found that he had other pressing family business to attend to—keeping his father from going to jail. It turned out that Godfrey had had no use for the federal income tax, which Woodrow Wilson had signed into law in 1913, and for the next several years Godfrey had simply not bothered to pay it. “Income is only a matter of opinion,” he would say to government agents. In return, the Internal Revenue Service had expropriated Godfrey’s bank accounts.

While wrestling with this problem, Thomas had some time on his hands, and he started writing a scientific paper that related to one of his father’s other failed ventures. This concerned cryogenics—the study of very low temperatures, at which various gases turn into liquids. During the First World War, Godfrey Cabot had built a plant in West Virginia to liquefy natural gas and patented a design. “My father had dreamt of liquefying components of natural gas,” Thomas Cabot later said. As a business, however, it had proved to be a total bust.1

Cryogenics was based on the work of Michael Faraday, who in the 1820s had used cold temperatures to turn gases into liquids. In the 1870s the German scientist Carl von Linde had done further work on refrigeration. His research attracted interest from brewing companies, which, along with their customers, decidedly liked the idea of cold beer. Linde was soon supplying the brewers with refrigerators. He later patented processes for liquefying oxygen, nitrogen, and other gases at very low temperatures and making them available on a commercial scale. His work provided the basis for practical applications of cryogenics.

It was back to his father’s dream of liquefying natural gas that Cabot turned, while also fending off the IRS. Cabot specifically wanted to explore how extreme refrigeration could be used during the summer season, when demand was low, to compress natural gas into a liquid, enabling it to be held in storage and then returned to its gaseous state in winter, when demand was high.

Cabot’s father, who rarely demonstrated positive responses to anything his son did, showed his characteristic lack of interest in his son’s paper. Seeking to interest someone, Cabot passed it to the chief engineer of a natural gas pipeline company who was “intrigued to the greatest possible extent” by the idea of compressing natural gas in order to store it. But it was not until 1939 that the first pilot plant was built.

During World War II, in order to meet the energy needs of factories working two or three shifts a day to supply the war effort, the East Ohio Gas Company built an LNG storage facility in Cleveland. In October 1944 one of the tanks failed. Stored LNG seeped into the sewer system and ignited, killing 129 people and creating a mile-long fireball. Subsequently, the causes of the accident were identified: poor ventilation, insufficient containment measures, and the improper use of a particular steel alloy that turned brittle at very low temperatures. The design and safety lessons would be seared into the minds of future developers.2

After World War II, such interest as remained in LNG shifted from using refrigeration to store gas for consumers to quite a different purpose: using it as a way to transport gas over long distances by water.



KILLER FOG

In December 1952 a killer fog gripped London, making it difficult for people even to find their homes, let alone breathe, killing thousands and making many more ill. The fog resulted from the interaction of weather conditions and coal smoke. Rapidly reducing the burning of coal and replacing it with cleaner fuels became a critical priority. The government-owned British Gas Council teamed up with an American company to import natural gas from Louisiana into Britain in the form of LNG. The first shipment to Britain, aboard the Methane Pioneer, arrived in 1959. This may have proved the concept, but importing LNG was a very small business. Yet demand, stimulated by a promotional campaign for “High Speed Gas,” was exceeding all expectations. If this new LNG business in the UK was going to get anywhere, it needed a much larger source of gas.

Royal Dutch Shell bought a controlling interest in the nascent LNG company and started developing a large natural gas deposit in Algeria, far out in the Sahara Desert. In 1964, two years after Algeria gained its independence from France, its first shipment of liquefied natural gas was loaded on a tanker in Arzew for a month-long, 1,600-mile trip to Canvey Island in the lower Thames. A few months later, another shipment left for Le Havre in France.3

This was the real beginning of the international LNG trade. It demonstrated what would become the characteristic practice in the business. It is expensive to turn gas into liquid, transport the liquid, and then turn the liquid back into gas. These large costs require predictability about prices and markets. Thus the business model for LNG projects has traditionally involved long-term (often twenty-year) contracts among all the interested parties—countries, international oil companies, utility customers, and sometimes trading houses. They share overlapping ownership of tankers and liquefaction and regasification facilities. This model, quite distinct from the international oil business, would last a half century.

In the mid-1960s, Europe certainly looked poised to become a growing LNG consumer. But what might have been an LNG boom was abruptly stymied—by competitive gas that was cheaper and more accessible. In 1959 a huge gas field—at the time the largest in the world—had been discovered under the flat farmlands in Groningen in the northern part of the Netherlands. Then in 1965 natural gas deposits were also found in the British sector of the North Sea. With that, Britain made a wholesale shift to natural gas for appliances and heating. Subsequently, the Soviet Union and then Norway began to deliver growing volumes of natural gas, via pipeline, to Western Europe. LNG now had to compete in Europe.

Asia was a different story. Japan, in the midst of its amazing postwar economic boom, saw natural gas as a way to reduce the stifling air pollution produced by its coal-fired electric-generation plants. Lacking any significant gas or oil resources of its own, Japan turned to LNG. The first LNG arrived in Japan in 1969. The source was the United States—the Cook Inlet in southern Alaska, in a project developed by Phillips Petroleum. After the 1973 oil crisis, Japan was determined to reduce its dependence on Middle East oil and diversify its energy supplies. LNG, along with nuclear power, was a key part of the prescription. By the end of the 1970s, Japan was importing large volumes of LNG.4

As they entered their economic miracle phases, both South Korea and Taiwan—two other hydrocarbon-poor countries—also became major LNG importers. All the projects followed the original model, based on overlapping long-term contracts. Because the imported gas was replacing not only coal but also oil in electric generation, the LNG price was indexed to oil prices, meaning that the price of LNG followed oil’s.



THE “FUEL NON-USE ACT”

The natural gas industry in the United States was very different. Natural gas, produced like oil, had become an important energy resource but a largely local one. During World War II, when gasoline was rationed and fuel shortages in the fighting theaters were a constant threat for the Allies, President Franklin Roosevelt urgently wrote to his secretary of the interior: “I wish you could get some of your people to look into the possibility of using natural gas. I am told that there are a number of fields in the West and Southwest where practically no oil has been discovered but where an enormous amount of natural gas is lying idle in the ground because it is too far to pipe to large communities.”5

But this had to wait until after World War II. It required the development of long-distance pipelines, stretching halfway across the country, in an industry for which “long distance” had heretofore meant 150 miles. Pipelines connected the Southwest to the Northeast, and New Mexico and West Texas to Southern California. Thus natural gas became a truly continental business, in which the main population and industrial centers were connected to gas fields that were far across the country. As the nation’s economy grew and suburbs rolled out around major cities, natural gas consumption increased at a rapid pace.

By the beginning of the 1970s, natural gas provided fully 25 percent of America’s total energy. It was produced either jointly with oil or from pure gas wells. But then a natural gas shortage gripped the country. In the cold winter of 1976–77, parts of the Midwest ran so short that schools and factories had to shut down. Companies were already scrambling to find new supplies. LNG looked to be a very good—and timely—answer. Several companies, including the Cabot Corporation (the company Thomas Cabot had created in sorting out his father’s taxes), contracted with Algeria for supplies. The Texas-based pipeline company El Paso ordered enough LNG tankers to constitute a virtual floating pipeline. Receiving terminals for re-gasifying the liquid gas were built on the Gulf Coast and the East Coast. The most visible, designed to help meet New England’s gas deficit, was Cabot’s in Everett, Massachusetts, right across Boston Harbor from the USS Constitution (“Old Ironsides”), the famed frigate launched in 1797. Another big project was planned for the West Coast, at Point Conception, California’s elbow into the Pacific, north of Santa Barbara.

But it turned out the natural gas shortage was not an act of nature but man-made, the consequence of inflexible regulation. The federal government, which regulated natural gas prices, had set them at such an arbitrarily low level as to stifle supply. The obvious solution was to let the market determine prices. But what was straightforward economics was hardly the same when it came to politics. The single biggest domestic political battle during the presidency of Jimmy Carter was over natural gas price deregulation. “I understand now what Hell is,” Energy Secretary James Schlesinger said in 1978 amid the battle over natural gas pricing between House and Senate negotiators. “Hell is endless and eternal sessions of the natural gas conference.”6

Finally, the Natural Gas Policy Act of 1978 started to decontrol prices. The act was a wonderful example of what happens when economics and politics interact in the same test tube. It provided distinct pricing schedules for some 22 different categories of a commodity that, in molecular terms, was more or less all the same—one carbon atom and four hydrogen atoms. Still, the end point was pretty clear: deregulation.

As part of the compromise, Congress enacted the Fuel Use Act. But the law might as well have been called the “Fuel Non-Use Act” as it banned the burning of natural gas in power plants to generate electricity. Natural gas was deemed the “prince of hydrocarbons” and was to be kept for higher uses—heating and cooling, cooking, and industrial processes. It was too “valuable” to be used to make electricity.

To the surprise of some, markets actually worked. Deregulation of prices led to a surge in supplies. Moreover, as supplies increased, prices did not shoot through the roof but settled at lower levels. Indeed, so much additional natural gas came onto the market that it created an extended oversupply that was known as the gas bubble. After a time, it seemed that this was one bubble that would never burst.

The oversupply of low-cost domestic gas did in the prospects for LNG, for LNG was simply too expensive to compete. The expected boom in the U.S. LNG business turned into a bust. Projects were canceled; companies defaulted on contracts for LNG tankers. Companies that had committed to LNG teetered on bankruptcy. Cabot Corporation was losing $5 million on every cargo of LNG.7

Yet by the 1990s, the market was changing again. The fears of shortage had long since faded away, and the prohibition on using natural gas in electric generation was lifted. Instead of being banned, natural gas became the fuel of choice for electric power. New technologies made natural gas turbines much more efficient and thus lowered costs. Gas was seen as a cleaner, more environmentally attractive fuel than coal, and the development of new nuclear power in the United States had already come to a stop. In contrast, power plants fired by gas could be built more quickly and at a much lower cost than their competitors.

By the mid-1990s the U.S. economy was booming, and, as a result, electricity demand was growing. To meet this demand, power generators were frenetically building natural gas-fired power plants. But where was the gas supply to come from? In response to rising prices, drilling did increase, but in contrast to the traditional pattern, new drilling brought forth only a paltry increase. It was proving harder to step up gas output from existing basins, which were said to be mature. Access to new areas was difficult, owing to increasing regulatory delays. Moreover, many prospective areas, both onshore and offshore, were closed off to drilling altogether for environmental reasons.

In the face of rising demand and flat supply, the market tightened. Consumers saw their bills increase dramatically. Even harder hit were energy-intensive industries, like petrochemicals. They could no longer compete against products from the Middle East that were made from far less expensive gas. Chemical plants were shut down in the United States. If supplies did not increase, and costs did not come down, companies would have to close even more of their U.S. plants and lay off still more workers.

The answer once again seemed to be LNG. Innovation had made it available by attacking costs and dramatically increasing scale. Cabot, which only a few years earlier had been desperately trying to extricate itself from unviable LNG contracts, now started to look for new LNG supplies.

One possible source was Trinidad, where significant natural gas reserves had been discovered offshore. But could gas from Trinidad be competitively landed in the United States? “The conventional wisdom was that the cost of LNG was going to continue to rise,” recalled Gordon Shearer, who worked for Cabot at the time. “But then we realized that the cost structure of LNG didn’t make sense.” Cabot succeeded in bringing down these costs substantially by simplifying designs and promoting much more competitive bidding.8

Trinidad demonstrated that LNG need not be the high-priced alternative but rather could compete with conventional pipeline gas. By 1999 this cheaper LNG was starting to flow in growing volumes into the terminal at Everett, near Old Ironsides, across the bay from Boston.



“THE CROWN JEWELS”

But then there was Qatar.

The North Field was discovered by Shell in 1971 in the waters off Qatar. At first no one knew how vast it was; indeed, it took decades for the full dimensions to be recognized. Today its reserves are estimated at 900 trillion cubic feet. This makes the State of Qatar the third-largest resource owner of conventional natural gas in the world. Ahead of it are only Russia and Iran, whose South Pars Field is really the same structure as the North Field.

In the 1970s and 1980s, there was no obvious market for the North Field gas, no demand for it, and no way to get it to market. Eventually, Shell relinquished the North Field and moved on to the more immediately attractive Northwest Shelf project in Australia.

In 1971, the same year that Shell discovered Qatar’s North Field, Mobil Oil discovered Arun, a huge offshore natural gas field in the northern part of Sumatra, the largest of the 17,000 islands that make up the nation of Indonesia. As billions of dollars flowed into the project, Arun turned into the largest LNG development of the 1970s and 1980s. The onshore liquefaction plants were in the province of Aceh, and the supplies went to Japan. The project was absolutely crucial to the fortunes of Mobil and its profitability. “It was the crown jewels, no question,” recalled one Mobil executive.9

But a problem emerged—Arun’s output appeared set to decline. Thus, with increasing urgency, Mobil searched for another supply of natural gas, unreachable by pipeline and thus stranded from markets, where its LNG skills could be applied. North Field stood out; Shell was now gone, and a discouraged BP had just pulled out of an LNG project there that existed only on paper. Mobil proposed a structure that would allow it to take a share in two Qatari companies, Qatargas and RasGas. This kind of structure made sense to the Qataris, especially as RasGas did not yet exist as a company. They did their deal.

The new partnership needed to find customers, but it was very hard going. “We weren’t able to do much,” recalled one of the Qatari marketers.

Every decade or so, however, Japan sought to add another major source of LNG not only to meet demand but also as part of its diversification strategy. Chubu Electric, which serves the territory next to Tokyo and whose biggest customer is Toyota, contracted for the first gas from the North Field. A Korean utility, Kogas, signed on next.

With these deals, Qatar had gotten in the door in Asia, the biggest LNG market in the world. But Qatar was a latecomer, and it ran a real risk that it would be relegated to a secondary position as a supplemental supplier. And Qatar had too much gas for that. But where else could it go? Finally, after a couple of years of study and debate, a senior Qatari settled the matter: “We should be heading west,” he said. That meant to Europe—and beyond.10

During this same period, Qatar was going through political change that would reinforce its commercial drive. In 1995 Crown Prince Hamad bin Khalifa al-Thani sent a message to his father, the emir, Sheikh Khalifa bin Hamad, then vacationing in Switzerland. It was actually a pretty simple message: Don’t bother coming back. The crown prince had just deposed his father, who had been in power since overthrowing his cousin in 1972 and was not seen as a very competent ruler. Nevertheless, Sheikh Khalifa had insisted on being in charge of everything. Indeed, it was said that he personally signed all checks over $50,000. He was also thought to have been bleeding the country of revenues, and indeed, after the bloodless coup in 1995, the new emir, Sheikh Hamad, sued his father for return of the state’s money. That case was settled out of court, and the aged father found a new life for himself based in London.11

Now in power, Sheikh Hamad initiated a far-reaching program of modernization and reform, ranging from permitting women candidates in municipal elections to the opening in Qatar of the Mideast branches of New York’s Weill Cornell Medical School, Georgetown University’s School of Foreign Service, and Texas A&M University. Qatar became home to the forward headquarters for the U.S. Central Command, which has responsibility for the Middle East. It also became home to, and indeed financed, the Al Jazeera satellite news network.

The emir was determined to turn his small Persian Gulf principality into a global energy giant, based on LNG, with the revenue stream that would go with it. Accelerating LNG was the way to do that. But a huge amount of money would have to be invested. That meant that LNG costs—considered absolutely irreducible—had to be reduced. Even so, the capital costs would be enormous. “The more I learned about Qatar,” recalled Lucio Noto, former CEO of Mobil, “the more I realized the scale was beyond the capacity of an individual company.”12

The merger of Mobil with Exxon in 1999 made the great expansion much more doable. The combination brought critical Mobil assets—the gas resource, LNG expertise, and relationships—together with Exxon’s financial strength and its skill in project execution. The combined company now had the size and wherewithal to think big in terms of scale and risks. Actually, very big. And scale was the way to bring costs down—much bigger ships, much bigger liquefaction trains, and much bigger turbines. Projects were managed with great discipline, capturing the learning and bringing down the costs of subsequent projects. One way to do that was by making facilities as standard as possible, doing the design very carefully, and then sticking to it. As one of the senior managers put it, “The rule was no change orders.”

Hungry at the time for work, Korean shipyards tendered for much bigger LNG carriers—two times the size of those then afloat—at a very attractive price. RasGas accepted the bids. Higher volumes meant lower costs. Now, as they put it, Europe was “reachable.” The joint venture knew that it could compete against pipeline gas in Europe, and even beyond Europe. For with sufficient scale (and bolstered by the liquids with the gas) Qatar could deliver competitively priced gas anywhere in the world.

By 2002 Qatar had emerged as a potent new competitor in the global gas market. It could dispatch large amounts of LNG into any major market—Asia, Europe, and the United States. Breaking with the traditional business, it could also do so without necessarily being tied to a long-term contract. It built its own receiving terminal, in Europe. Qatar was at the forefront in creating a new business model in which both buyers and sellers were willing to buy or sell LNG without complete reliance on long-term contracts. And the numbers are huge: By 2007 Qatar had leapfrogged over Indonesia and Malaysia to become the world’s number one supplier of LNG, and this small emirate of 1.5 million people was on its way to being able to provide almost a third of the world’s LNG supply.

It was not just the physical resources and technical capabilities that projected Qatar into this premier position. It was also the result of what those on the other side of the negotiating table recognized to be efficient and determined decision making. Qatar could be very tough, but it was also intent on closing deals and getting things decided quickly, not in multiple years. As Minister Al-Attiyah put it, “If we do a deal one day, we don’t wait, we sign it the next day.” Reliability was one of the critical pillars on which the Qatari industry was built. Once a deal was done, stability of contracts underpinned confidence and facilitated investment. The importance of this approach was made clear by comparison to the other side of the median line, off the coast of Qatar, where Iran after forty years has yet to be able to turn South Pars gas into exports.13

By the 2000s, it seemed that natural gas, carried around the world on tankers, was on its way to becoming a truly global industry. Historically, owing to the high cost of transporting gas over long distances, natural gas had been traded regionally. With costs brought down so significantly, that constraint no longer applied.

What this meant was vividly demonstrated in July 2007. On July 16 a large earthquake hit central Japan, damaging the Kashiwazaki-Kariwa Nuclear Power Station—the world’s largest, home to seven reactors. The entire facility was shut down, creating an immediate shortage of electric generation. The owner of the power station, Tokyo Electric Power Company (TEPCO), began buying heavily from the short-term LNG market to fuel standby natural gas–fired power plants that could make up for the nuclear power shortfall. LNG tankers intended for elsewhere immediately changed course on the high seas and headed for Japan. In that same month, half a world away, outages on pipelines from the natural gas fields in the North Sea interrupted supplies to Europe. This too triggered a quick diversion of LNG supplies from their intended routes.

Almost four years later in March 2011, a giant earthquake and tsunami shook Japan, knocking out power and setting off a major nuclear accident at the Fukushima Daiichi plant. Natural gas supplies were redirected to Japan on an even more massive scale.

What had been an inflexible, regionally based LNG industry had turned into a flexible international business. Natural gas had become a global commodity.14


16

THE NATURAL GAS REVOLUTION

George P. Mitchell, a Houston-based oil and gas producer, could see the problem coming. His company was going to run short of natural gas, which would put it in a very difficult position. For it was contracted to deliver a substantial amount of natural gas from Texas to feed a pipeline serving Chicago. The reserves on which the contract depended were going down, and it was not at all clear where he could find more gas to replace those depleting reserves. But he did have a strong hunch, piqued by a geology report that he had read.

That was in the early 1980s. Three decades later, Mitchell’s relentless commitment to do something about the problem would transform the North American natural gas market and shake expectations for the global gas market. Indeed, the stubborn conviction of this one man would change America’s energy prospects and force recalculations around the world.

The son of a Greek goat herder who had somehow ended up in Galveston, Texas, Mitchell had grown up dirt-poor. He had worked his way through Texas A&M University waiting tables, selling candy and stationery, and doing tailoring for his fellow students. After World War II, Mitchell had started in the oil-and-gas business in Houston, working out of a one-room office atop a drugstore. Over the years he had built it into a very substantial company, Mitchell Energy and Development, that focused much more on natural gas than oil.

For Mitchell, natural gas was virtually a cause. He was such a believer that when he suspected someone of speaking too kindly of coal, he would reach for the phone and set him straight in a few short sentences. What he wanted to see was more natural gas use. And he simply would not accept the notion that supplies were constrained by scarcity.

But where was he going to get more gas? The geological report that he had read in 1982 pointed to a possible solution. For a very long time it had been recognized that natural gas was to be found not only in productive reservoirs but also trapped in hard, concretelike shale rock. This shale rock served as the source rock, the “kitchen,” where the gas was created, and also as the cap that sat on top of reservoirs that prevented the gas (and oil) from leaking away.1

Gas could certainly be extracted from shale rock. In fact, it is thought that the very first natural gas well in the United States, in Fredonia, New York, in 1821, drew from a shale formation. The problem was the economics. It was inordinately difficult and thus very expensive to extract gas from shale. It just was not anywhere near commercially viable. Yet maybe it was possible with the right mixture of technological innovation and persistence.

Mitchell’s “laboratory” was a large region called the Barnett Shale, around Dallas and Fort Worth, Texas, which sprawled under ranches, suburbs, and even Dallas–Fort Worth International Airport. Despite Mitchell’s efforts, the Barnett Shale proved continuously unforgiving. Mitchell insisted that his engineers and geologists keep plugging away in the face of ongoing disappointment and their own skepticism. “George, you’re wasting your money,” they would say to him over the years. But when they raised objections, he would reply, “This is what we’re going to do.”2

Fortunately, something of a carrot was available, what was called Section 29. This was a provision in the 1980 windfall profits tax bill that provided a federal tax credit for drilling for so-called unconventional natural gas. Over the years, that incentive did what it was supposed to do—it stimulated activity that would otherwise not have taken place. In the 1990s, the tax credit mainly supported the development of two other forms of unconventional natural gas—coal-bed methane, and gas from tight sands, the very name of which conveys the challenge.



“FIGURE A WAY”

But even with the incentive of the Section 29 tax credit, producing commercial-scale shale gas—another form of unconventional gas—was proving much more difficult. In addition to Mitchell, a few other companies were also tackling the problem, but they became discouraged and dropped out. In 1997 the only major company still working on shale gas development in the Barnett region shut down its office. Only Mitchell Energy and a few smaller independents were left. George Mitchell would just not give up. “It was clear to him that Barnett held a lot of gas, and he wanted us to figure a way to get it out,” recalled Dan Steward, who led the development team. “If we couldn’t, then he would hire other people who could. He had a way of getting things out of people they might not know they could deliver on.”

The introduction of 3-D seismic much improved the understanding of the subsurface. Still, Mitchell Energy had not yet cracked the Barnett’s code. “All sorts of experienced, educated folks,” said Steward, “wanted to bail out of the Barnett.”

Indeed, by the late 1990s the area was so much off the radar screen that when people did forecasts of future natural gas supplies, the Barnett did not even show up. Mitchell Energy’s board of directors was becoming increasingly skeptical. After all, when almost two decades of effort were added up, it was clear that the company had lost a good deal of money on the Barnett play. But George Mitchell would not give up; he insisted that they were getting closer to cracking the Barnett’s code.3



BREAKTHROUGH

Fraccing—otherwise known as hydraulic fracturing—is a technique that was first used at the end of the 1940s. It injects large amounts of water, under high pressure, combined with sand and small amounts of chemicals, into the shale formation. This fragments underground rock, creating pathways for otherwise trapped natural gas (and oil) to find a route and flow through to the well.

Mitchell Energy had been experimenting with different methodologies for fraccing. By the end of 1998, the company finally achieved its breakthrough: it successfully adapted a fraccing technique—what is known as LSF, or light sand fraccing—to break up the shale rock. “It was the trial-and-error approach that Mitchell Energy used that ultimately made the difference,” said Dan Steward.

George Mitchell recognized that developing the Barnett was going to take a lot of capital. He had also been at it as an independent for sixty years, and that was a long time. He had other interests; he had developed the Woodlands, the twenty-five-thousand-acre new community north of Houston. He put Mitchell Energy up for sale. Three other companies looked at the company, but they all decided, after due diligence, to pass. It appeared to all of them that while Mitchell’s pursuit of shale gas, fraccing included, may have been an interesting idea, it was a commercial flop.

The team at Mitchell went back to work on the shale, further developing its capabilities, deepening its understanding—and producing a lot more natural gas.

One of the companies that had passed was another independent, Devon Energy, from Oklahoma City. But in 2001, its CEO, Larry Nichols, noticed a sudden surge in gas supply from the Barnett Shale area. “I challenged our engineers as to why this was happening,” said Nichols. “If fraccing was not working, why was Mitchell’s output going up?” The answer was clear: Mitchell Energy had indeed cracked the code. Nichols did not waste any more time. In 2002 Devon acquired Mitchell Energy for $3.5 billion. “At that time,” added Nichols, “absolutely no one believed that shale drilling worked, other than Mitchell and us.”

Devon, for its part, had its own strong capabilities in another technology, horizontal drilling, which had begun to emerge in the 1980s. Advances in controls and measurement allowed operators to drill down to a certain depth, and then drill on at an angle or even sideways. This would expose much more of the reservoir, permitting much greater recovery of gas (or oil) from a reservoir.

Devon combined the fraccing know-how (and the team) it had acquired from Mitchell with its own skills in horizontal drilling. All that required a good deal of experimentation. Devon drilled seven such wells in 2002. “By 2003,” said Nichols, “we were becoming very confident that this drilling truly worked.” Devon drilled another fifty-five horizontal wells in the Barnett that year. It did work.4

Shale gas, heretofore commercially inaccessible, began to flow in significant volumes. The combination of advances in fraccing and horizontal drilling unleashed what became known as the unconventional gas revolution.

Entrepreneurial independent oil and gas companies jumped on the technology and quickly carried it to other regions—to Louisiana, Arkansas, and Oklahoma, and then to the “mighty Marcellus” shale that sprawls beneath western New York and Pennsylvania down into West Virginia.



THE “SHALE GALE”

Something was very strange about the numbers. As they rolled in for 2007 and then 2008, they showed something unexpected that did not make sense—a sudden surge in domestic production of U.S. natural gas. How was that possible? Where was it coming from? The United States was supposed to be facing a sharp decline in domestic production—for which LNG was the only sure answer. Then it started to become clear: a technological breakthrough was beginning to make its impact felt. The rest of the industry now realized that something new was happening. That included the major oil and gas companies, which had heretofore been focused on the big international LNG projects that were thought to be required to offset the apparent shortfall in North American natural gas.

Over the next few years, the output of shale gas continued to increase. Some now started to call it the “shale gale.” As the supply increased and skills were further developed, costs came down. Shale gas was proving to be cheaper than conventional natural gas. In 2000 shale was just 1 percent of natural gas supply. By 2011 it was 25 percent, and within two decades it could reach 50 percent.

Shale gas transformed the U.S. natural gas market. Perennial shortage gave way to substantial surplus, which turned the prospects for LNG in North America upside down. Just a few years earlier, LNG had seemed destined to fill an increasing share of the U.S. market. Instead it became a marginal supply rather than a necessity. Electric utilities, remembering gas shortages and price spikes, had been reluctant to use more natural gas. But now, with the new abundance and lower prices, lower-carbon gas seemed likely to play a much larger role in the generation of electric power, challenging the economics of nuclear power and displacing higher-carbon coal, the mainstay of electric generation. As a source of relatively low-priced electric power, it created a more difficult competitive environment for new wind projects. Shale gas also began to have an impact on the debate over both climate change and energy security policy. By the beginning of this decade, the rapidity and sheer scale of the shale breakthrough—and its effect on markets—qualified it as the most significant innovation in energy since the start of the twenty-first century. As a result of the shale revolution, North America’s natural gas base, now estimated at 3,000 trillion cubic feet, could support current levels of consumption for a hundred years or more. “Recent innovations have given us the opportunity to tap larger reserves—perhaps a century’s worth—in the shale under our feet,” President Obama said in 2011.5 The potential here is enormous.6
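The century figure is straightforward arithmetic. As a rough check—assuming, purely for illustration, North American consumption on the order of 30 trillion cubic feet a year, a figure not given in the text:

$$\frac{3{,}000\ \text{Tcf}}{\approx 30\ \text{Tcf per year}} \approx 100\ \text{years}$$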

At the same time, the rapid growth in shale gas has stoked environmental controversy and policy debate. In part, demographic differences have brought the controversy to the fore. Lower-density states like Texas are accustomed to energy development, and encourage it as a major source of income for the population and revenues for the state government. Residents in more populated eastern states, like New York and Pennsylvania, are not accustomed to drilling in their region (although Pennsylvania is certainly long experienced with coal mining and was the birthplace of the oil industry). While some welcome the jobs, royalties, and tax revenues, others are taken aback by the surface disruption and the sudden increase in large truck traffic on what had been quiet country roads.

But, more than traffic, the environmental debate is centered on water. Critics warn that fraccing may damage drinking water aquifers. The industry argues that this is highly unlikely, as the fraccing takes place a mile or more below drinking water aquifers and is separated from them by thick layers of impermeable rock. Moreover, the industry has a great deal of experience with fraccing: more than a million wells have been fracced in the United States since the first frac job six decades ago. Fraccing uses small amounts of chemicals; the general trend now is to disclose those chemicals.

Although most of the discussion is about fraccing, the biggest issue has become not what goes down, but what comes back—the water that flows back to the surface. This is the “flow back” from the fraccing job, and then the “produced water” that comes out of the well over time. This water needs to be handled properly, managed, and safely disposed of.

Three things can be done with the flow back and produced water. It can be injected into deep disposal wells; it can be put through treatment facilities; or it can be recycled back into operations. In traditional oil and gas states, the wastewater has often been reinjected. But the geology of Pennsylvania does not, for the most part, lend itself to reinjection. And so water that cannot be recycled has had either to be put through local treatment facilities or trucked out of state.

Aboveground management of waste has to keep pace with the rapid development of the shale industry. New large-scale water treatment facilities are being developed. The industry is now recycling 70 to 80 percent of the flow back. There is also intensive focus on innovation: developing new methods to reduce the amount of water going in and to treat the water coming out, and drilling more wells from a single “pad” to reduce the footprint.

A more recent concern is “migration”—whether methane leaks toward the surface and into some water wells as a result of fraccing. This is a controversial subject. Methane has been found in water wells in gas-producing regions, but there is no agreement on how this can happen. Some cases of methane contamination in water wells have been tied to shallow layers of methane, not the mile-deep deposits of shale gas where fraccing takes place. In other cases, water wells may have been dug through layers of naturally occurring methane without being adequately sealed. It is difficult to know for certain because of a lack of “baseline” data—that is, measurements of a water well’s methane content before a shale gas well is drilled in the neighborhood. Gas developers are now routinely taking such measurements before drilling begins in order to establish whether methane is preexisting in water aquifers. A new question concerns whether there are significant “fugitive emissions” or whether those emissions are captured.

One other subject of controversy is regulation itself. Some argue that drilling is an unregulated activity. In fact, the entire drilling process—including the water aspects—is heavily regulated by a mixture of state and federal agencies. The states are the primary regulators of drilling, including hydraulic fracturing as well as all other activities inherent in the production of oil and gas. While the federal government has ultimate authority over water treatment and disposal, it has delegated its authority to many states whose own regulations meet or exceed federal standards. The next few years will see much argument about whether the federal government should have more responsibility. There will also be much more research on the water issues, and continuing focus on advancing the technology, both for drilling and for environmental protection in areas where shale gas is produced.7

The shale gale had not only taken almost the entire natural gas industry by surprise; it also sent people back to the geological maps. Very large potential supplies of shale gas have been identified in the traditional energy areas of Canada, in Alberta and British Columbia, as well as in eastern Canada, in Quebec. Chinese oil companies, recognizing significant potential for shale gas as well as coal-bed methane, have signed agreements with Western companies to develop both. Altogether the base of recoverable shale gas outside North America could be larger than all global conventional natural gas discovered to date. But only a portion is likely to be developed. Even so, the next several years are sure to see a substantial addition to the world’s supply of natural gas.8



GLOBAL GAS

While shale gas is, thus far, a North American phenomenon in terms of large-scale production, it is already changing the dynamics of the global gas business. For its emergence as a new supply source coincided with a rapid buildup of LNG. In 2010 Qatar celebrated reaching 77 million tons of LNG capacity—28 percent of the world total. Australia is emerging as a new LNG powerhouse, second only to Qatar and well positioned to supply Asia—and to continue to expand. Altogether, between 2004 and 2012, the world’s LNG capacity will double. That means that what was accomplished in the first forty years of LNG development is being replicated in just eight years. But assumptions that helped underpin and drive this rapid buildup are now somewhat unhinged. The United States was supposed to be a major guaranteed market because of a projected domestic shortfall. But instead it is a marginal market.9

This puts much more LNG at sea, literally, in search of markets. Growing Asia will absorb a significant amount, more than most had anticipated a few years ago. But far from all. Thus, the immediate impact is on Europe, which is now the world’s number one contestable market. Freely available LNG, sold on a spot basis, can take some market share away from pipeline gas, whose price is, according to twenty-year contracts, indexed to more-expensive oil.

This not only creates greater competition among gas suppliers, pushing down prices. It also has wide geopolitical impact, for it upsets a four-decade-old economic and political balance that has proved so durable that it even survived the upheaval set in motion by the collapse of the Soviet Union and the fall of communism. The new gas competition is central to the complex and evolving relationship among a much-expanded European Union, the Russian Federation, and the other newly independent states of the former Soviet Union, some of which are now members of the European Union.

The development of the gas market in Europe is embodied in the web of pipelines that crisscross the European Continent. Look at the pipeline map from the 1960s, and all one sees are a few strands of string. Today such a map looks like a big bowl of spaghetti. Local gas markets had earlier developed in different parts of Europe. But the real European gas market only began with the development of the Groningen field in Holland in the 1960s, followed by the offshore oil and gas fields in the British sector of the North Sea.

In the 1970s a new pipeline brought the first Soviet gas into Europe. It came with a strong geopolitical tone. West German Chancellor Willy Brandt signed the first Soviet gas deal in 1970 as a key element in his Ostpolitik, aimed at reducing Cold War tensions, normalizing relations with the Soviet Union and Eastern Europe, and creating some common interest between East and West. “Economics,” as Brandt put it, was “an especially important part of our policy.” He specifically, if indirectly, wanted to reestablish contact with communist East Germany, which had been cut off completely by the construction of the Berlin Wall in 1961. The dependence flowed in both directions; this gas trade, for the Soviets, became a major—and crucial—source of hard currency earnings.10

Over the years that followed, the gas business would be built up and managed by a handful of Western European transport and distribution companies joined together by 25-year contracts with the gas export arm of the Soviet Ministry of Gas Industry.



“WOUNDED BY A FRIEND”

By the early 1980s, major discoveries in West Siberia had propelled the Soviet Union ahead of the United States as the world’s largest gas producer. Inevitably, those big new supplies provided further impetus to sell more gas into Western Europe.

The Soviets and the Europeans began to plan for a large, new, 3,700-mile pipeline from the great Urengoy field in West Siberia. But before it was ever built, the proposed pipeline created a bitter rupture in the Western alliance, prefiguring the controversies over the geopolitics of European natural gas that continue to the present.

The Reagan administration became alarmed at the prospect of a much larger East-West gas trade. It had launched a major arms buildup to counter Soviet military expansion; the last thing the administration wanted was additional hard currency earnings from natural gas financing the Soviet military-industrial complex. It also feared that greater reliance on Soviet gas would create vulnerable dependence that the Soviets could exploit to pry apart the Western alliance and that—in a time of crisis—would give the Soviets crucial leverage. The American administration warned that the Soviet Union could use dependence on its natural gas to “blackmail” the Europeans by threatening to turn off the heat and stoves in Munich.11

The Reagan administration struck back at the proposed new pipeline. It imposed a unilateral embargo that prohibited companies from exporting the billions of dollars of equipment essential to the construction and operation of the pipeline. It applied not only to U.S. companies but also to European companies whose equipment was based on U.S. technology.

The Europeans, however, were as determined as the Soviets to go ahead. They wanted both the diversification away from the Middle East and the environmental benefits from reduced coal use. They also wanted the revenues and the jobs, as well as the opportunity to expand their export markets in the Soviet bloc. Even Reagan’s closest ally, British Prime Minister Margaret Thatcher, looking at the loss of jobs in an area of Scotland with 20 percent unemployment, pushed back. Given her relationship with Reagan, she took the embargo very personally. “We feel particularly wounded by a friend,” she said. The British government ordered the British companies that had contracts with the Soviets to ignore the embargo and to go ahead and ship their goods. Moreover, it became apparent that the Soviets could replicate some of the supposedly proprietary technology, albeit at higher cost. Thus the embargo would only delay, not prevent, the new pipeline from going ahead.12

By the end of 1982, a solution would be found. The Western allies would very seriously “study” the problem in order to determine what would be a “prudent” level of dependence on the Soviet Union. After much discussion, the study eventually established a dependence ratio of 25 percent, which just happened to be higher than the share of Soviet gas even with the new pipeline. It was also understood that natural gas from a major new source, Norway’s Troll field, would begin to flow into European markets.

The Urengoy pipeline was indeed built, and the flow of Soviet gas into Europe more than doubled over a decade. Even when the Soviet Union collapsed, the gas continued to flow. In the 1990s the earnings from gas exports would prove a critical source of revenues for Russia as the government of Boris Yeltsin struggled to stay afloat in those difficult years.



THE EMERGENCE OF GAZPROM

Out of the Soviet collapse, and specifically out of the Ministry of Gas Industry, a new Russian gas company emerged: Gazprom. Eventually it would have private shareholders not only in Russia but around the world, and would become, for investors and fund managers, a proxy stock for the overall performance of the Russian stock market and economy. At one point, in mid-2008, Gazprom’s stock market capitalization catapulted to more than $300 billion, and it ranked as the third-largest company in the world by that measure, behind ExxonMobil and PetroChina.13

Gazprom remains just over 80 percent owned by the Russian state with which it is closely aligned and to which it pays taxes of one kind or another equivalent to about 15 percent of the total government budget. In many meetings with Western businessmen, Prime Minister Vladimir Putin has demonstrated a deep interest and an extraordinarily detailed knowledge about the gas business. For his part, Dmitry Medvedev, before becoming Russia’s president, was chairman of Gazprom. The company produces over 80 percent of Russia’s total natural gas output. It also has a monopoly over gas transportation within Russia and over all gas exports. It is, thus, Russia’s interlocutor with the global gas market. Gazprom, while retaining its primacy at home, has also been moving to become a global diversified energy company. That began with the establishment of a joint marketing company in Germany in 1993 with Wintershall, a German energy company.

By 2005 European gas supply appeared to be in political balance. Domestic European production was 39 percent; Russia supplied 26 percent; Norway, 16 percent; Algeria, 10 percent; and almost another 10 percent came from other sources, largely LNG. But by then the system that had created the European gas market was disintegrating, and many of the premises on which it had been built were progressively dissipating, creating new tensions and conflicts.

For one thing, Europe was going through great change. The European Union had grown to 27 members; the new additions were either former Soviet satellites or, in the case of the Baltic nations, former constituents of the Soviet Union. These new members have a high degree of dependence on Russian gas, but their energy relations are wrapped up in overall unsettled and sometimes tense relations with Russia.

The gas market was also changing in somewhat unpredictable ways. In order to promote “competition,” the European Union was seeking to break up the integrated companies that had helped build the market and move away from the stability of 25-year contracts that the companies had used as the building blocks. Instead the EU wanted to promote trading, hubs, and spot markets. But it was not clear how the next generation of expensive new gas fields in Russia (or elsewhere) could be developed without the guarantee of such long-term contracts. At the same time, gas supplies from the North Sea were declining. In addition, the dominance of pipeline gas could erode as increasingly large volumes of LNG sought entry into Europe.

And the Soviet Union was gone. The countries through which the critical pipelines transited were no longer part of the Soviet Union or its satellites, but independent states. They were dependent on Russia for their gas, but the history of Soviet domination weighed heavily on their relations.14 And Russia was dependent on them for access to the European market.



UKRAINE VERSUS RUSSIA

No relationship was more complex than that with Ukraine. Russia and Ukraine were bound together by history. The Russian state had actually been founded in Kiev, now the capital of Ukraine, and Ukraine had been part of the Russian Empire from 1654. Russian, and not Ukrainian, was the daily language of life in Soviet Ukraine. After independence in 1991, the country seemed to have a natural split: eastern Ukraine still looked to Russia; western Ukraine gravitated increasingly toward Europe.

Gas greatly complicated the new relationship between the two countries. Since the breakup of the Soviet Union in 1991, Russia and Ukraine had often been at odds, and sometimes rancorously so, over gas pricing and supply, and over the tariffs on—and indeed control of—the crucial pipeline to Europe.

The victory by the Western-oriented Orange Revolution in the December 2004 Ukrainian presidential election put the two countries on a path to confrontation. The Orange Revolution aimed at reducing Russian influence and reorienting toward Europe. The new president, Viktor Yuschenko, had, prior to the election, barely survived a mysterious poisoning with deadly dioxin, and he built much of his campaign on turning away from Russia.

Natural gas became the inevitable focus for rising tensions. Ukraine was heavily dependent on gas from Russia. It has the most energy-intensive economy in the world, three times more energy intensive than that of neighboring Poland. The previous government had negotiated a deal with Moscow that gave Ukraine the gas at a steep discount from the price charged to Western Europe. This was really a subsidy to the aged Soviet-era industrial infrastructure and one that was essential to keeping it competitive in world markets. For years, international institutions like the World Bank had been urging Ukraine to raise domestic gas prices to improve energy efficiency, but Ukraine had resisted from fear of the impact on its industries and on jobs.

In its relations with Russia, Ukraine had one trump—the pipeline network, which carried over 80 percent of Russia’s gas exports to Europe. Yuschenko had described this system as Ukraine’s “crown jewels,” and he had no intention of letting Russia gain control.15

But for Russia, greater control over those pipelines was a decisive objective, exactly because it was so central to its export position. Ukraine owed Russia billions of dollars in unpaid bills for gas. Moreover, it was buying gas at much lower prices than the Europeans. That might have been acceptable were Ukraine still aligned with Russia. But it was not. Therefore, Moscow asked why it should provide what was, in effect, a $3 billion–plus annual subsidy to a hostile Orange Revolution, thus depriving Gazprom and the Russian government of revenues that they would otherwise have. For months after Yuschenko became president, increasingly angry negotiations on gas prices dragged on between Gazprom and Ukraine, with no resolution. Complicating things further was the existence of a strange and nontransparent company called RosUkrEnergo, which appeared to control the flow of gas in and out of Ukraine.

At 10:00 a.m. on the cold winter Sunday of New Year’s Day, January 1, 2006, pipeline pressure suddenly began to go down at the border into Ukraine. Gazprom had begun to cut gas deliveries directed to Ukraine itself. Moscow immediately warned Ukraine not to siphon off any of the gas that was meant to flow on to Europe. Notwithstanding, Ukraine proceeded to do exactly that, and some shortfalls of gas became evident not only in Ukraine but also in Central Europe.

The showdown was resolved within a few days and the gas shipments resumed. But the shock waves reverberated across the entire Continent. Russian deliveries of gas to some constituents of the former Soviet Union had been disrupted at times of tension. But never in four decades had there been a decision that would disrupt supplies to Europe. Such disruptions as had occurred were the result of weather or technical malfunctions. Here now, it seemed to some, was concrete proof of the dangers of dependence that had animated the pipeline battle of the 1980s. “Europe needs a clear and more collective policy on the security of our energy supply,” said Andris Piebalgs, the EU energy commissioner. Austria’s economic minister was blunter: “Dependence on Russia should be reduced,” he declared. Over the next couple of years, natural gas became a heated subject of contention and suspicion between East and West. At one point, Alexei Miller, the CEO of Gazprom, told the Europeans, “Get over your fear of Russia, or run out of gas.”16

For their part, Russia and Ukraine would have further standoffs over natural gas pricing. Even the subsequent government of President Viktor Yanukovych, which had better relations with Moscow, continued to describe its pipeline network as “our national treasure.”



DIVERSIFICATION

The lasting impact of the gas controversies was to fuel a new campaign of diversification on both sides of the argument. That meant a new round of pipeline politics, elevated to the geopolitical level. The Russians were determined to get around Ukraine and Poland with a series of new pipelines. Gazprom and ENI had already built Blue Stream, which crosses the Black Sea from Russia to Turkey and is the deepest underwater pipeline in the world. They now bruited the idea of South Stream, which would cross the Black Sea from Russia to Bulgaria and deliver gas to Italy. Russia also launched a large new pipeline project, Nord Stream, in partnership with major Western European gas companies and chaired by former German Chancellor Gerhard Schröder. Nord Stream travels under the Baltic Sea from near St. Petersburg to northern Germany.

But most contentious of all are the EU-backed proposals aimed at bringing non-Russian gas to Europe by skirting Russia’s southern border and involving countries that were formerly part of the Soviet Union, countries that Russia continues to see as part of its sphere of influence. The European Union calls this the Fourth Corridor and emphasizes that it is not a challenge to Russia but simply appropriate diversification. Some European companies have combined to promote the Nabucco project, whose odd name was borrowed from a Verdi opera that some of the original planners had seen one night while meeting in Vienna. Nabucco would pick up gas in Turkey and carry it all the way to Germany.

But where would the gas come from to fill the Fourth Corridor pipeline system? That is the central question and a source of great uncertainty—in terms of price, availability, and reliability—and politics. It could be Turkmenistan, which has immense resources but has made exporting east to China its number one priority. It could be Azerbaijan, but it has its own plans. The gas resources in Kurdistan, in northern Iraq, could potentially be very large, but both the politics and the security situation are very unsettled. The transit fees across Turkey need to be reasonable, both for shippers and buyers. The European market has to be large enough to absorb the gas and thus justify the billions of dollars in investment. In the meantime, Russia’s interest is to discourage the Fourth Corridor, which would somewhat erode its own market position in Europe, and to move quickly to preempt it with its own new pipelines.17

This clash of pipeline politics is further unsettled by the potential for alternative new supplies—from the global LNG market. These supplies could greatly increase, both because of the growing LNG capacity around the world and the disappearance of the U.S. market owing to shale gas. These additional volumes of LNG would compete with present and future pipeline gas, putting downward pressure on all gas prices and thus making the economics of new pipeline projects more problematic. In addition, a major new source of gas might be opening up on Europe’s doorstep in the eastern Mediterranean. The deepwater Leviathan field offshore Israel is one of the largest discoveries so far this century.

And then there is the potential for shale gas. There is no geologic law that restricts shale gas to North America. But only around 2009 did serious work begin to determine how abundant shale gas is in Europe, and how difficult it will be to extract. A new study suggests that Europe’s endowment of unconventional gas—shale gas and coal-bed methane—may be as large as that of North America. Development of these resources could provide an alternative to gas imports, whether they come by pipeline from the east or by ship in the form of LNG.18

But it is still early days, and a great deal of effort will be required to develop such resources. Obstacles will range from local opposition and national policy to lack of infrastructure and sheer density of population. Still, the imperatives of diversification will likely fuel the development of unconventional gas resources in some parts of Europe—most notably Poland and Ukraine—as elsewhere. The new supplies will compensate for declining conventional domestic supplies. Moreover, by enhancing the sense of security and diversification around gas supplies, the development of unconventional gas could end up bolstering confidence in relying on expanded gas imports.



A FUEL FOR THE FUTURE

Natural gas is a fuel of the future. World consumption has tripled over the last thirty years, and demand could grow another 50 percent over the next two decades. Its share of the total energy market is also growing. Thirty years ago, world gas consumption on an energy-equivalent basis was only 45 percent that of oil; today it is about 70 percent. The reasons are clear: It is a relatively low-carbon resource. It is also a flexible fuel that can play a larger role in electric power, both on its own merits and as an effective—and indeed necessary—complement to greater reliance on renewable generation. And technology is making it more and more available, whether through advances in conventional drilling, the ability to move it over long-distance pipelines, the expansion of LNG onto a much larger scale, or, most recently, the revolution in unconventional natural gas.

A few years ago the focus was mainly on rapid growth in LNG. With that went a widespread belief that a true world gas market was in the making, one in which supplies would easily move to one market or another, and one in which prices would converge. The arrival of shale gas has, for the time being, disproved that assumption. Yet the emergence of this new resource in North America is certainly having a worldwide impact—demonstrating that the gas market is global after all—just not quite in the way that would have been expected.


PART THREE

The Electric Age


17

ALTERNATING CURRENTS

Electricity underpins modern civilization. This fundamental truth is often expressed in terms of “keeping the lights on,” which is appropriate, as lighting was electricity’s first major market and remains a necessity. But today that phrase is also a metaphor for its pervasiveness and essentiality. Electricity delivers a precision unmatched by any other form of energy; it is also almost infinitely versatile in how it can be used.

Consider what would not work and would not happen without electric power. Obviously, no refrigerators, no air-conditioning, no television, no elevators. It is essential for every kind of industrial processing. The new digital world relies on electricity’s precision to drive everything that runs on microprocessors—computers, telephones, smart phones, medical equipment, espresso machines. Electricity makes possible and integrates the real-time networks of communications, finance, and trade that shape the world economy. And its importance only grows, as most new energy-consuming devices require electricity.1

Electricity may be all-pervasive. But it is also mostly taken for granted, much more so than oil. After all, gasoline usage requires the conscious activity, once or twice a week, of pulling into the filling station and filling up. To tap into electricity, all one needs to do is flip a switch. When people think about power, it’s usually only when the monthly bill arrives or on those infrequent occasions when the lights are suddenly extinguished, whether by a storm or by some breakdown in the delivery system.

All this electrification did indeed begin with a flip of a switch.



THE WIZARD OF MENLO PARK

On the afternoon of September 4, 1882, the polymathic inventor Thomas Edison was in the Wall Street offices of the nation’s most powerful banker, J. P. Morgan. At 3:00 p.m., Edison threw the switch. “They’re on!” a Morgan director exclaimed, as a hundred lightbulbs lit up, filling the room with their light.2

Nearby, at the same moment, 52 bulbs went on in the offices of the New York Times, which proclaimed the new electric light “soft,” and “graceful to the eye . . . without a particle of flicker to make the head ache.” The current for these bulbs flowed underground, through wires and tubes, from a coal-fired electric generating plant that Edison had built a few blocks away, on Pearl Street, partly financed by J. P. Morgan, to serve one square mile of lower Manhattan. With that, the age of electricity had begun.

The Pearl Street station was the first central generating plant in the United States. It was also a major engineering challenge for Edison and his organization; it required the building of six huge “dynamos,” or generators, which, at 27 tons each, were nicknamed “Jumbos” after the huge elephant from Africa with which the circus showman P. T. Barnum was then touring America.

Another landmark event in electric power occurred a few months later, on January 18, 1883. That was the first electricity bill ever—dispatched to the Ansonia Brass and Copper Company, for the historic sum of $50.44.3



It had required a decade of intense, almost round-the-clock work by Thomas Edison and his team to get to that electric moment on Pearl Street. Still only in his midthirties at the time, Edison had already made himself America’s most celebrated inventor with his breakthroughs on the telegraph and the phonograph. He was also said to be the most famous American in the rest of the world. Edison was to establish the record for the greatest number of American patents ever issued to one person—a total of 1,093. Much later, well into the twentieth century, newspaper and magazine polls continued to select him as America’s “greatest” and “most useful citizen.”

Edison was largely self-taught; he had only a couple of years of formal schooling, plus six years as an itinerant telegrapher, making such achievements even more remarkable. His partial deafness made him somewhat isolated and self-centered, but also gave him an unusual capacity for concentration and creativity. He proceeded by experiment, reasoning, and sheer determination, and, as he once said, “by methods which I could not explain.” He had set up a research laboratory in Menlo Park, New Jersey, with the ambitious aim, as he put it, of making an invention factory that would deliver “a minor invention every ten days and a big thing every six months or so.”4



“THE SUBDIVISION OF LIGHT”

That was not so easy, as he found when he homed in on electricity. He wanted to replace the then-prevalent gas-fired lamp. What he also wanted to do, in his own words, was to “subdivide” light; that is, deliver electric light not just over a few large streetlights as was then possible, but make it “subdivided so that it could be brought into private homes.”

Many scoffed at Edison’s grand ambition. Experts appointed by the British Parliament dismissed Edison’s research as “good enough for our transatlantic friends” but “unworthy of the attention of practical or scientific men.”

To prove them wrong and successfully subdivide light, Edison would have to create an entire system—not just the lightbulb but also the means to generate electricity and distribute it across a city. “Edison’s genius,” one scholar has written, “lay in his ability to direct a process involving problem identification, solution as idea, research and development, and introduction into use.” His aim was not just to invent a better lightbulb (there had already been 20 or so of one kind or another) but to introduce an entire system of lighting—and to do so on a commercial basis, and as quickly as possible.5

The inventor had to start somewhere, which did mean with the lightbulb. The challenge, for a practical bulb, was to find a filament that, when electricity flowed through it, would give off a pleasing light but that also could last not just one hour but for many hours. After experimenting with a wide variety of possible sources—including hairs from the beards of two of his employees—he came up with a series of carbon filaments, first made from cotton thread and then from cardboard and then bamboo, that passed the test.



Years of acrimonious and expensive litigation followed among Edison and other competing lightbulb inventors over who had infringed whose patents. The U.S. Court of Appeals finally resolved the legal fight in the United States in 1892. In Britain, however, the court upheld competing patents by the English scientist Joseph Wilson Swan. Rather than fight Swan, Edison established a joint venture with him to manufacture lightbulbs in Britain.

To create an entire system required considerable funding. Although not called such at the time, one of the other inventions that could be credited to Edison and his investors was venture capital. For what he developed in Menlo Park, New Jersey, was a forerunner of the venture capital industry that would grow, coincidentally, around another Menlo Park—this one in Silicon Valley in California. As an Edison biographer has observed, it was his melding of the “laboratory and business enterprise that enabled him to succeed.”6

Costs were a constant problem, and as they increased, so did the pressures. The price of copper, needed for the wires, kept going up. “It is very expensive experimenting,” Edison moaned at one point. The rising costs strained his relations with his investors, leading him to complain, “Capital is timid.”

But he did keep happy his lead investor—J. P. Morgan—by wiring up Morgan’s Italianate mansion on Madison Avenue in the East 30s in New York City with 385 bulbs. That required the installation of a steam engine and electric generators in a specially dug cellar under the mansion. The clanging noise irritated not only the neighbors but also Mrs. Morgan. Moreover, the system required a technician to be on duty from 3:00 p.m. to 11:00 p.m. every day, which was not exactly efficient. Making matters worse, one night Edison’s wiring set J. P. Morgan’s library on fire. But, through it all, Morgan remained phlegmatic, with his eye on the objective. “I hope the Edison Company appreciates the value of my house as an experimental station,” the banker dryly remarked.7



“BATTLE OF THE CURRENTS”

Except for Morgan’s mansion, Edison concentrated on developing central generating stations that would supply part of the city. But Edison’s system had a major limiting flaw. Because of its low voltage, Edison’s direct current electricity could not travel very far. If Edison had had his way, every square mile of a city would have needed its own generating plant, which would have certainly minimized the economies of scale and much slowed the spread of electric power.

Alternating current—otherwise known as AC—provided an alternative. The Pittsburgh industrialist George Westinghouse had acquired the patent of a brilliant but eccentric Serbian inventor, Nikola Tesla, that made alternating current practical. A transformer would step up electricity to much higher voltage, which meant it could be economically transported long distances over transmission lines, and then stepped down at the other end for subdivision into individual homes. That made possible larger generating plants, serving a much greater area. With that came true economies of scale and much lower costs.
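The engineering logic can be made concrete with a back-of-the-envelope calculation (an illustration added here, not drawn from the original account). A line delivering power $P$ at transmission voltage $V$ carries current $I = P/V$, so the resistive loss in conductors of resistance $R$ is

$$P_{\text{loss}} = I^{2}R = \left(\frac{P}{V}\right)^{2} R.$$

Step the voltage up tenfold with a transformer and, for the same delivered power, the losses in the wires fall by a factor of one hundred—which is why high-voltage alternating current could travel economically over long distances while low-voltage direct current could not.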

What followed was a titanic struggle between Edison and Westinghouse. Because electricity was a network system, there could be only one winner, and the outcome would be winner takes all.

Edison threw all his formidable prestige into his furious battle against alternating current, denouncing it as unsafe and warning that it would lead to people’s accidental electrocution. At that time, electrocution happened to be much in the news, as the state of New York was considering the electric chair as the preferred method for executions. The state’s electrocution expert, also secretly working for Edison, sought to inextricably link alternating current with electrocution and death by the electric chair. As part of the campaign, Edison himself electrocuted animals to demonstrate the dangers of alternating current. Edison’s group went further and tried to dub the electric chair “the Westinghouse” and to describe execution by electrocution as being “Westinghoused.”8

Yet the superiority of alternating current was so clear that Westinghouse’s alternating current system prevailed. Westinghouse grabbed market share from Edison and established the foundations for large-scale generation. Edison’s technological stubbornness weakened his company financially during a time of business difficulties. His company, Edison General Electric—against his own fervent protestations—was forced into a merger with a competitor. To add to the ignominy and Edison’s pain, the merger stripped his name from the merged company. Thereafter, it would be known simply as General Electric.

Electricity’s ascendancy was on display at the Chicago World’s Fair of 1893, which was so popular that the number of people attending it over six months was equivalent to more than a third of the entire population of the United States. The throngs were amazed by the demonstrations of all the versatile things electricity could do. One of them was something remarkable that most had never seen before: the world’s fair emblazoned the night, earning Chicago the nickname “the white city.” At the fair’s center the General Electric Company erected the “Tower of Light” as a tribute to Edison. But the exposition also demonstrated Westinghouse’s victory over Edison, for it was Westinghouse and Tesla’s alternating current that powered most of the lighting and the exhibits.9

The technical pieces were now in place for the growth of electric power. But what would be the business model?



THE METER MAN

Samuel Insull first went to work in London at the age of fourteen as an office boy at the British magazine Vanity Fair. Then, answering a classified advertisement, he was hired as a secretary in the office of the European representative of Thomas Edison. There he made such a good impression that the chief engineer recommended him to the inventor, and in 1881 Insull emigrated to America to work as Edison’s secretary. On Insull’s first day in Menlo Park, Edison kept him until midnight taking dictation, and then told him to get some rest, as they would start again at six in the morning. The 117-pound Insull quickly established himself as the dynamo in Edison’s organization. After Edison lost control of the company in 1892, Insull moved to Chicago to take over one of the 20 or so competing generating companies in the city.10

In the early 1890s, electricity was still a luxury product. Customers were charged by the number of bulbs installed in their homes or offices. Insull had much grander ambitions. He wanted scale: he wanted to lower prices and sell to as many people as he could and by so doing democratize electricity. He couldn’t get there by having people pay by the bulb. But how to do it? As often happens in innovation, Insull stumbled upon the answer by accident.

On a trip to England in 1894, Insull, worn out by his frenetic pace, decided to go down to the seaside resort of Brighton for a little rest. As evening rolled in, he was stunned to see the town light up. All the shops, no matter what their size, were bright with electric lights. How could this be? The manager of the local power plant, it turned out, had invented a meter that could measure how much electricity each store or home used. This made possible a new business model: instead of paying by the bulb, people could pay by their usage, along with an additional charge covering the capital invested in the project. “We had to go to Europe,” Insull explained afterward, “to learn something about the principles underlying the sale of the product.”11

The meter, imported by Insull to Chicago, would become the interface, the middleman so to speak, between the generating company and the customer. Electricity could be priced by consumption, not by the number of bulbs. This facilitated the scale that Insull wanted and helped propel the vast growth in his business. Insull did everything else he could to get scale, from aggressive marketing to installing the world’s largest generators to gaining new customers like the rapidly expanding trolley lines, which he could electrify—all in order to sell to the most people he could, at the lowest prices possible. Insull assured other utility executives in 1910 that if they priced their product cheaply enough, they would greatly increase their sales and “you will begin to realize the possibilities of this business, and these possibilities may exceed your wildest dreams.”12



“NATURAL MONOPOLY”: THE REGULATORY BARGAIN

To build his empire, Insull used the great financial innovation of the day—the holding company—a company that controls part or all of the interest in another company or companies. Insull constructed a pyramid of these holding companies, with each tier holding a controlling interest in the one below, on down to the base—the power plants themselves. In such a way, Insull, through his holding companies, could control a huge amount of assets with a relatively small outlay of capital, and thus reap outsize returns.
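
The leverage in such a pyramid is easy to see in a stylized example; the percentages are illustrative, not Insull’s actual holdings. A bare majority stake at each of three tiers controls the operating assets at the base with only about an eighth of their capital:

% A stylized three-tier pyramid, assuming a 51% controlling stake at each level.
% Effective share of the base’s capital supplied by the top investor:
\[
0.51 \times 0.51 \times 0.51 \approx 0.13
\]
% So roughly $13 of top-tier equity controls $100 of operating assets;
% outside shareholders and bondholders at each tier supply the other $87.

Each additional tier multiplies the leverage again, magnifying both the returns on the way up and, as events would show, the fragility on the way down.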

To build out the pyramid base, Insull would acquire local electric utilities and close their small, inefficient power plants, and build much larger central stations plus the transmission lines to serve groups of localities. Access to electricity would be much expanded, and prices would come down. In this way his companies became the provider of electricity to millions of Americans.

But chaotic competition threatened this new model. An electric generating company typically had to obtain a franchise from the municipality, and the municipality might grant franchises to a number of competing companies. Moreover, in many cases, the whole business of franchising could become quite corrupt—a franchise granted could also be a franchise withdrawn.

Altogether, between 1882 and 1905, the city of Chicago granted 29 power franchises, plus another 18 from towns that it had absorbed. Some of the franchises were as small as “a few blocks on the northwest side” or “the old twelfth ward.” Three of them covered the entire city. At one point, members of the Chicago City Council and their friends set up a competitive power company with the obvious purpose of forcing Insull to purchase it at a vastly inflated price. Such was Insull’s muscle, however, that he was able to knock down the price. The political instability surrounding a franchise made capital raising difficult; yet this industry had an enormous appetite for investment capital in order to expand and achieve the greater efficiencies and lower costs that came from larger scale.13

Faced with such a treacherous business environment, Insull promoted yet another innovation—this, not technical, but political: it was the regulatory bargain. Because of the large investment required by the business, the economics of this industry dictated, in his view, that it be a monopoly. But he argued it was a particular kind of monopoly—a “natural monopoly.” It was very wasteful to have two companies laying wires down the same alley and building capacity and competing head to head to supply the same customer. Costs to the customer would end up higher, not lower. By contrast, because of the efficiency of its investment, a natural monopoly would deliver lower prices to the consumer.

This was where the bargain came in. Insull recognized the political reality: If “the business was a natural monopoly,” he said, “it must of necessity be regulated by some form of governmental authority”—specifically a state public utility commission, which would determine the “fairness” of its rates. For, he said, “competition is an unsound economic regulator” in the electricity business. This call for government regulation hardly endeared him to many of his fellow electricity entrepreneurs, but it became the way the business worked. In due course, this regulatory bargain was ingrained into public policy: as a natural monopoly, the electric power business had to be treated as a regulated industry with its rates and its profits determined by a public utility commission. What was required of the regulators in turn was, as Supreme Court Justice Oliver Wendell Holmes Jr. wrote in 1912, “fair interpretation of a bargain.”14

Wisconsin and New York established the first such commissions in 1907. By the 1920s, about half the states had done so, and eventually all of them did. This new regulatory bargain imposed a fundamental responsibility on the natural monopolist—the utility had the obligation to “serve”—to deliver electricity to virtually everyone in its territory and provide acceptable, reliable service at reasonable cost. Otherwise, it would lose its license to operate.



ELEKTROPOLIS: TECHNOLOGY TRANSFER ACROSS THE SEAS

Chicago, lit up by Insull, became the world’s showcase for electricity. It had only one rival: Berlin, which became known around the world as Elektropolis.

The inventor Werner von Siemens and an engineer named Emil Rathenau would be decisive figures in Berlin’s—and Germany’s—electrical preeminence. Rathenau acquired the German rights to Edison’s electrical inventions. His company achieved recognition in 1884 when it succeeded in lighting up the popular Café Bauer, on the Unter den Linden, the most prominent boulevard in Berlin. Rathenau built up what eventually became AEG—Allgemeine Elektrizitäts-Gesellschaft—German for the “General Electric Company.”

By 1912 Berlin would be described as “electrically, the most important city” in Europe. Siemens and AEG became formidable companies, competing head-on for contracts to electrify cities and towns throughout Germany.

Electricity was the hallmark of progress in the late nineteenth and early twentieth centuries. Illuminating that progress, Berlin, with three million people, and Chicago, with two million, easily outshone London, which, with seven million people, was the largest—and most important—city in the Western world.

Whereas Chicago and Berlin both had centralized systems, London was highly fragmented, with 70 generating stations, about 70 different methods of charging and pricing, and 65 separate utilities, including such variegated firms as the Westminster Electric Supply Corporation, the Charing Cross Electric Supply Company, the St James and Pall Mall Company, and many more. “Londoners who could afford electricity toasted bread in the morning with one kind, lit their offices with another, visited associates in a nearby office building using still another, and walked home along streets that were illuminated by yet another kind.”

London lagged because of the lack of a regulatory framework that would have promoted a more rational unified system. A prominent engineer complained in 1913 that London used “an absurdly small amount of electricity” for a city its size. “There is a very great danger of our not only being last, but of our remaining last.” London continued to lag for years after.15



“AIM FOR THE TOP”

In the United States by the 1920s, Samuel Insull had implemented his formidable business model—taking advantage of the economies of scale derived from centralized mass production to provide an inexpensive product to a diverse customer base—on a grand scale. His great electric power empire stretched across the Middle West and into the East. Chicago itself showed the scale of what had been achieved. When Insull took over Chicago Edison in 1892, there were just 5,000 customers in the entire city, and they paid by the number of electric bulbs. The optimistic view at that time was “as many as 25,000 Chicagoans might ultimately use electricity.”

But by the 1920s, 95 percent of the homes in Chicago were wired for electricity. And they paid by usage. This was the prototype of Insull’s vision for the world: “Every home, every factory and every transportation line will obtain its energy from one common source, for the simple reason that that will be the cheapest way to produce and distribute it.” By the boom years of the 1920s, Insull himself had become not only one of the most famous businessmen in the world but also an icon of capitalism. Many saw him as the greatest business statesman of the age, his words were venerated like those of a sage, and “Insullism” was applauded as the future of capitalism.16

At the peak, in 1929, Insull’s empire of holding and operating companies, valued in the billions, controlled power companies in 32 states; and he held 65 chairmanships, 85 directorships, and 11 presidencies. He was a man of wide renown and a great benefactor. He was the “presiding angel” of the Civic Opera House in Chicago and was responsible for its building.

Reporters constantly sought out his wisdom. Asked by one reporter for his advice to young men starting out, he said, “Aim for the top.” And what was his “greatest ambition in life”?

“To hand down my name as clean as I received it,” he replied.

That was not quite to be.17



“I HAVE ERRED”: TOO MUCH DEBT

In the booming late 1920s, Insull’s empire went on a buying spree, acquiring new companies and consolidating control of its holdings—all of this at higher and higher prices. In December 1928 he created a new company, Insull Utilities Investments, and to assure his control over his empire, he issued shares to the public at $12. Before the summer of 1929 was out, the stock had hit $150.18

The business required continually greater scale to bring down costs, deliver cheaper power, expand the customer base—and to assure profits. But such expansion created enormous capital needs, which Insull met by taking on more debt and by selling common stock to customers and the public. Insull relentlessly pursued growth. Even after the 1929 stock crash, his companies were still making investments, and taking on still more debt with some abandon. The enterprise became leveraged to an extraordinary degree. Moreover, Insull’s accounting practices were suspect. His companies, it was said, would overcharge each other for services. They would also sell assets among themselves, marking up the book values after the sales; they virtually ignored accounting for the depreciation of assets. The whole business was predicated on Insull’s ability to continue raising massive sums, even as investors had little understanding of the actual finances of the companies. But time was running out.

As the Great Depression deepened and the stock market continued to decline, banks began to call in their loans from Insull. The ugly reality became clear: the debt he had taken on for acquisitions far exceeded the value of the stock that had been pledged as collateral, for the value of this stock had been plummeting. “I have erred,” Insull said. “My greatest error was in underestimating the effect of the financial panic.”19

In 1932 Insull’s whole empire collapsed, done in by its debt and intricately complex corporate structure. When the bankers, at a meeting in New York, told Insull that they would give him no more respite and were pulling his loans, he is reported to have said, “I wish my time on earth had already come.”

The New York Times had described Insull as a man of “foresight and vision . . . one of the foremost and greatest builders of American industrial empires.” But now Insull was disgraced; “too broke to be bankrupt,” said one banker. Insull’s fall from the pinnacle was as precipitous and calamitous as any in American history.20

Thousands of small investors were left holding securities worth only pennies on the dollar. The federal government launched criminal charges against him for fraud and embezzlement. Not only was he now poor, he had also become, according to both prosecutors and much of the public, a scoundrel, an embezzler, and a crook. Everything else was forgotten.

But Insull was more than the scapegoat for the Great Depression. He had quickly become the very embodiment of the evils of capitalism in an economically prostrate country that was close to losing faith in the system. Franklin Roosevelt, campaigning for president in 1932, pledged “to get” the Insulls.

Insull fled the country. He chartered a Greek freighter to cruise the Mediterranean while considering taking up an offer to become minister of power of Rumania or seeking political asylum elsewhere. When he docked in Istanbul, the Turkish authorities arrested him and shipped him back to the United States, where the small, white-haired 74-year-old was transported, under armed guard, to a Chicago courthouse. Formidable prosecutorial talent was arrayed against Insull.

The jury took just five minutes to come to its decision. But the jurors, in order to avoid any suspicion, used various ruses to stall, including ordering in a cake and coffee and holding a birthday party for one of them. Finally, the jurors walked back into the courtroom with their decision. Insull was not guilty.

Despite his acquittal, Insull decided it would be better to live out his life in Paris. He had lost virtually all his money; even the ownership of his shirt studs became the subject of a lawsuit. In order to save money, Insull made his way around the city on the Paris Metro. In 1938 he collapsed from a heart attack at the Place de la Concorde station. There he died clutching a subway ticket in his right hand. The press pounced on the fact that the great capitalist, the architect of the modern electric power industry, had died so poor that he was found virtually penniless, with just a few centimes in his pocket. He had little in personal effects to leave behind. His legacy was the business model for electric power.21



THE NEW DEAL: COMPLETING THE ELECTRIFICATION OF AMERICA

The hostility toward Insull and the holding-company structure was enormous. It was widely believed that speculators and bankers had used the holding-company system to gouge customers, loot the utilities, and make inordinate and unconscionable profits. The Federal Trade Commission left no doubt of its view of the system epitomized by Insull with the following words—“fraud, deceit, misrepresentation, dishonesty, breach of trust and oppression.”22

But Insull’s vision had also made electricity available to millions of Americans. “The decades of complex system-building were easier to ignore or forget,” wrote one scholar of the electric power industry. “They involved difficult concepts, esoteric technology, uncommon economics, and sophisticated management.” Insull’s empire, and the business model he developed, brought the U.S. public affordable reliable electric power in a remarkably short period of time.

A top New Deal priority was to eliminate the holding-company system pioneered by Insull and by which most of the U.S. power industry operated. The utilities and their supporters fought back in the most contentious and bitter domestic political battle of the entire New Deal. “I am against private socialism of concentrated power as thoroughly as I am against governmental socialism,” Roosevelt declared. And in the end the New Deal did prevail with the historic Public Utility Holding Company Act of 1935, which defined the new legal structure of the electric power industry. Designed to get the Insulls out of electric power, it dealt what was triumphantly called “the death sentence” to the kind of complex holding-company network that Insull had masterminded. Holding companies were effectively permitted only for utilities that were geographically adjacent and in some way physically integrated.23

But when it came to electricity, the nation was divided in two. City dwellers had easy access to power, provided either by investor-owned utilities or municipally owned ones. But rural dwellers had almost no access. Investor-owned utilities were not stringing lines out into the countryside because, they said, the costs were too high and the load density too low.

This left farmers stuck in the nineteenth century with endless hours of backbreaking labor. Cows had to be milked by hand. There were no refrigerators to keep food fresh long enough to get it to market. It was even worse for the farmer’s wife. Hours had to be spent tending the hot stove; more hours beating the laundry clean outside. By one estimate, it took 63 eight-hour days a year per farm to pump and haul water back to the house. Half of all farm families did their laundry and bathed their children outside. All because there was no electricity.24

This changed with the New Deal, beginning with a federally owned dam at Muscle Shoals, in Alabama, which had been built to provide power for manufacturing explosives during the First World War. After a bruising political battle, it became the starting point for the government-owned Tennessee Valley Authority, with another 20 or so dams to be built as part of the system.

In 1936 Roosevelt signed legislation creating the Rural Electrification Administration. It provided loans to rural cooperatives, which built transmission and distribution lines to isolated farms across America that, until then, had had to depend upon kerosene lamps for their light and exhausting labor for their power. Some of the co-ops also went into electricity generation.

Other legislation established marketing authorities that gave preference to rural cooperatives and municipals for the power generated by the big new federal dams, like Bonneville and Grand Coulee in the northwest, and the Hoover Dam on the Colorado. The REA and the cooperatives that worked with it transformed the life of rural America.



“LIVE BETTER ELECTRICALLY”

The 1950s and 1960s were the years in which America really became an electrified society. With the end of World War II, millions of U.S. soldiers returned home. Rising marriage and birth rates, combined with the G.I. Bill that made it easier for veterans to purchase homes, led to a surge in demand for new houses. A great suburban house-building movement rolled out from the cities, with more than 13 million new homes built in the United States between 1945 and 1954—and with electricity playing an increasingly important role in the American home and American life. During the postwar years of the 1950s, U.S. electricity demand grew at an astounding annual rate of 10 percent (compared with about 1 percent in recent years) as more and more uses were found for electricity in homes, offices, and factories.25

Nothing so much captured the build-out of electricity in the postwar era as General Electric’s “Live Better Electrically” campaign, launched in the mid-1950s and supported by 300 utilities. But such a campaign needed a spokesman, indeed a national champion. It turned toward Hollywood.

In the early 1950s, Ronald Reagan’s movie career was not going all that well. Yes, he was a well-known screen actor, but not quite a top leading man. As president of the Screen Actors Guild, the actors union, he had certainly honed his political skills behind the scenes, but that had done nothing to advance his presence on the silver screen. He and his wife Nancy had a baby at home, but no scripts or paychecks were coming into the house. Finally, his agent landed him a job at the Last Frontier Hotel in Las Vegas, doing stand-up comedy and opening for a singing group called the Continentals. Reagan protested that he neither sang nor danced, but the money was good, and the two-week show sold out. Still, he found the work boring, and he and Nancy had no interest in the gaming tables. This was not why he had become an actor.

Then his agent called with a more interesting offer: to host a proposed television series called GE Theater and become the roving ambassador for General Electric. The pay was very good—$125,000 a year ($1 million in today’s money). He took it. Over the next eight years he spent a great deal of time on the road—the equivalent of two years—visiting 135 GE plants around the country, giving speeches, and meeting 250,000 GE workers. The time away from home was lengthened by his contract, which permitted him to avoid airplanes and crisscross the country only by train and car because of his fear of flying. (As he wrote to a friend in 1955, “I am one of those prehistoric people who won’t fly.”) In the course of those years on the road for GE, he developed “the speech”—the thematic amalgam of patriotism, American values, criticism of big government and regulation, and anecdotes and affable good humor—that would launch him into the governorship of California and then to the presidency. But that was all in the future. In the meantime, GE Theater, with Ronald Reagan at the helm, became one of the top-rated shows on Sunday night.26

General Electric also turned the Reagan home in the Pacific Palisades section of Los Angeles into a stunning showcase for the all-electric home—“the most electric house in the country,” Reagan called it. “We found ourselves with more refrigerators, ovens and fancy lights than we could use,” Nancy Reagan said. GE kept finding new appliances to deliver—a color television, a refrigerated wine cellar, and an amazing new innovation, an electric garbage disposal. So great was the extra electric load that it had to be accommodated with additional wiring and a three-thousand-pound steel cabinet on the side of the house. Reagan would joke that they had a direct electric line to Hoover Dam.27

And so, long before Ronald Reagan became the fortieth president of the United States and a global proponent of freedom and free markets, he had already become a fervent advocate for the “all-electric home.” In a series of television commercials, he and Nancy invited viewers into their all-electric home, where they extolled many of their GE appliances, ranging from a toaster oven to a vacuum cleaner to a waffle iron to a portable television that they proudly carried onto their patio and out by the pool.

“My electric servants do everything,” said Nancy Reagan, as her husband savored the coffee from an electric coffee maker.

“That’s the point of living better electrically,” replied a beaming Reagan.

After giving their young daughter, Patti, a tour around the house, and letting her identify all their household appliances, Nancy Reagan said, “It makes quite a difference in how we live.”

To those who had lived through the deprivation of the Depression in America’s cities and on its farms, the electric home and those “electric servants” truly did mean a veritable revolution in the quality and ease of domestic life. With what was already that characteristically affable shake of his head, Reagan summed it up, “You really begin to live when you live better . . .” And then his daughter jumped in to enthusiastically add, “Electrically!”28



Here it was—the American Dream and what would become a dream around the world—all electric. Or, at least, increasingly electric. Living better electrically was reflected in the rapid growth in the nation’s consumption of electricity. But how to generate the electricity to meet the nation’s growing demands for power?


18

THE NUCLEAR CYCLE

It was an odd location for a president-elect to be briefed on the most dire threat facing the world. But the small office belonging to the club manager was the only place readily available at the Augusta National Golf Club in Georgia, where Dwight Eisenhower was on a golfing vacation after his electoral victory in 1952.

What Eisenhower learned that morning was very sobering. The subject was the growing risk of nuclear war.

Seven years earlier, two atomic bombs detonated over the Japanese cities of Hiroshima and Nagasaki had brought the Second World War to a sudden conclusion. In the immediate postwar years, the United States, with its ally Britain, held what seemed to be an atomic monopoly. But then in 1949, in what was a stunning shock, the Soviet Union, abetted by a network of spy rings, tested its first atomic bomb well ahead of what was anticipated.1

That November morning in 1952, Eisenhower began by asking the briefer, a senior official from the Atomic Energy Commission, about the pluses and minuses of combining in a single facility the generation of civilian nuclear electricity with the production of weapons-grade fuel. Then, getting down to the immediate business at hand, the briefer pulled the top-secret documents from an oversize envelope. The topic, on which the new president needed to be urgently informed, was the state of the nuclear arsenal and the fearsome rate at which destructive power was growing.


A little more than a week earlier, the United States had tested “Mike”—“the first full-scale thermonuclear device,” said one of the documents—the prototype of a far more powerful hydrogen bomb, 150 times more powerful than the atomic bomb. The Pacific island on which “Mike” had been tested was now, in the stark words of the document, “missing,” replaced by an underwater crater almost a mile in diameter. Eisenhower instantly absorbed the significance. There was now, he said, “enough destructive power to destroy everything.” He worried about the dangerous temptation to think that such weapons “could be used like other weapons.”

After the meeting, the first thing that the briefer did, even before getting back on the plane, was to burn the secret documents.2



The dangers of nuclear conflict would deeply preoccupy Eisenhower throughout his presidency. He had been Supreme Commander in Europe during World War II, and he knew that the U.S. nuclear arsenal was already several times more destructive than all the munitions exploded during the war. The Russians were headed down the same path.

Was there not some way to temper the arms race and move the “atom” onto a more peaceful path? The death of Joseph Stalin in March 1953 held out that prospect, possibly. But then in August 1953, a Soviet weapon test—nicknamed “Joe 4”—set off new alarms, since it seemed to indicate that the Soviet Union was also far along in developing a hydrogen bomb. There was much discussion in the U.S. government about how to slow down the arms race, including a set of proposals code-named “Project Wheaties” and the seemingly endless redrafting of a major presidential address for the United Nations on the nuclear danger. “We don’t want to scare the country to death,” Eisenhower instructed his speechwriter. But he was determined to take the initiative. “The world is racing towards catastrophe,” he wrote in his diary. “Something must be done to put a brake on this movement.” At the same time, as the Atomic Energy Commission put it in a memo to the president, achieving “economically competitive nuclear power” was “a goal of national importance.”

In his address at the United Nations, delivered in December 1953, Eisenhower tried to sketch out that different path. It might or might not work, but it had to be tried. “Atoms for Peace” is what Eisenhower called it. He summarized the buildup of the nuclear arsenals. But he also called for U.S.-Soviet cooperation to modulate the nuclear arms race and to commit to the development of the peaceful atom for people around the world. That meant, primarily, the generation of electricity with nuclear power. “Peaceful power from atomic energy is no dream of the future,” he promised.3



The way nuclear energy was developed after World War II still shapes its role—present and potential—in the twenty-first century. That begins with the designs themselves. At the heart of all of the reactor designs is a core where radioactive material generates a controlled chain reaction, releasing a great amount of energy and heat. Where the designs differ is in the coolant that flows around the core, keeping it from getting too hot while at the same time becoming hot enough itself to produce steam, which in turn drives a turbine and produces electricity. For its coolant, Canada’s CANDU reactor used heavy water, a rare, naturally occurring variant of ordinary water. A British design used gas rather than water as the coolant.

But the most common type of reactor, developed in the United States, uses light water—which is another term for normal water—for the coolant. As the water circles the core, it is heated to such a level as to produce, either directly or indirectly, the steam to drive a turbine. The light-water reactor is the basis for about 90 percent of the 440 or so nuclear reactors currently operational in the world, and virtually all those presently planned.

Whatever the coolant, it is typical to speak of the nuclear-fuel cycle. For the light-water reactor, the cycle begins with the mining of uranium and then moves to enrichment to increase the concentration of the isotope U-235 to a level that will be able to sustain a controlled chain reaction. This more-concentrated fuel is then fabricated into fuel rods that will be inserted into the reactor. The cycle continues through the use of the fuel in the reactor all the way through to the disposition of the spent fuel in some form of storage or possible reuse.
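
A standard mass-balance relation gives a sense of the quantities involved at the enrichment step; the assay figures below are typical illustrative values, not numbers drawn from this account. Feed F at concentration x_f is split into product P at x_p and depleted tails T at x_t, with F = P + T:

% Standard enrichment mass balance, with illustrative assays.
\[
\frac{F}{P} = \frac{x_p - x_t}{x_f - x_t}
\]
% With x_f = 0.711% (natural uranium), x_p = 4% (reactor grade),
% and x_t = 0.25% (tails assay):
%   F/P = (4.00 - 0.25) / (0.711 - 0.25) ≈ 8.1,
% i.e., roughly eight kilograms of natural uranium per kilogram of reactor fuel.

The same relation, run with a product assay of 80 or 90 percent rather than 4, underlies the proliferation concern taken up later in this chapter: the equipment is the same; only the stopping point differs.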

The origins of the light-water reactor go back to the way in which the U.S. Navy, after World War II, set out to harness the atom to power its submarine fleet. It owes its predominance to the single-minded drive of one person, an intensely focused engineer, Admiral Hyman Rickover. “Widely considered to be the greatest engineer of all time” is how President Jimmy Carter described him. Rickover, who achieved the virtually unheard-of feat of spending 63 years on active duty, was not only, as he is remembered today, the father of the nuclear navy; he is also, to a very considerable degree, the father of today’s nuclear power industry.4



THE ADMIRAL

“Everything in my life has been sort of a coincidence,” Rickover once said. Hyman Rickover was born Chaim Rickover in a Jewish shtetl in czarist-ruled Poland, most of whose inhabitants would eventually perish in the Holocaust. At age six Rickover immigrated to the United States with his mother and sister. His father, a tailor, who had gone ahead to New York, did not know they had arrived. His mother, tricked out of her money on the ship over and now penniless, was being held in detention with her children. Just before they were to be deported back to Poland, his father learned by chance that they were stuck in immigration and eventually stumbled on them on Ellis Island. The Rickovers settled in Chicago. The family was so poor that the boy had to take his first job, at age nine, holding a lantern in a machine shop. While in high school, Rickover worked the night shift, from 3:00 to 11:00, at the Western Union telegraph agency. A picture from the 1916 Republican convention in Chicago shows him standing stiffly at attention in his Western Union uniform as he would later stand in his naval uniform. Through a lucky fluke, he won a nomination to the Naval Academy at Annapolis.5

Anxious, fearful of failure, and certainly no athlete—and subject to extra hazing because he was Jewish—Rickover spent every moment he could at the academy studying. He was, as he later put it, “trying to get by, stay alive.” At night when the library closed, he even crammed himself into an unused shower stall to get in extra time with his books. Rickover may not have been the most popular midshipman in his class, but he graduated with distinction. However, as a result of a naval disarmament treaty, it looked as though there would be few career berths in the navy for the Annapolis graduates, including Rickover. Deeply disappointed, he secured an entry-level engineering job at Chicago’s Commonwealth Edison, the linchpin of Samuel Insull’s empire. But then, a naval posting became available. Rickover subsequently served on two submarines—one, the S-48, of such faulty, sooty, dangerous and repellent engineering as to sear into Rickover’s soul a fanaticism about the absolute importance of high engineering standards. This conviction would infuse everything he did thereafter.6

During World War II, Rickover headed the Electrical Section in the Bureau of Ships. There he honed his zealotry for excellence and an obsession with precision. “An organizer & leader of outstanding ability,” said his final fitness report, and “one of the country’s foremost engineers.” What this report did not include was his driving, domineering, irascible, abrasive, sometimes hypersensitive, extremely confident personality. This was the flip side of his single-minded focus on mission and extraordinarily demanding nature. This combination of qualities would make some forever loyal to him and others, bitter enemies—later including much of the senior Navy brass. But, he would say, “my job was not to work within the system. My job was to get things done and make this country strong.”

“I have the charisma of a chipmunk,” Rickover, late in life, told newscaster Diane Sawyer. He added, “I never have thought I was smart. I thought the people I dealt with . . . were dumb, including you.” Sawyer quickly replied, “To be called dumb by you is to be in very good company.”7

Rickover had a distinctive gift that made him, in the eyes of many, the best engineer in the Navy. “I believe I have a unique characteristic—I can visualize machines operating right in my mind,” he once explained. “I do not think there has been anyone in the U.S. Navy who has had as much engineering experience as I have had.”8



THE NUCLEAR NAVY

After World War II, despite the dislike that many had for him, Rickover’s name was added at the last minute to the roster of naval officers dispatched to the secret atomic research city at Oak Ridge, Tennessee. Their mission was to learn about the mysteries of nuclear energy and what role it might have if harnessed in peaceful power generation.

Rickover quickly recognized the strategic potential of a nuclear navy and thereafter committed himself to realizing it. In particular, he understood that nuclear submarines could offer a range and capability that far exceeded those of the diesel-fueled submarines of World War II. Nuclear power thus offered an extraordinary solution to an intractable problem that bedeviled contemporary submarines—the constraints of conventional batteries, which limited the amount of time that submarines could spend at full speed underwater. By contrast, it was thought, nuclear subs should be able to cruise underwater at full speed for hours, days, or even months.

Rickover was given double duty; he was put in charge of the nuclear propulsion programs for both the navy and the new Atomic Energy Commission. This double posting helped him to overcome the formidable engineering and bureaucratic obstacles to realizing the nuclear submarine. It was said that he would write letters to himself and then answer them, ensuring instant sign-off from both the navy and the AEC. The urgency of the program increased in 1949 with the first Soviet atomic bomb test.

It was one thing to build an atomic bomb. It was quite another to harness a controlled chain reaction of fission to generate power. So much had to be invented and developed from scratch—the technology, the engineering, the know-how. It was Rickover who chose the pressurized light-water reactor as the propulsion system. He also imposed “an engineering and technical discipline unknown to industry or, except for his own organization, to government.”9

To accomplish his goals, Rickover built a cadre of highly skilled and highly trained officers for the nuclear navy, who were constantly pushed to operate at peak standards of performance. If that meant being a taskmaster and a martinet, Rickover would be a taskmaster and a martinet. Even a minor oversight or deviation from Rickover’s very high standards would likely mean that an officer would be “denuked”—ejected from the nuclear service.

When interviewing candidates for the nuclear navy, Rickover would, in order to throw them off and test them, seat them in chairs with shortened front legs and at the same time position them so that the sunlight streamed through specially adjusted venetian blinds straight into their eyes. That way “they had to maintain their wits,” he explained, “while they were sliding off the chair.”10

Once, when a young submarine officer was applying to the nuclear navy, he proudly told Rickover that he had come in 59th in his class of 820 at the Naval Academy. Rickover acidly asked him if he had done his best. After a moment’s hesitation, the taken-aback officer, named James Earl Carter, admitted that he had not.

“Why not?” Rickover asked.

That question—Why Not the Best?—became the title of his campaign autobiography when, as Jimmy Carter, he ran for the presidency decades later.11

In Rickover’s tireless campaign to build a nuclear submarine and bulldoze through bureaucracy, he so alienated his superiors that he was twice passed over for promotion to admiral. It took congressional intervention to finally secure him the title.


Rickover’s methods worked. The development of the technology, the engineering, and construction for a nuclear submarine—all these were achieved in record time. The first nuclear submarine, the USS Nautilus, was commissioned in 1954. The whole enterprise had been achieved in seven years—compared with the quarter century that others had predicted. In 1958, to great acclaim, the Nautilus accomplished a formidable, indeed unthinkable, feat—it sailed 1,400 miles under the North Pole and the polar ice cap. The journey was nonstop except for those times when the ship got temporarily stuck between the massive ice cap and the shallow sea bottom. When, on the ship’s return, the Nautilus’s captain was received at the White House, the abrasive Rickover, who was ultimately responsible for the very existence of the Nautilus, was pointedly excluded from the ceremony.

At a separate meeting, the ship’s captain presented Admiral Rickover with a piece of polar ice, carefully preserved in the ship’s freezer. It was one of the rare times that those who reported to him ever saw the frosty admiral smile. By the time Rickover finally retired in 1986, 40 percent of the navy’s major combatant ships would be nuclear propelled.12



THE REACTOR AT OBNINSK

The Nautilus was the first controlled application of nuclear power for vehicle propulsion. However, in the summer of 1954, Soviet radio announced another “first” for “Soviet science”: the first civilian reactor anywhere in the world had gone into operation in the science city of Obninsk, south of Moscow. The Soviet Union, declared the Soviet news agency TASS, had “leaped ahead of Britain and the United States in the development of atomic energy.”

But the actual reactor at Obninsk was tiny, capable of supplying power only to some local collective farms and factories and a few thousand residents. It was also a forerunner of a particular type of Soviet reactor called the RBMK, which would achieve unfortunate notoriety some decades later.13



“TOO CHEAP TO METER”

Even before the launch of the Nautilus, the development of a civilian nuclear reactor was beginning. It too was under the firm control of Admiral Rickover. The civilian reactors were based upon the navy’s designs. The design is often attributed to the submarine reactors, but there was an intermediate step. Work had already begun on developing a reactor for aircraft carriers when the Eisenhower administration decided that the program would be too expensive. It concluded that the quickest way to get to nuclear power would be to strip the carrier propulsion project of its distinctive naval features and make it the basis for a civilian reactor.

The reaction to the Atomic Energy Commission’s announcement of the civilian program was enthusiastic. Time magazine called it a “new phase” of the atomic age; the New York Times went even further, announcing the coming age of atomic power. The optimism of the times was captured in 1954 when the head of the Atomic Energy Commission, Lewis Strauss, made what would become a famous prophecy: that nuclear power would, within 15 years, deliver “electrical energy too cheap to meter.”14

The first U.S. nuclear plant was built at Shippingport, Pennsylvania. It went into operation in 1957, just three years after the launch of the Nautilus. The British had actually beaten it by a year, with the world’s first commercial production of nuclear power at Calder Hall, which Queen Elizabeth dedicated in 1956. But Calder Hall was a small power plant (built with a design now considered obsolete).

Shippingport, by contrast, ranks as “the world’s first full-scale atomic power station.” The design and construction of the power plant were directed by none other than Admiral Hyman Rickover, who retained operational oversight for the next twenty-five years. Though the reactor had been scaled up from the one designed for an atomic-powered aircraft carrier, it had also been fundamentally rethought and redesigned to produce electric power. It performed far above its rated design and operated virtually fault free. This was a credit to Rickover, with his determined exactitude, and to the team he assembled.15

The real commercial turning point for nuclear power came in 1963, when a New Jersey utility ordered a commercial plant to be built at Oyster Creek. That reactor was also based upon the design developed under Rickover.



THE GREAT NUCLEAR BANDWAGON

Over the next few years, about 50 nuclear power plants were ordered, as utilities clambered all over each other to jump onto what was becoming known as the “great bandwagon market.” It was Thomas Edison versus George Westinghouse all over again, with General Electric and Westinghouse battling for market share with their respective versions of light-water reactors. Westinghouse championed the PWR, the pressurized-water reactor; and GE, the BWR, the boiling water reactor. Atomic energy, some projected, could provide almost half of total U.S. electricity by the first decade of the twenty-first century. One leading scientist declared, “Nuclear reactors now appear to be the cheapest of all sources of energy” with the promise of “the permanent and ubiquitous availability of cheap power.”16

But nuclear power, it turned out, was not cheap at all. Costs went up—way up. The reasons were many and interconnected. There was not enough standardization in plants and designs. Many utilities did not have the heft and experience to take on projects that were much bigger than they had anticipated and more complex and difficult to manage. The vendors were promising more than they could deliver in a time frame that they could not meet. And there was insufficient operating experience.

At the same time, the question of “how safe is safe enough?” emerged as a burning issue. What were the risks of an accident and radiation exposure? At both the federal and state levels, licensing and permitting took much longer than expected. Growing environmental and specifically antinuclear movements prompted constant regulatory delays, reviews, and changes. Concrete walls that had already been poured had to be rebuilt and thickened; piping had to be taken out and reworked. Plants had to be redesigned and then redesigned again and again during construction, meaning that costs went up and then went up again, far exceeding the original budgets.

The plants also became more expensive because of the general inflationary pressures of the era, and then high interest rates. Instead of six years, plants were taking ten years to build, further driving up financing costs. Plants that were supposed to cost $200 million ended up costing $2 billion. Some cost much more. “The evolution in the costs,” said an economist from the Atomic Energy Commission, with some understatement, could be “classified as a traumatic, rather than a successful, experience.”17



“THE BUDDHA IS SMILING”: PROLIFERATION

Another concern was emerging as well—about the risks of nuclear proliferation and the diversion of nuclear materials and know-how. Members of what was becoming known as the arms-control community, focusing on proliferation, added their voices to those of the antinuclear activists.

For a number of years, there was confidence that the nuclear weapons “club” was stable and highly exclusive, limited to just five members—the United States, the Soviet Union, Britain, France, and China. The doctrine of mutually assured destruction—known as MAD—offered the stability of deterrence between the United States and the Soviet Union. But then, in May of 1974, the Indian foreign minister received a cryptic phone message: “The Buddha is smiling.” He knew what that code meant; India had just exploded a “peaceful nuclear device” in the Rajasthan Desert, 100 miles from the border with Pakistan. The nuclear monopoly of the five powers had been broken, and the prospect for further proliferation was now very real.18

It was now eminently clear that a strong link—if that link was sought—existed between “peaceful nuclear power” and a nuclear weapon. There was only one atom; and the same nuclear plant that produced electricity could also produce plutonium in its spent fuel, which could be used as a weapons fuel. That was the way the Indians had done it. Moreover, an enrichment facility that turned out nuclear fuel with the 3 percent to 5 percent concentration required for a reactor could keep enriching the uranium over and over until it reached an 80 percent or 90 percent concentration of U-235. That was weapons-grade uranium, and out of that could be made an atomic bomb.

Influential scientists and members of the foreign-policy community in the United States and other countries began to question the promotion of nuclear power—not on grounds of safety, but because of the risks of proliferation. During World War II, Harvard chemistry professor George Kistiakowsky, known as “Kisty,” had been one of the chief designers of the atomic bomb at the secret Los Alamos laboratory. Later he was the White House science adviser to President Eisenhower. But now, in 1977, troubled by second thoughts, he said, “We must hold back on great expansion of nuclear power until the world gets better. It’s just too damn risky right now.”19



THREE MILE ISLAND

Whatever their bitter differences, on one thing proponents and opponents of nuclear power could absolutely agree: The core of an operating reactor had to be kept “constantly supplied with copious amounts of coolant to dissipate the heat produced by fission.” Otherwise, something terrible could happen.

And that nightmare scenario suddenly seemed about to become a reality—in the predawn hours of March 28, 1979, in Unit 2 at the Three Mile Island nuclear power plant, on the Susquehanna River, near Harrisburg, Pennsylvania. The chain reaction of events started at 4:00 a.m. with a shutdown of the feedwater pumps that were meant to keep the reactor core cool. Initially the problems were dismissed as a “normal aberration.” Then a whole series of further malfunctions and operator errors ensued, one piling on top of the next. At one point, the instrumentation misled the operators into thinking that there was too much water in the cooling system, instead of too little. They turned off the emergency cooling system and shut down the pumps that were circulating water, which eliminated their ability to remove heat from the reactor core. All this generated a sequence of events that melted part of the reactor’s core, forced a complete shutdown of the plant, and led to a minor release of radioactive steam. It also ignited fears of a major radioactive leak and a total meltdown.20

The result was immediate panic. “Nuclear Nightmare” was the cover of Time magazine. The New York Post headlined “Nuclear Leak Goes Out of Control.” Thousands of people fled their homes; residents over a wide area were instructed to keep their windows tightly shut and turn off air conditioners to prevent intake of contaminated air. Almost a million people were told to prepare for immediate evacuation.

A few days after the accident, Jimmy Carter, the nuclear engineer–turned–president, arrived by helicopter at Three Mile Island. He viewed the crippled reactor from a school bus and then, along with his wife, Rosalynn, toured the plant’s control room with his shoes garbed in yellow plastic booties. The president promised to “be personally responsible for informing the American people” about the accident. Fears were further stoked by the coincidental release of a motion picture, The China Syndrome, about a nuclear meltdown. The film and its message became a national sensation, helping to feed the panic.21



THE AFTERMATH

The accident at Three Mile Island riveted the world. It also led to an overhaul of safety management, including much greater focus on human factors and preventing operator errors. Who better to provide understanding of what had gone wrong and what needed to be done than Admiral Hyman Rickover? Jimmy Carter asked his old boss to help him with the investigation.

Rickover wrote a lengthy private letter to the president “to put the issue in perspective as I see it based on my own experience.” In a letter of lasting value for its insight into disasters, Rickover wrote:

Investigations of catastrophic accidents involving man-made devices often show that:

1. The accident resulted from a series of relatively minor equipment malfunctions followed by operator errors.

2. Timely recognition and prompt corrections . . . could have prevented the accident from becoming significant.

3. Similar equipment malfunctions and operator errors had occurred on prior occasions, but did not lead to accidents because the starting conditions, or sequence of events, were slightly different. If the earlier incidents had been heeded, and prompt corrective actions taken, the subsequent catastrophic accident would have been avoided.

4. To reduce the probability of a repetition of similar or worse catastrophic accidents, adequate technical standards must be established and enforced, and increased training of operators must be provided.

This pattern has been characteristic of broken dams, aircraft crashes, ship sinkings, explosions, industrial fires etc.

“As was predictable,” the admiral said, the investigation into Three Mile Island “revealed the same pattern.” Rickover went on to identify many problems, from lack of training and discipline in operations to lack of standardization. “For example, it makes no sense that the control room for Unit 1 at Three Mile Island is designed much differently than the control room for Unit 2, even though both reactor plants were designed by the same manufacturer.”

Rickover did warn the president against relying upon a “ ‘cops and robbers’ syndrome” between government regulators and the nuclear power industry. Government regulators would never be sufficient and could not adequately do the job. Instead the admiral advocated that the utilities come together to create a central organization that could provide “a more coordinated and expert technical input and control for the commercial nuclear power program than is presently possible for each utility with its limited staff”—a position that he had advocated for years.22

Shortly after, the nuclear power industry founded the Institute of Nuclear Power Operations to serve exactly that purpose. The institute became the industry’s own watchdog, and a very tough one, with the utilities stringently evaluating one another. The companies all understood that the viability of nuclear power in the United States was at stake and that they were all in it together. The industry could not withstand another accident. It would operate at Rickover standards.

The accident at Three Mile Island brought the great nuclear bandwagon to a screeching halt. Orders for more than 100 new reactors in the United States were eventually canceled. The last nuclear power reactor to go into operation in the United States was one that had been ordered in 1976.

The next several years proved to be a time of agony for the U.S. power industry. A few utilities went bankrupt. Others came very close. Construction was halted on plants that were as much as 90 percent completed. The Shoreham plant on Long Island was actually fully completed and underwent low-level testing. But in the face of local opposition, after producing only a small amount of power, it was shut down forever. Eventually the $6 billion plant was sold off for a grand total of one dollar to the Long Island Power Authority.

Still, over 100 nuclear power reactors did end up operating in the United States, although often at far higher cost than originally expected and with construction extended over much longer time spans than planned. They became part of the base load of the nation’s power supply. But they were not operating anywhere near their full capacities. Improving operations became the top priority for the industry. To do so it drew on the most obvious pool of talent—the alumni from Admiral Rickover’s nuclear navy. The mission of the retired naval officers was to make the fleet of existing nuclear power plants work better, at Rickover standards.

Still, what was remarkable was how fast the nuclear power industry had developed and how large it had grown. The design and building program had commenced only in the early 1960s. Yet within little more than two decades, nuclear power was supplying about 20 percent of U.S. electricity, and that remained the case even after the brakes were slammed on.



FRANCE’S TRANSFORMATION

Nuclear development was also stymied in other countries. Popular opposition to nuclear power had emerged in Europe prior to Three Mile Island. Austria completed a nuclear power plant at Zwentendorf, 20 miles from Vienna. But it was never turned on and it has sat idle ever since. In many other countries, political stalemate and indecision were also slowing ambitious programs.

One country that went resolutely ahead was France. In the immediate aftermath of the 1973 embargo, Jean Blancard, the senior energy official in the government, made the case to President Georges Pompidou that France had to decisively move away from oil—especially oil in electrical generation. The nation’s electricity supply could not depend on oil, which could be cut off. “The period from here on will be quite different—a transformation, not a crisis,” Blancard said to the president. “It is not reasonable,” he continued, for France to be “dependent” on decisions from the Middle East. “We must pursue a policy of diversification.” Pompidou was more than receptive to Blancard’s argument. Though seriously ill with cancer and swollen from the effects of treatment, he convened his senior advisers and confirmed nuclear power as the way to eliminate oil from French electricity and restore autonomy to the nation’s energy position. Nuclear power, rather than oil, would increasingly be the basis of France’s energy supply, complemented by a return to coal and a new emphasis on energy efficiency.

Yet, to the government’s consternation, the nuclear program immediately ignited determined opposition across the country. Four hundred scientists signed a proclamation demanding that the government postpone the installation of new plants until all safety questions could be answered.23

Despite the protests, and large demonstrations around the country, France’s centralized political system, bolstered by the prestigious engineering culture in the upper reaches of French government, locked in the commitment. Even the election in 1981 of the Socialist François Mitterrand as president did not alter the commitment to nuclear power. Labor unions and the communists, who were part of his coalition, were already on board, as they saw nuclear as a promoter of jobs and energy security. The fact that the state company, Électricité de France, operated the entire power industry also greatly helped. “People trusted EDF,” said Philippe de Ladoucette, chairman of France’s Commission for the Regulation of Energy. “It was seen as the ultimate French champion.” France continued to build dozens of reactors over the decades. One striking result of this continuing commitment was to propel France into the vanguard of the global nuclear supply industry.24



“BLACK STALKS”

The other European country that continued to move ahead on nuclear power was the Soviet Union. In 1963–64 the first standard-size civilian reactors in the Soviet Union were commissioned. By the middle of the 1980s, 25 reactors were operating in the Soviet Union.

One type of Soviet civilian reactor was so similar to Westinghouse’s pressurized light-water reactor that it was dubbed the “Eastinghouse.” Another design was the RBMK, a prototype of which was that first tiny reactor in the scientific city of Obninsk. The RBMK was based on a reactor developed for manufacturing weapons-grade nuclear fuel. As it was being adapted for civilian use, some Soviet scientists warned that it was not safe and argued strongly against deploying it for civilian nuclear power. But the political authorities overruled the scientists. It was much cheaper to build, and it became a mainstay of Soviet nuclear power.

Four such RBMK reactors were built at the little village of Pripyat, about 65 miles north of Kiev, then the capital of the Soviet republic of Ukraine. But the plant became better known by the name of the nearby town, Chernobyl, which in Ukrainian means “black stalks,” for a long grass that was common to the region.

In the early morning hours of April 26, 1986, operators were carrying out a poorly designed experiment aimed, ironically, at enhancing the safety of the plant. Through a series of mistakes, they lost control. The first of two explosions blew the top off the reactor, followed by a fire. These reactors did not have the kind of containment vessels that were standard in the West to prevent a catastrophe. Radioactive clouds were released and carried by the winds across vast stretches of the European continent. The first indications that something had gone seriously wrong were heightened radioactivity readings on sensors in Sweden. The word spread quickly, including back into the Soviet Union. Terrified crowds packed the railway station in Kiev, trying to squeeze onto overcrowded trains and flee the region. Fear and panic spread throughout the Soviet Union. In the absence of any news or information, the rumors grew more and more sensational.

But for more than two weeks the Soviet leadership and media denied that anything serious had happened—it was all the creation of the Western press. One senior Soviet energy official, meeting Westerners in Moscow, pounded his fist down on the table and insisted that any notion of a nuclear accident was a total fabrication by the Western newspapers.

Then, on May 14, 1986, Soviet leader Mikhail Gorbachev went on television and in sober, somber tones did something that Soviet leaders never did: gravely reported what had actually happened. While attempting to dispel some of the sensationalism surrounding the event in the Western media, Gorbachev talked about the now-evident perils of what he called “the sinister power of uncontrolled nuclear energy.”25

This was a historic turning point. Within the Soviet Union, this accident—which according to dogma could never happen—was a major political and social shock that contributed to shattering confidence in the communist system and the myths that helped to hold it together.



THE EXCEPTIONS

Across Western Europe, Chernobyl’s impact on the energy sector was immense; it fueled and solidified the opposition to nuclear power. Italy pledged no new nuclear power plants and eventually shut down its capacity. Sweden and Germany introduced moratoria on nuclear power and aimed at a phaseout. Britain’s Atomic Energy Authority prepared to devote the rest of its days to the decommissioning of plants. Chernobyl had done in Europe what Three Mile Island had done in the United States: brought the development of new nuclear power to a stop.

In Europe, only France plowed on with its program. “France’s commitment to nuclear energy was never reconsidered, in spite of major accidents,” said Philippe de Ladoucette. “Ever since the end of World War I, energy independence had become a motto.” Bolstering all of this was the fact that so many policymakers came from a technocratic engineering background.26

With its political foundation secured, nuclear would become the indispensable baseload of French power supply. Its 58 reactors supply almost 80 percent of France’s electric power. France is also the largest exporter of electricity in the world: those sales to neighboring countries constitute France’s fourth-largest export.

In Japan, too, nuclear power plants continued to come online—with more than a dozen in the decade following Chernobyl’s meltdown. But Japan’s cultural legacy regarding nuclear power was more complicated. It was the only country to have ever suffered a nuclear attack, and the politics of nuclear power could engender a powerful emotional response from voters and politicians alike. But the oil shocks of the 1970s, which threatened to undermine Japan’s postwar economic miracle, were deeply traumatic. Indeed, so much so that the political will to support the nuclear program remained strong.

“Unlike the United States or the United Kingdom, Japan had no choice but to depend on imports for virtually all of its fossil-fuel supply,” said Masahisa Naitoh, a former senior energy official in Japan. Accordingly, Japan has viewed nuclear energy as “an affordable, stable electricity source and as essential for Japan’s energy security.” Rather than abandon the nuclear plan, Japan strengthened safety regulations and moved ahead. To a large extent opposition was “neutralized.” By the beginning of 2011 Japan’s 54 operating nuclear reactors were delivering 30 percent of Japan’s total power, and the official target was for nuclear power to provide 50 percent of Japan’s electricity by 2030.27 Japan’s commitment seemed unshakable.

But Japan, along with France, was the big exception.



WHAT FUEL FOR THE FUTURE?

In the United States, the shuttering of nuclear development left a big question: If not uranium, what would be the fuel of the future in electric power? Oil was already being driven out of the electric power sector in response to the oil crises of the 1970s. Natural gas was an obvious answer. Except that in 1978, Congress had banned its use in new power plants due to the sharp increase in natural gas prices in the 1970s and the conviction that there was a shortage. Natural gas, it was said, was too valuable to be burned in power plants, but rather should be saved for higher purposes—heating homes. Nuclear power was far from being “too cheap to meter” and was now subject to a de facto moratorium.

That left only one resource: coal, which once again became the mainstay for much of the new capacity. It was domestic, it was abundant, and it provided security and dependability. But for how long? The costs of new capacity would trigger changes in the regulatory bargain that underlay the power industry in the United States—and, once again, in the decisions about fuels. The most dramatic impact would be in California.


19

BREAKING THE BARGAIN

Almost 1.5 million votes: that was the overwhelming margin, the biggest ever recorded in a California gubernatorial election, by which Democrat Gray Davis defeated his Republican opponent in 1998. Because of California’s importance, that triumph automatically started talk of him as a potential future president. Davis was a career Sacramento politician. He had been chief of staff to Governor Jerry Brown in the 1970s and painstakingly climbed his way up the political ladder thereafter. Indeed, so entrenched was Davis in California politics that on his election as governor, an aide joked that in the days since Davis had been chief of staff, it had taken the new governor “23 years to walk 15 feet.”1

After his first 100 days in office, Davis was more popular than his old boss, Jerry Brown, had been at the same point and even more popular than California’s best-known former governor, Ronald Reagan. As for being governor, Davis had a plan: do nothing radical. It certainly made sense. After a deep recession, the state’s economy was surging.

But so, by the way, was its electricity demand. Although the implications were little understood, the impact would soon not only shake California but would be felt throughout the United States and in the rest of the world. It would also starkly dramatize fundamental realities of electric power.



By the 1990s the regulatory bargain that had long been the foundation of the electrical power business in the United States was more than half a century old. Electric power prices were established not in the marketplace but rather by a state’s public utility commission (PUC), in accord with the model originally promoted by Samuel Insull. The commissions did so by allowing utilities to pass on, in their rates to consumers, the cost of service—that is, the cost of everything, including plants, fuel, and operations, plus an additional sum that was the permitted profit. The PUC would then decide how those costs were to be allocated in terms of the prices paid by the different classes of customers—residential, commercial, and industrial.
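
The arithmetic of the bargain can be sketched in a few lines. The figures below are hypothetical round numbers, chosen only to illustrate the cost-of-service formula, not drawn from any actual rate case:

```python
# Cost-of-service ratemaking: recover costs plus a permitted profit.
# All figures are hypothetical round numbers, for illustration only.
operating_costs = 400e6   # fuel, plants, and operations, dollars per year
rate_base = 2_000e6       # invested capital on which the PUC allows a return
allowed_return = 0.10     # the permitted profit rate

revenue_requirement = operating_costs + rate_base * allowed_return  # $600 million

expected_sales_mwh = 10e6  # forecast electricity sales for the year
average_rate = revenue_requirement / expected_sales_mwh
print(f"${average_rate:.0f} per MWh")  # $60 per MWh, which the commission then
# allocates across residential, commercial, and industrial customer classes
```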

On their side of the bargain, the utilities were required to provide reliable service, universally available, at reasonable cost. They would ensure that the lights stayed on. If the power went off because a storm had knocked down the power lines or a blizzard had disrupted the system, the linemen would be out as fast as their trucks could roll, and the utility would scramble to get the power back on. This was all based on the concept of natural monopoly. Competition was definitely not part of the bargain.



RATE SHOCK

But change was coming. For many years electricity prices in the United States had been declining dramatically—between 1934 and 1970, by an astonishing 86 percent. That was testament to the impact of scale, technology, and lower costs that came with higher volumes. But in the 1970s and 1980s, prices abruptly turned up: New power plants—whether nuclear or coal—were proving to be expensive, sometimes very expensive. Costs were also driven up by the 1978 Public Utility Regulatory Policies Act (PURPA). That law had forced utilities to buy power at high “avoided” costs from small-size generators of renewable power—largely wind and small hydro plants.

The avoided cost was a very interesting concept: it was an estimate of how much the same amount of power would cost were it generated by an oil- or gas-fired facility. It was not an actual price, but an expected price sometime in the future. These avoided costs were often pegged at stratospherically high anticipated oil prices. But in the 1980s, oil and gas prices had declined, meaning that PURPA avoided-cost power prices were far above actual market costs. All this meant that consumers, in many parts of the country, were hit by “rate shock”—steep rises in electricity rates, as the costs from new nuclear and coal plants, and from the PURPA machines, were passed on to them in their monthly bills.
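
To see how the gap worked, consider a stylized example. The numbers are invented for illustration; the book gives no specific contract figures:

```python
# The PURPA "avoided cost" was a forecast of what the same power would cost
# from an oil- or gas-fired plant, not an actual market price.
# All numbers below are hypothetical.
avoided_cost = 90.0         # $/MWh, pegged to high anticipated oil prices
market_cost = 35.0          # $/MWh, after oil and gas prices fell in the 1980s
purpa_energy_mwh = 500_000  # annual purchases under a PURPA contract

excess = (avoided_cost - market_cost) * purpa_energy_mwh
print(f"Excess cost passed on to ratepayers: ${excess:,.0f}")
# -> $27,500,000 a year from this one contract: an ingredient of "rate shock"
```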

Residential consumers may have complained about their bills, but there was little they could really do, aside from being more careful in their use of electricity. For industries that used a good deal of electricity, rate shock hit their bottom line and made them less competitive against companies in lower-cost states. They needed to do something to bring down their power prices. Their answer was to promote what was variously called “deregulation” or “restructuring,” which would allow them to find a way to buy cheaper power from someone else rather than more expensive power from their local utility. In a historic shift, that would lead toward electric rates being determined in a marketplace, not by the PUC—that is, toward competition in what had heretofore been assumed to be a natural monopoly. Getting deregulation right, however, would not prove so easy for electric power. Even competitive markets are, after all, not exactly free. They depend, crucially, on the rules by which they operate.

Deregulation was made even more compelling by a shift in the fuel mix for electric power. As new nuclear plants came online they contributed a growing share of power generation—leveling out at 20 percent of supply nationally. But the big growth was in coal. In the fifteen years following the natural gas shortages of the mid-1970s, coal consumption in electric generation literally doubled, and coal was responsible for about 55 percent of all electricity produced in the United States. Coal’s great advantage was that it was abundant and it was a domestic fuel.

But natural gas was now abundant too, and it was also domestic. It was a fuel well suited for the deregulated power business. The gas bubble, the long-lasting surplus of natural gas following its deregulation, made gas cheap. In the face of the changing economics, the prohibition on the use of natural gas in power generation was clearly irrational, and the ban was lifted. At the same time, a new generation of highly efficient combined-cycle gas turbines—based on engines designed for jets, combined with steam turbines that run on “rejected heat”—started to enter the market. Gas plants were much less costly to build than coal and nuclear power plants, they could be constructed more quickly, and natural gas was a cleaner fuel than coal.
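
The efficiency gain of the combined cycle is easy to state as a formula. In the idealized sketch below, a steam turbine recovers a fraction of the gas turbine's rejected heat; the efficiencies and the recovery fraction are representative assumptions, not data for any particular machine:

```python
def combined_cycle_efficiency(eta_gas, eta_steam, heat_recovered=0.9):
    """Overall efficiency when a steam turbine runs on part of the heat
    rejected by the gas turbine (an idealized approximation)."""
    return eta_gas + heat_recovered * (1 - eta_gas) * eta_steam

# With a gas turbine converting ~38% of fuel energy to electricity, adding
# a steam cycle on the exhaust heat lifts the total to roughly 56%.
print(f"{combined_cycle_efficiency(0.38, 0.33):.0%}")
```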

Thus electricity from a new gas-fired power plant was cheaper than that from a nuclear power plant that had been constructed in the 1970s—or, for that matter, a coal plant that was built in the 1980s. But the existing regulatory system did not easily allow buyers to get access to the lower-cost power. At least not yet.



TOWARD MARKET

Thinking about the role of governments and markets was, at that time, undergoing a decisive change around the world. Increased confidence in markets stimulated a movement toward deregulation and privatization. In the United States, financial services were deregulated in the 1970s, after which stockbrokers could offer lower rates to customers if they wanted to. The airline industry was also deregulated, a transformation championed by Senator Edward Kennedy, Senate staffer (and later Supreme Court justice) Stephen Breyer, and the regulatory economist Alfred Kahn. As a result, the federal government stopped regulating everything from the cost of airline tickets to the size of sandwiches that could be served on planes. And, as already observed, price controls on oil as well as natural gas were abandoned in the 1980s. This same shift was even more evident in other countries. State-owned companies in Western Europe were privatized; communism collapsed in the Soviet Union and Eastern Europe; and both China and India opened up to the world economy.2

But what laid out the path for the United States was what happened in the United Kingdom. Of all the privatizations set in motion in Britain by Prime Minister Margaret Thatcher’s market revolution, the biggest was that of the Central Electricity Generating Board (CEGB). The British power industry had been nationalized after World War II to end wasteful fragmentation, modernize the industry, and give virtually everyone access to the benefits of electric power. All of this it had done. It was an engineering-driven organization whose mandate was “to keep the lights on no matter what the cost.” The downside was that, in the process, it was racking up big losses and was in constant turmoil with trade unions.

Beginning in 1990, the British industry was privatized. “Again and again I insisted that whatever structure we created must provide genuine competition,” said Prime Minister Thatcher. The government broke the generating part of the CEGB into three private companies. These generation companies competed both among themselves and against new independent generating companies to sell electricity into the wholesale market. As for the retail side of the market, the government converted “area boards,” which distributed electricity to the customers in a particular part of the country, into independent companies. It then gradually introduced competition among these companies.3

The UK’s approach became the global model of how to bring market competition into electric power. It was a forceful and compelling model—including for the United States. Members of the Federal Energy Regulatory Commission, visiting Britain on a study trip, were much impressed by how the once-monolithic state-owned monopoly had been turned into a competitive business, with prices constantly changing in response to supply and demand. The FERC decided to open up the U.S. industry to competition as fast as possible. “The Brits’ enthusiasm about the early successes of their restructuring definitely emboldened us to embark upon restructuring,” said Elizabeth Moler, the FERC chair at the time. “We learned from both the successes and failures of the U.S. natural gas restructuring and from what the British did.” Other visitors from the U.S. power industry made the same trek to Britain and came back with similar conclusions. This seemed to be the new future for electric power.4



ENTER THE MERCHANT GENERATORS

In the United States, policy at both the federal and state level now began to move toward deregulation. The biggest change was to allow new competitors to get into the generation business and sell their power either to utilities or to end users. And since electricity is an undifferentiated commodity, the new entrants would compete on price. The big idea here was to drive down costs through competition. And in the process, these new entrants were determined to disprove Insull’s dictum that competition was an “unsound economic regulator.”

The federal Energy Policy Act of 1992 specifically allowed these newcomers to sell electricity into interstate transmission lines regulated under federal laws. These were given the name “merchant generators” because they did not own the wires and distribution system but rather would sell to those who did. The merchants might be either independent companies or subsidiaries of utilities in some other part of the country. Either way, they built new power plants or bought existing ones from utilities. These merchants were selling into second-by-second electronic markets. To implement the competitive intent of the 1992 Energy Policy Act, the Federal Energy Regulatory Commission promoted “wheeling.” That allowed local utilities in one part of the country to contract with a cheaper generator in another part and wheel—that is, transport—the less expensive power over wires across the United States.

Both merchant generators and traditional utilities realized that they could become more competitive by fueling the new power plants with cheap natural gas. That set off a mad “dash to gas” across the country. In just six years, between 1998 and 2004, the United States added an enormous amount of new generating capacity—equivalent to a quarter of all the capacity that had been built since Edison’s Pearl Street station in 1882! Over 90 percent of that capacity burned natural gas. Although not recognized at the time, the dash to gas was also a very big bet on cheap natural gas prices. It led to an overbuild that produced much more generating capacity than was needed.

Yet by the end of the 1990s, cheap gas was disappearing. Prices started to rise sharply once again. The wager on cheap natural gas prices proved costly. Many of the independent merchant generators that had made that bet were caught out. Some went bankrupt. Nowhere did the bet on gas go as badly, or as disastrously, as in California.



CALIFORNIA’S STRANGE RESTRUCTURING

A power crisis that erupted in California in 2000 threw the state into disarray, created a vast economic and political firestorm, and shook the entire nation’s electric power system. The brownouts and economic mayhem that rolled over the Golden State would have been expected in a struggling developing nation, but not in the state that was home to Disneyland, and that had given birth to Silicon Valley, the very embodiment of technology and innovation. After all, had it been an independent country, California would have been the seventh-largest economy in the world.

What unfolded in California graphically exposed the dangers of misdesigning a regulatory system. It was also a case study of how short-term politics can overwhelm the needs of sound policy.

According to popular lore, the crisis was manufactured and manipulated by cynical and wily out-of-state power traders, the worst being Enron, the Houston-based natural gas and energy company. Its traders and those of other companies were accused of creating and then exploiting the crisis with a host of complex strategies. Some traders certainly did blatantly, and even illegally, exploit the system and thus accentuated its flaws. Yet that skims over the fundamental cause of the crisis. For, by then, the system was already broken.

The California crisis resulted from three fundamental factors: The first was an unworkable form of partial deregulation that explicitly rejected the normal power-market stabilizers that could have helped avoid or at least blunt the crisis but instead built instability into the new system. The second was a sharp, adverse turn in supply and demand. The third was a political culture that wanted the benefits of increased electric power but without the costs.

This was not the way it was supposed to be. California began moving toward deregulation, or restructuring, as it was more commonly called, in 1994. At the time, the state was in a bad way economically. Unemployment hit 10 percent, real estate was a bust, and more people were moving out of the state than were moving in. Spending for defense, one of the state’s main industries, had been cut back sharply with the end of the Cold War, and Sacramento was running big deficits. High electricity prices were partly blamed for the state’s economic slump. Manufacturing companies were fleeing California, in part because of high energy costs, taking jobs with them. Meanwhile people did not worry much about increases in electricity demand. After all, in 1993 demand hadn’t grown at all.

Competition, it was thought, would bring down the price of power, helping to revive the state’s fortunes. California’s brand of deregulation was fashioned out of a complex negotiation and a great compromise, involving stakeholder democracy, although the stakeholders varied much in terms of their understanding of how power markets worked. Politically, the great compromise worked brilliantly; the deregulation bill sailed through the state legislature in 1996 with not a single dissenting vote and was signed into law by Republican Governor Pete Wilson.5

Under California’s restructuring, consumer advocates got lower prices; big industrial customers would get access to cheaper power. But in a deregulated market traditional utilities would be stuck with legacy costs of their contracts for PURPA power and the cost overruns on building other new plants—such as the Diablo Canyon nuclear facility on the central California coast that was caught in a regulatory morass and had ended up costing about $11.5 billion. These costs would prevent them from being competitive. The legislation gave the investor-owned utilities the relief they needed—various ways to extricate themselves from the burden of what was called “stranded costs.” They too embraced restructuring. As for the new entrants, the merchant generators, there were two great prizes. One was the ability to sell power into the large California market; and the other, the opportunity to buy the power plants that the state was strongly “encouraging” the utilities to sell. “Every major group got what they wanted most,” said Mason Willrich, who later became chairman of the California grid operator. “But no one connected the dots.”

This restructuring was an extraordinary edifice in terms of political support. The entire California congressional delegation signed a letter urging the Federal Energy Regulatory Commission not to use federal authority to interfere with the plan. The political forces were so finely balanced that any alteration could cause the whole edifice to come tumbling down.

The objective was to dismantle the traditional natural monopoly in electric power. The new system, in the words of economist Paul Joskow, was “the most complicated set of wholesale market institutions ever created on earth and with which there was no real world experience.” It yoked together a deregulated market with a regulated market. Some compared it to having a bridge designed by consensus. The subsequent collapse of this particular bridge would demonstrate the hard-earned lessons of power markets.6



THE IRON CURTAIN

Wholesale markets were deregulated—that is, the markets in which the generators that operated the power plants sold power to the utilities that distributed it to customers. Prices in those markets would be free to fluctuate, in response to supply and demand. But the traditional retail markets—those between the utilities and their customers (home owners, factories, offices, and others)—were not deregulated. This meant that these consumers were to be protected—insulated—from rising prices. They, after all, were the ones who cast votes for governors and state legislators.

The result was to build an economic iron curtain between the wholesale and retail markets. The ultimate consequences would be devastating. Changes in wholesale markets, which would reflect those changes in supply and demand, would not flow as price signals into the retail markets—that is, to consumers. Thus consumers would have no incentive, no wake-up call, to make adjustments that would normally happen in response to rising prices (buying a more efficient air conditioner, putting a little more insulation in their walls). They would not get the message because it would not be transmitted to them.

In order to make the wholesale system function like a competitive market, the state’s utilities were ordered to divest themselves of a substantial number of their in-state power plants and sell them to other companies, which would operate them and in turn sell electricity into the open market. Here was the dissolution of the formerly vertically integrated utility—the kind of utility invented by Samuel Insull, which traditionally combined generation, transmission, and distribution within the borders of a single company. Many of these new merchant generators were out-of-state companies, a number of which had arisen during the era of deregulation.

Other key elements in the deregulation would make matters still worse. The first was that the scheme did not worry about capacity. Electricity is different from other commodities. Oil can be stored in tanks; grain, in silos; natural gas, in underground caverns. But electricity is the instantaneous commodity; here one second, gone the next. It is a business that operates with virtually no inventory.

Therefore, a “reserve margin” is needed. Reserves are the stabilizers, the extra production capacity—above projected peak demand—that can be called into operation in order to avoid a shortage. Maintaining such a margin is a basic rule of operations—the power system in its entirety needs to be large enough not just to cover average demand but the extremes of demand, with an additional reserve to allow for accidents or malfunctioning equipment. A state like California, which depends upon hydropower for part of its electricity, needs about a 20 percent reserve margin—20 percent extra capacity—in order to be ready for a spike in demand brought about by a heat wave or a drop in hydropower production because of drought. California’s new system, however, included no incentive or encouragement to ensure sufficient extra capacity to help deregulation work. At some points during the crisis, the reserve margin got as low as 1 percent—which was frighteningly low—essentially no reserve margin at all.
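
The reserve margin itself is simple arithmetic; the sketch below uses illustrative round numbers built around the percentages cited above:

```python
def reserve_margin(capacity_gw, peak_demand_gw):
    """Extra capacity above projected peak demand, as a fraction of peak."""
    return (capacity_gw - peak_demand_gw) / peak_demand_gw

# Illustrative: 60 GW of capacity against a 50 GW peak gives the roughly
# 20 percent margin a hydro-dependent state needs.
print(f"{reserve_margin(60.0, 50.0):.0%}")   # 20%
# At points in the crisis the margin shrank to about 1 percent:
print(f"{reserve_margin(50.5, 50.0):.0%}")   # 1%, essentially no reserve at all
```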

As part of the deregulation compromise, California also forbade utilities from signing long-term contracts for electricity supply with generating companies. This was a truly fundamental flaw. It is standard practice—and, indeed, good practice—to hold a portfolio of contracts, some that go out just a few months, others that go out for a couple of years. This kind of portfolio helps to provide a buffer against major surges in market prices that would result if capacity became tight. But since the California model assumed that prices would remain low forever, the state would not permit long-term contracts, which, while more expensive than the spot prices at the time, would have provided an insurance policy for consumers if spot prices shot up.7

“We had to sell our power plants, which was the heart of a reliable power system, but we were forbidden from doing long-term contracts,” said John Bryson, who was CEO of the parent of Southern California Edison, one of the state’s three major utilities. “Utilities have an obligation to serve their clients, but now there was no way for us to source power except from a spot market.”

California’s restructuring, with its disconnect between wholesale and retail markets, and its prohibition of the buffers against rising prices, meant that an enormous amount of risk was unintentionally being built into the new system for supplying electricity to the most populous state in the nation. One report did warn in 1997 that this system was “likely to lead to extended periods of low prices followed by periods of very high prices, as supply shortages and surpluses develop. Price volatility will not be conducive to a smooth transition to competition.” But few were listening.

The system would work well so long as no major changes in the supply-demand balance occurred and prices stayed down, which might have been the case had California remained mired in an economic downturn. But how quickly markets can change.

“Deregulation, California-style” officially went into effect in 1998. By then, California’s economy was already starting to recover, real estate was sizzling again, and the Internet was beginning to take off, giving a big boost to the Bay Area. All this was reflected in electricity consumption and a radical shift in the balance of supply and demand. Over a six-year period, California’s economy grew by 29 percent; its electricity use by 24 percent. But no significant new electricity generation was added. Indeed, after 1997 the state’s capacity actually went down as some older, inefficient plants were retired.8

California was arguably the most difficult state in the Union in which to site a new power project; the process was time-consuming and costly, the environmental review process was open-ended, and local community opposition could usually prevail. So for the additional supplies it needed, California drew on other western states and British Columbia—turning them into a sort of vast energy farm to feed its growing economy. That was fine as long as the out-of-state power was abundant and cheap. But states like Arizona were growing fast, and thus they were consuming more and more of their own power production. The year 1999 had been great for hydropower in the Northwest and British Columbia: mild winter, cool summer, and a lot of rain—which meant a lot of cheap hydropower.



“IT WAS MADNESS”

But 2000 was something else. A drought in the Northwest and Canada curbed the availability of hydropower. Meanwhile, power demand was surging in California, partly because of a hot summer, partly because of economic growth. More natural gas had to be pulled into power production. But natural gas supplies were tightening, and the price started to go up, which meant that the price of additional electricity—made from natural gas—also started to rise sharply.9

During the hot summer of 2000, the staff at the agency that managed the state’s power grid frantically shopped for additional power supplies. “We simply couldn’t make enough phone calls,” said one of its managers. “It was a Turkish bazaar. It was madness.” It was at this point that the state began to experience the first convulsions from the physical shortages of electricity. Utilities had to source power “on an hour-to-hour basis,” said John Bryson. And “no one knew what price would be bid in the next hour.” Moreover, the new market had been structured so that utilities had no visibility beyond an hour on the availability of power.

Many businesses had “interruptible” contracts, which meant that in exchange for lower rates they could be cut off if electricity went short. A steel company east of Los Angeles, which had had its electricity interrupted only once over fifteen years, now found its electricity cut off eighteen times in 2000—with only fifteen minutes’ notice to shut down all its operations. “We cannot run a business like this,” the president of the company declared. Infrastructure constraints in transmission, particularly between Northern and Southern California, added to the woes. The system was clearly breaking down. Yet still the state government did not react.

The crisis worsened as the year progressed. Utilities were spending five times as much to buy electricity in the wholesale market as they could sell it to retail customers for—an obviously untenable situation. But they could not do much about it. They were certainly not allowed to raise rates. Seven times Southern California Edison requested permission from the state’s public utility commission to gain protection by signing long-term power-supply contracts, and seven times the commission said no.10



“PIRATES” AND “PLUNDER”: CALIFORNIA AT SEA

By the beginning of 2001, the state was in the grip of a full-blown electricity crisis. It was now evident to everyone that the market was broken. As the crisis unfolded, delegations from as far away as Belgium and Beijing journeyed to America’s largest state to learn what had gone wrong. And plenty was going wrong. Utilities were accumulating tens of billions of dollars of losses. Governor Gray Davis announced that the state was living through an “energy nightmare,” produced by “price gouging” by “out-of-state profiteers” who were holding California “hostage.” He earnestly appealed to Californians to save electric power by putting their computers “on sleep mode” when not in use. He also threatened that the state would seize ownership of generating plants and go into the business of building power plants itself. The merchant generators, he declared, “have brought the state to the very brink of blackouts.”11

It was not just electricity that was in short supply. So was the political leadership and will to bring people together and adjust what has been described as the “extremely complex and untested system” that had just been put in place. One obvious answer would have been to permit price signals to work and allow at least some moderate increase in the retail rates paid by homeowners. Davis himself recognized that reality. “Believe me,” he said at one point, “if I wanted to raise rates, I could solve this problem in 20 minutes.” But he was adamant. He would not do that.

Instead he blamed everyone else, ranging from the utilities to the federal government. But, by far, his greatest wrath was reserved for companies headquartered out of state, particularly those in Texas, that had bought many of the generating plants and that were trading power. They were, he said, “pirate generators” out for “plunder.”12

This was not an environment conducive to collaboration and solutions. The crisis worsened. Spot prices for electricity were, on average, ten times what they had been a year earlier. State regulators began to ration power physically, which meant rolling blackouts. Meanwhile, as wholesale power prices went up, the financial positions of the state’s utilities became even more dire. Because of that iron curtain between the deregulated wholesale market and the regulated retail side, utilities were buying wholesale power for as much as $600 per megawatt-hour but were able to sell it to retail customers at a regulated rate of only about $60 per megawatt-hour. As one analyst put it, “The more electricity they sold, the more money they lost.”13
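
The arithmetic of that squeeze is stark. A minimal sketch, using the round numbers above:

```python
wholesale = 600.0  # $/MWh paid in the deregulated spot market at the peak
retail = 60.0      # $/MWh received under the frozen regulated retail rate

for mwh_sold in (1_000, 10_000, 100_000):
    loss = (wholesale - retail) * mwh_sold
    print(f"{mwh_sold:>7,} MWh sold -> ${loss:>11,.0f} lost")
# The loss grows in lockstep with sales: the more electricity the utilities
# sold, the more money they lost.
```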

The state was in an uproar; its economy, disrupted. In April 2001, after listening to Governor Davis threaten the utilities with expropriation, the management of PG&E, the state’s largest utility, serving Northern California, decided that it had no choice but to file for bankruptcy protection. San Diego Gas & Electric teetered on the edge of bankruptcy. The management of one of the state’s major utilities hurriedly put together an analysis of urban disruption to try to prepare for the distress and social breakdown—and potential mayhem—that could result if the blackouts really got out of hand. They foresaw the possibility of riots, looting, and rampant vandalism, and feared for the physical safety of California’s citizens.

But Governor Gray Davis was still dead set against the one thing that would have immediately ameliorated the situation—letting retail prices rise. Instead he had the state step in and negotiate, of all things, long-term contracts, as far out as twenty years. Here the state demonstrated a stunning lack of commercial acumen—buying at the top of the market, committing $40 billion for electricity that would probably be worth only $20 billion in the years to come. With this the state transferred the financial crisis of the utilities to its own books, transforming California’s projected budget surplus of $8 billion into a multibillion-dollar state deficit.14



“CRISIS BY DESIGN”

Many joined Davis in fingering the power marketers and merchant generators as the perpetrators of the crisis. They were charged with engaging in various trading and bidding strategies that took advantage of the crisis and with taking plants off-line to push up prices. But a Federal Energy Regulatory Commission review concluded that it “did not discover any evidence suggesting that” merchant generators were scheduling maintenance or incurring outages in an effort to influence prices. Rather the companies appeared to have taken whatever steps were necessary to bring the generating facilities back on line as soon as possible. Moreover, it turned out that publicly owned municipal power companies, led by the Los Angeles Department of Water and Power, were among those selling the highest-priced electric power.15

Postcrisis investigations revealed rapacious behavior on the part of some of the energy traders, who were middlemen between generators and utilities. This was particularly true of those from Enron, who wielded trading strategies with such vivid names as “Fat Boy,” “Ricochet,” and “Death Star.” Phone records captured their inflammatory conversations as they pursued their trading strategies through the crisis. The records also indicated that at least some of them were deliberately manipulating the movement of electricity supplies in and out of the state to try to drive up prices. Three traders subsequently admitted as much and pleaded guilty to conspiracy to commit wire fraud. By then, Enron itself was long gone. It was done in by a combination of factors: almost $40 billion of debt and obligations that it could not fund, accounting ruses and tricks that hid its true financial position and that depended upon a high stock price to avoid coming undone, a propensity to woefully overspend on investments and then not manage them well, and personal enrichment. When Enron filed for Chapter 11 in December 2001, it was the largest bankruptcy in American history.16


What was the impact of the traders on the crisis? One of the leading scholars on the topic, James Sweeney of Stanford University, concluded that the “amount and use of market power is unknown but subject to massive debate.” But the ability to wield market power in a very tight market, he added, would have greatly decreased had the state permitted retail prices to go up and allowed utilities to enter into long-term contracts. Trading in electric power goes on every day across the country without a crisis. That the traders sought to take advantage and make money out of the political and regulatory debacle in California is clear. But that they were not the fundamental reason for the crisis is also clear. The causes reside in the way the power market restructuring was designed in the face of shifting supply and demand.17

Indeed, what unfolded in California was what has been called a “crisis by design.”

By the summer of 2001 the crisis was easing. The state authorities had finally succumbed to economic reality and allowed retail prices to rise some. The expected happened: consumers reduced their consumption. In addition, the weather moderated compared with the previous year, and new electricity-generating capacity started to enter the system.

But it was not until November of 2003 that Governor Davis officially pronounced the crisis over. By then so was his own political career. The state’s voters had just turned him out of office in a special election—only the second governor in the history of the United States to be so dismissed. His successor was Arnold Schwarzenegger.

The Terminator became the Governator. His inauguration was a global event, attended by 650 journalists. Schwarzenegger inherited a $25 billion deficit, much of it the direct and indirect result of the power debacle. “California is in a crisis,” he said after he took the oath of office. “We have the worst credit rating in the country.” But, recalling his days of championship weight lifting, he declared with fortitude, “We are always stronger than we know.”

Gray Davis offered his own explanation for what had gone wrong: “I was slow to act during the energy crisis.” As he left office, he ruefully offered a lasting truism: “It’s a bummer to govern in bad times.”18



IN THE AFTERMATH

Almost a decade after the California crisis first began, the chairman of the Federal Energy Regulatory Commission offered his own judgment: “The California crisis was not a failure of markets,” he said. “It was a failure of regulation.”19

But still, in the rest of the country, in the aftermath of the California electricity crisis, the brakes were slammed on the movement toward deregulation. The result was to leave the United States with an “unintended hybrid” system. A map of the country reveals a patchwork among the states. About half of the utilities in the country are traditionally regulated, and half are subject to varying degrees of market competition. The utilities in the latter category own only small amounts of generation of their own within their service territories, or none at all. They are in the wires business—transmission and distribution—and thus buy electricity from generators. Yet underlining the hybrid nature of the system, several utilities today hold a portfolio of power plants, some operating in regulated markets and others operating in competitive markets.20 The markets open to retail competition are clustered in the Northeast, the Midwest, and Texas, while the Southeast is characterized by traditional regulation.

At the same time, at the wholesale level competitive markets for electricity have been expanding apace over the past decade. Even as California’s system flopped, other markets demonstrated what a well-designed power market actually looks like. The PJM Interconnection, which stretches from Pennsylvania and Washington, D.C., all the way to Chicago and includes all or parts of fifteen states, is one such market. It is the largest competitive power market in the world, serving 51 million people. PJM has deep roots, going back to a power pool that was established between Pennsylvania and New Jersey in 1927 to bring greater stability in electricity supply to the region. Today PJM operates both the high-voltage transmission system in its region and a competitive wholesale market, bringing buyers and sellers together on a real-time basis.

As for California, the state has kept its wholesale electricity markets open to competition. It now permits long-term contracts. In 2009, after several years of work, the state’s Independent System Operator (ISO) introduced a new market design. It incorporated experience from PJM and other systems as well as the painful lessons from what Mason Willrich, the chairman of the ISO, called the “flawed, flawed market” that had been put in place in California in the 1990s. This new design was intended to better reflect the true cost of electricity, including the cost of transmission congestion in the grid, and, with appropriate market monitoring, deliver the benefits of competition, rather than design a crisis.21

The major question today for electric power is no longer market design—regulation versus deregulation. Rather, it is fuel choice. Whatever the setup in different parts of the country, the United States faces the same question about the future of its electricity supply as do many other countries: What kind of generation to build? This struggle over fuel choice is not just about meeting today’s needs, but also about how to meet expected growth in demand—and new environmental objectives. Coal, nuclear power, and natural gas will all be part of the picture, both in the United States and around the world. Each, however, comes with its own constraints.


20

FUEL CHOICE

The prospects for electric power in the twenty-first century can be summarized in a single word: growth. Electricity consumption, both worldwide and in the United States, has doubled since 1980. It is expected, on a global basis, to about double again by 2030. And the absolute amount of the doubling this time will be so much larger, as it is off a much larger base. An increase on such a scale is both enormous and expensive. The cost for building the new capacity to accommodate this growth between now and 2030 is currently estimated at $14 trillion—and is rising. But that expansion is what will be required to support what could be by then a $130 trillion world economy.1

Such very big numbers generate very big questions—and a fierce battle. What kind of power plants to construct and, then, how to get them built? The crux of the matter is fuel choice. Making those choices involves a complex argument over energy security and physical safety, economics, environment, carbon and climate change, values and public policy, and over the basic requirement of reliability—keeping on not just the lights but everything else in this digital age. The centrality of electricity makes the matter of fuel choice and meeting future power needs one of the most fundamental issues for the global economy.

In the developing world, rising incomes and urbanization are driving demand. China literally doubled its electric power system between 2006 and 2010, and is likely to double it again in just a few years. India’s power consumption is expected to increase fivefold between 2010 and 2030. The challenge for developing countries is to increase reliability, ensure that power supplies keep up with economic growth, and avoid shortfalls that constrain growth. It is also to deliver electricity to the 1.6 billion people who have no access at all to electricity but instead burn kerosene or scrounge for wood or collect dung. Billions more receive electric power only part of every day, interrupted by shortages and blackouts, taking a toll on both daily life and economic growth.

In the developed world, increasing consumption is driven by the ever-expanding role of computers, servers, and high-tech electronics. This process has become so pervasive as to be taken for granted. To take a simple example, writing a book three decades ago was done on a manual typewriter, using carbon paper for copies; and research meant trips to the library and wandering through the stacks. Now the book is written on a computer, multiple drafts are produced on an electronic printer, much of the research is done over the Internet, and the final product is increasingly as likely to be read electronically as on the printed page.

In the United States, electricity consumption is expected to rise at about 1.4 percent per year. That sounds modest when compared with some developing countries today—or to the almost 10 percent growth in the 1950s in the United States when Ronald Reagan was extolling the “all-electric home.” But over 20 years, it means an absolute growth in demand of about a third. That is equivalent to about 150 new nuclear reactors or almost 300 new standard-size coal-fired plants. And every single new facility means a choice over fuels—and a wrangle over what to do.
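
The compounding behind that "about a third" is a one-line check, using the 1.4 percent rate cited above:

```python
# 1.4 percent annual growth, compounded over 20 years
factor = (1 + 0.014) ** 20
print(f"{factor - 1:.0%}")  # ~32%: an absolute increase of about a third
```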



MAKING POWER

Electricity is flexible not only in what it can be used for but also in terms of how it can be made. It is not a primary energy resource in itself, unlike oil or natural gas or coal. Rather it is a product generated by converting other resources. And it is very versatile in the making. Electricity can be made from coal, oil, natural gas, and uranium; from falling or flowing water; from the blowing wind and the shining sun. Even from garbage and old tires.2

Electric power is a classically long-term business. A power plant built today may be operating 60 to 70 years from now. It is also a big-ticket business—in fact, it is the most capital-intensive major industry in the United States. Fully 10 percent of all capital investment in the United States is embedded in the power plants, transmission lines, substations, poles, and wires that altogether make up the power infrastructure. A new coal plant may cost as much as $3 billion, assuming it can be built in the face of environmental opposition and uncertainty about carbon regulation. A new nuclear power plant may be double that—$6 billion or $7 billion or even more. Assuming the nuclear plant can make its way through the permitting process, it can take a decade or two to site and build, and its lifetime may ultimately extend into the next century.

Yet the rules, the politics, and the expectations keep changing, creating what economist Lawrence Makovich calls “the quandary.” The business itself is still subject to alternating currents of public policy—and dramatic swings in markets and popular opinion—that lead to major and abrupt changes in direction. The focus on climate change grows more intense. So does antipathy to building new plants. And it is not just the prospect of new coal or nuclear plants that engenders environmental opposition. Wind turbines and new transmission lines can also raise the ire of local publics.

How, in such circumstances, to meet the needs and close the gap between public expectations and what can actually be built? Both wind and solar still have to prove themselves on a systemic scale. (To each of these we will return later.) Efficiency and the smart grid could reduce or flatten out the growth curves.

The place to start is with the current mix. In the United States, coal’s share, once almost 55 percent, has declined somewhat to about 45 percent of all electric-power generation. Natural gas is next, at 23 percent and rising; and nuclear, at 20 percent. Hydropower is 7 percent; wind is almost 2 percent; and solar does not register. Over the decades, oil has been squeezed down from over 15 percent to just 1 percent. That is why, despite what is often said, increased renewable or nuclear power would have very little impact on oil use unless accompanied by very widespread adoption of electric cars that plug into the electric grid.

The other major developed regions are somewhat less reliant on coal. In Europe, nuclear, coal, and natural gas are all tied at 25 percent each. Hydro is 15 percent. Wind and oil are virtually neck and neck, at 4 and 3 percent respectively. Japan is 28 percent coal and 28 percent nuclear, followed by natural gas at 26 percent. Oil is 8 percent; hydro, 8 percent. Wind is negligible. In all three regions, solar has yet at this point to appear in any statistically significant way.

THE FUEL MIX

Electricity generation in 2009 by fuel type, in millions of gigawatt-hours



Source: IHS CERA


China and India, the world’s most populous countries, rank first and third, respectively, in coal consumption, with the United States placing second. In China about 80 percent of electricity is produced from coal, while this figure is 69 percent for India. Hydropower accounts for 16 percent of electricity production in China and 13 percent in India.3

The choices on fuel mix are determined by the constraints and endowments of region and geography. Thus, over 80 percent of Brazil’s electricity is hydropower. The choices are also shaped by technology, economics, availability, and the three Ps—policy, politics, and public opinion.

When it is all added up, however, on a global basis, a triumvirate of sources—coal, nuclear, and natural gas—will remain dominant at least for another two decades. As one looks further out in the years ahead, however, renewables grow, and the mix becomes less clear—and much more subject to contention.



COAL AND CARBON

Today 40 percent of the world’s electricity is generated from coal. Coal is abundant. The United States holds over 25 percent of known world reserves, putting it in the same position in terms of coal reserves as Saudi Arabia with respect to oil reserves. A new generation of ultra-supercritical power plants—operating under higher temperatures and pressures—is coming into the fleet. They are much more environmentally benign than the plants that would have been built a generation ago, and because of their greater efficiency they can emit 40 percent less CO2 for the same amount of power as a plant built a couple of decades previously. Today most scenarios have coal use growing on a global basis.

Between 1975 and 1990 the output of coal-generated electricity literally doubled in the United States. In those years, government policies restricted alternatives, and coal became the reliable, buildable generation source. Policies also promoted coal as a secure energy source and one not subject to political disruption. For many countries, that is still the case. But not in the United States and Europe, where carbon emissions are a major issue. Based on the chemical composition of coal and natural gas, and the greater efficiency of a combined-cycle gas turbine, coal produces more than twice as much CO2 per unit of electricity as does natural gas.
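
That comparison can be reproduced with representative numbers. The emission factors and plant efficiencies below are typical published round values, assumed here for illustration rather than taken from the book:

```python
# kg of CO2 per gigajoule of fuel energy, and plant efficiency (fraction of
# fuel energy converted to electricity). Representative round values.
coal = {"kg_co2_per_gj": 95, "efficiency": 0.36}  # conventional coal plant
gas = {"kg_co2_per_gj": 56, "efficiency": 0.52}   # combined-cycle gas turbine

GJ_PER_MWH = 3.6  # one megawatt-hour of electricity is 3.6 gigajoules

def kg_co2_per_mwh(fuel):
    return fuel["kg_co2_per_gj"] * GJ_PER_MWH / fuel["efficiency"]

ratio = kg_co2_per_mwh(coal) / kg_co2_per_mwh(gas)
print(f"coal emits {ratio:.1f}x the CO2 of gas per MWh")  # roughly 2.5x,
# i.e., more than twice as much
```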

In 2011 about 25 coal-fired plants were under construction in the United States. But political and regulatory opposition to coal on grounds of global warming has mounted to a level that makes it difficult to launch new conventional coal plants. Permits for coal projects already under construction are being challenged, and a number of new coal power projects have been canceled or delayed in the United States—even after entering advanced stages of development. Some environmental groups have made opposition to building new coal plants a top priority.4

At the same time, concerns about the health impact of emissions, aside from CO2, and water usage are leading to new regulations. These new rules will significantly increase the operating costs of existing coal plants. The expected price tag for compliance with such new environmental regulations will likely accelerate the retirement of a number of U.S. coal plants, though the pace is the subject of much debate. These new environmental requirements create a formidable gauntlet for any proposed new plant to run in order to make it through the regulatory approval process.5



CAPTURING THE CARBON

What then can be done to reconcile coal and carbon? That challenge preoccupies much of the power industry. Over the last 20 years—pushed by regulation and facilitated by the use of markets—the power industry and the equipment manufacturers that serve it have done a remarkable job in eliminating pollution. Some 99.9 percent of particulates, 99 percent of sulfur dioxide (SO2), and 95 percent of nitrogen oxides (NOx) have been eliminated from the emissions of new coal plants. But the carbon embedded in the carbon dioxide emitted by burning coal is an altogether different and much more intractable problem.6

The most prominent answer today is carbon capture and sequestration (or storage), better known by the shorthand CCS. To “sequester” something is to isolate it or set it apart; the concept here is to keep carbon out of the atmosphere by capturing it and burying it underground. “CCS is the critical future technology option for reducing CO2 emissions while keeping coal’s use above today’s level,” said the MIT study The Future of Coal.

CO2 can be captured in several ways, either before or after the coal is burned. Of the various methods, the only one that could likely be retrofitted to an existing coal plant is capturing the CO2 after the coal is burned; for the others, adaptation would be so expensive and complicated that it would be cheaper just to scrap the existing plant and build a new one.

However it is separated out, the captured CO2 is compressed into a “supercritical phase” that behaves like a liquid and is transported by pipeline to a site where it can be safely buried in a secure underground geological formation. The CO2 would be trapped, locked in, the key thrown away, presumably forever.

In principle, the technology is doable. After all, gases are already captured at various kinds of process facilities. CO2 is already transported by pipeline and pumped into old oil and gas fields to help boost production. But when all is said and done, those analogies are limited—different purpose, different geological conditions, not monitored in the way that would be required, and on a much smaller scale.

The proposed system for CCS is expensive and complex, whether one is talking about the technology or about the politics and the complicated regulatory maze at the federal and state levels.



“BIG CARBON”

And the scale here would be very, very large. It would really be like creating a parallel universe, a new energy industry, but one that works in reverse. Instead of extracting resources from the ground, transporting and transforming them, and then burning them, the “Big Carbon” industry would nab the spent resource of CO2 before it gets into the atmosphere, and transform and transport it, and eventually put it back into the ground. This would truly be a round-trip.

Indeed, this new CCS industry would be similar in scale to that of existing energy industries. If just 60 percent of the CO2 produced by today’s coal-fired power plants in the United States were captured and compressed into a liquid, transported, and injected into the storage site, the daily volume of liquids so handled would be about equal to the 19 million barrels of oil that the United States consumes every day. It is sobering to realize that 150 years and trillions of dollars were required to build that existing system for oil.
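
That equivalence can be checked with a rough calculation (the annual emissions figure and the density of compressed CO2 below are round assumptions, not numbers from the text). If U.S. coal-fired plants emit on the order of 2 billion tons of CO2 a year, then:

\[
0.6 \times \frac{2\times 10^{9}\ \text{t/yr}}{365\ \text{days/yr}} \approx 3.3\times 10^{6}\ \text{t/day}, \qquad
\frac{3.3\times 10^{6}\ \text{t/day}}{0.9\ \text{t/m}^{3}} \times \frac{1\ \text{bbl}}{0.159\ \text{m}^{3}} \approx 23\ \text{million barrels/day}
\]

That is indeed on the same order as the roughly 19 million barrels of oil consumed daily.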

Though CO2 is a normal part of the natural environment, at very high levels of concentration it is poisonous. The scientific consensus is that the CO2 could be stored with little or no leakage. “Geological carbon sequestration is likely to be safe, effective, and competitive with many other options on an economic basis,” in the words of the MIT report. But it adds: “Many years of development and demonstration will be required to prepare [CCS] for successful, large-scale adoption.” What happens if there is a leak? Who is legally responsible to fix it? Who is legally liable? Indeed, who owns the CO2? Who manages it and monitors it—and how? What is the reaction of people who live above the storage? Who writes all the legal and regulatory rules that need to be created? And, fundamentally, will public acceptance, if not outright embrace, be sufficient to build and operate a vast CCS system?7

Then there is, of course, cost. Estimates today, based on experimental projects, suggest that CCS could raise the price of coal-fired electricity by 80 to 100 percent. That can work if a significant price is put on carbon either through a cap-and-trade system or a tax. Such a carbon charge would push up the cost of conventional coal generation without carbon capture, making coal-fired electricity with CCS competitive with conventional coal generation.
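
A simple illustration of how a carbon charge shifts the comparison (the emission rates here are stylized assumptions, not estimates from the text): a conventional coal plant emits roughly one ton of CO2 per megawatt-hour, so a carbon price of p dollars per ton adds about p dollars per megawatt-hour to its cost, while a CCS plant capturing 90 percent of its CO2 pays the charge on only the remaining tenth:

\[
\Delta C_{\text{conventional}} \approx 1.0\,p\ \$/\text{MWh}, \qquad
\Delta C_{\text{CCS}} \approx 0.1\,p\ \$/\text{MWh} + \text{capture cost}
\]

At p = $50 per ton, for example, conventional coal power would carry an extra $50 per megawatt-hour (about 5 cents per kilowatt-hour), offsetting much of the CCS cost premium.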

Still there is nothing yet close to a large-scale plug-and-play-type system for managing carbon. A few pilot projects integrating CCS with existing power plants are now under way. “The pace is insufficient,” said Professor John Deutch of MIT. It will take billions of R&D dollars and several large-scale demonstration projects and a decade and a half or more to get to the point where CCS starts to become commercial. It is an engineering challenge—“heavy-duty, large-scale process engineering... relentlessly squeezing cost and performance improvements out of large-scale chemical engineering facilities.”8

If CCS is still in the future in commercial terms, will coal plants get built in the interim? They may be designed to be “capture ready,” although it’s not clear what kind of technology and system they should be ready for. Still, CCS will likely end up part of the solution to carbon in electric power.

In the meantime, the innovation imperative for clean coal will be very strong. Perhaps some other technologies will be developed that will offer a different solution to carbon—and perhaps cheaper and less complex. Or perhaps ways will be found to transform the waste product created from burning coal into something itself of value and use. In other words, transform CO2 from a problem into a valuable commodity. The incentive is certainly there.



THE RETURN OF NUCLEAR

In a carbon-conscious world, nuclear power’s great advantages are not only the traditional ones of fuel diversification and self-sufficiency. It is also the only large-scale, well-established, broadly deployable source of electric generation currently available that is carbon free.

Nuclear power continues to make up about 20 percent of total U.S. electric generation, as in the 1980s. But how can that be possible? United States electricity consumption has virtually doubled since 1980; yet no new nuclear plants have been started in more than three decades, and the United States has about the same number of operating nuclear units today as in the mid-1980s. How could nuclear power hold on to its 20 percent share of this much larger output?

The way that nuclear has maintained its market share is through dramatic improvements in operations. In the mid-1980s, operating problems took plants off-line so that, on an annual basis, they operated at only about 55 percent of their rated total generating capacity. Today, as the result of several decades of experience and an intense focus on performance—including recruitment of those veterans from Rickover’s nuclear navy—nuclear plants in the United States operate at over 90 percent of capacity. That improvement in operating efficiency is so significant in its impact that it can almost be seen as a new source of electric power in itself. It is almost as though the nuclear fleet had been doubled without actually building any new plants.
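
The arithmetic is straightforward. Annual output equals rated capacity times capacity factor, so lifting the fleet’s capacity factor from about 55 percent to over 90 percent raises output from the same plants by roughly 60 percent:

\[
\frac{\text{Output}_{\text{today}}}{\text{Output}_{\text{mid-1980s}}} \approx \frac{0.90}{0.55} \approx 1.6
\]

Together with the capacity uprates described below, the effect approaches a doubling.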



A NEW LEASE ON LIFE

In addition to its much-improved operating and economic record, U.S. nuclear power has received another very important boost, without which it would indeed have begun to fade away. Nuclear power plants require a license to operate. This process involves years of applications, reviews, and challenges. (It is estimated that the cost of applying for a new nuclear license today is as much as half a billion dollars.) The operating licenses—granted by the NRC, the Nuclear Regulatory Commission (and before that by its predecessor, the Atomic Energy Commission)—lasted 40 years. That length of time was based, as the NRC puts it, “on economic and antitrust considerations, not technical limitations.” Whatever happened at the end of those 40-year terms would be a turning point for nuclear power, one way or the other, and would determine if nuclear power had any future in the United States.

In 1995 Shirley Ann Jackson, a physicist from Bell Labs, became the chair of the Nuclear Regulatory Commission. Licensing was at the top of her agenda. The end of the 40 years was starting to come into view for many plants, and with that the specter that the nuclear fleet would have to be shut down and decommissioned—unless the NRC extended their licenses for another twenty years. And could it be done in time?

“Some components in plants do wear out, and they need to be replaced,” Jackson later said. “If a plant is coming closer to the end of its licensing period, there is less incentive to invest, which could actually lead to premature shutdown of plants. To put it simply, we were potentially going to lose a significant amount of electricity.”9

The operating record of the nuclear industry had clearly improved, and substantially so. In fact, companies were coming to the commission to request permission for power upgrades, above what had been their maximum output, because of their increased efficiency. In support of license extension, the NRC launched a crucial new initiative to update the safety system that governed the industry, using new tools and capabilities.

To date, the NRC has given extensions to about half of the 104 commercial reactors in the United States. Without those extensions, nuclear power plants in the United States would be in the process of shutting down today. Even with extensions, there is still, in view of the growth ahead, the question of maintaining the 20 percent nuclear share of electricity. Part of that is being achieved by upgrading the permitted capacity of existing plants. But new plants will be needed as well.10



“WE ARE GOING TO RESTART”

In February 2010 the Obama administration announced loan guarantees—to the Southern Company and its partners—to build the first two new nuclear plants in the United States in many decades. It did so under the Energy Policy Act of 2005, which provides not only federal loan guarantees but also tax incentives for the first six gigawatts of nuclear capacity to come online by 2020. The units are going to be built at the existing Vogtle plant in Georgia. “We are going to restart the nuclear industry in this country,” pledged the White House energy “czar.” The first six projects are also eligible for several hundred million dollars of federal funds to compensate them for any “breakdown in the regulatory process” or litigation. This innovative provision was introduced to offset the way in which regulatory processes and litigation drag on for decades, dramatically driving up costs. In effect, the federal government is insuring the developers against actions by other parts of the government that cause inordinate, expensive delays.11

This set of policies recharged the prospects for nuclear power in the United States. Some 30 new reactors were proposed, 20 of them with specific sites and reactor types. All of the 20 would be built on existing nuclear sites, alongside currently operating plants. Subsequently, many of these proposals faded away in view of the still-challenging regulatory and cost environment.

One critical objective in the new designs is to incorporate more passive safety features. Another is to standardize the reactor designs. “One of the greatest missed opportunities with our current fleet of reactors was the failure to standardize around a limited number of designs,” said Gregory Jaczko, the current chairman of the NRC. “That is not an efficient approach from a regulatory standpoint or an operational standpoint.”12

One potential solution is a new variety of small and medium reactors—or SMRs, as they are known. Because of their size they should in principle be easier to site, and their simplified designs—and use of modular units—should bring down costs and shorten construction times. Indeed, the idea is to achieve economies of scale not by size, as was traditionally the case with reactors, but by manufacturing SMRs modularly and in greater volume. At the same time, SMRs would reduce the financial risk and complexity that come with the development and construction of large reactors.13 Yet it will likely take years for SMRs to be realized technically and for their economic viability to be established.



“DEEP GEOLOGIC STORAGE”

A perennial uncertainty is how to handle nuclear waste at the end of the fuel cycle. In the United States, despite the expenditure of many billions of dollars and two decades of study, the development of a deep underground repository in Yucca Mountain in Nevada—first proposed in 1987—remained stalemated. In 2010 the Obama administration officially pulled the plug on Yucca Mountain. In France, used nuclear fuel is reprocessed; that is, the waste is treated to recover uranium and plutonium, which can be reused. The used fuel that is left over is highly radioactive waste that is vitrified—essentially turned into glass—and stored for later disposal.

Nuclear waste has, for many years, seemed an almost insoluble problem, at least politically in the United States. But when seen in relative terms, the problem of nuclear waste starts to look different. The physical amount of nuclear waste that would have to be stored is only a tiny fraction of the amount of carbon waste that would have to be managed and injected underground with a major carbon-storage program. All the nuclear waste generated by the entire civilian nuclear program would fill no more than a single football field to the height of ten yards. By comparison, the output of CO2 from a single coal plant, put into compressed form, would require about 600 football fields—and that would be just one year’s output.

Moreover, thinking has changed about the criterion that was established for “deep geologic storage”—10,000 years risk-free underground. Specifically, that requirement means that the people living near such storage would receive no more than 15 millirem of radiation a year for the next 10,000 years—equivalent to the amount of radiation that one receives in three round-trip transcontinental flights. But 10,000 years is a very long time. Going backward, it predates the rise of human civilization by several thousand years.

Is there not a different way to handle the problem? As it is, the nuclear waste, when first generated, is stored for several years in onsite pools while it cools off. A consensus is developing that the better course is to store it in specified, controlled sites, in concrete casks, with a timeframe of 100 years that would provide time to find longer-term solutions—and perhaps find safe ways to use the fuel again.

But waste ties into another, more intractable issue.



PROLIFERATION

In October 2003 a German freighter named the BBC China picked up its cargo in Dubai, in the Persian Gulf, and then made its way through the Strait of Hormuz into the Suez Canal on the way into the Mediterranean and its destination, the Libyan capital of Tripoli. The voyage appeared uneventful. But the ship was being carefully monitored. Partway through the canal, the captain was abruptly ordered to change direction and head toward a port in southern Italy. A search there revealed that the ship was clandestinely carrying equipment for making a nuclear bomb.

The interdiction actually speeded up a process that had begun earlier in the year and that would, by the end of 2003, lead Libya to begin to normalize relations with the United States and Britain and reengage with the global economy (until civil war erupted in Libya in 2011). In the course of so doing, Libya renounced its pursuit of weapons of mass destruction, specifically nuclear weapons, and turned over the equipment it had already received, along with detailed plans it had acquired about how to make an atomic bomb. It also paid compensation to the families of those killed when a Pan Am passenger jet was blown up over Lockerbie, Scotland.14

The handwritten notations on the plans made abundantly clear where the nuclear know-how had come from. A network run by A. Q. Khan had promised a full nuclear weapons system to the Libyans for $100 million. Known as the father of Pakistan’s atomic bomb and celebrated as a national hero in Pakistan, Khan had stolen the designs for centrifuges while working for a company in the Netherlands. After returning to Pakistan, he had supervised the acquisition from a global gray market of the equipment and additional know-how that culminated in 1998 in Pakistan’s first atomic weapons test and turned it into a nuclear-weapons state. But as the years had gone on, Khan had also turned himself into the world’s preeminent serial proliferator, with a network that could sell weapons capability to whoever would buy it. Khan’s international network played a primary role in helping both Iran and North Korea in their quest for nuclear weapons. And Khan and his network were very open about advertising their capabilities at symposia in Islamabad and even taking promotional booths at international military trade shows.

After the interception of the BBC China, an embarrassed Pakistani government sought to distance itself from Khan. He was arrested and compelled to go on television to apologize—after a fashion. “It pains me to realize in retrospect that my entire life achievements of providing foolproof national security to my nation could have been placed in serious jeopardy on account of my activities which were based on good faith but on errors of judgment,” he said. He was put under house arrest, but then after a few years was pardoned.15

Khan’s grim specter haunts the global nuclear economy. For he graphically demonstrated not only the existence of a covert global marketplace for nuclear weapons capability but also how the development of nuclear power can be a mechanism, as well as a convenient cloak, for developing nuclear weapons.

When it comes to proliferation, civilian nuclear power can bridge into nuclear weapons at two key points. The first is during the enrichment process: the same centrifuges that enrich uranium to the 3 to 5 percent concentration of the U-235 isotope typical of power-reactor fuel can continue up to the roughly 90 percent concentration necessary for an atomic bomb. That appears to be the route Iran is taking. The other point of risk occurs with the reprocessing of spent fuel. Reprocessing substantially reduces the amount of high-level waste that has to be stored. It involves extracting plutonium from the spent fuel, which can then be reused as a fuel in reactors. However, plutonium is also a weapons-grade material, and it can be diverted to build a nuclear device, as India did in the 1970s, or it can be stolen by those who want to make their own atomic bomb.

The great argument in favor of reprocessing is that it gets more usage out of a given amount of uranium and thus extends the fuel supply. The counterargument is that it expands the dangers of proliferation and terrorism. The risks provide the rationale for avoiding reprocessing and instead keeping spent fuel in interim storage in order to leave time for better technological answers over the next century. Moreover, there is no shortage of natural uranium.

Overall, it is clear that a global expansion of nuclear power will require a stronger antiproliferation regime. The Nuclear Non-Proliferation Treaty, implemented by the International Atomic Energy Agency, is built on safeguards and inspections, but the advance of Iran’s nuclear weapons program demonstrates the need for improving the system. But it is also clear that negotiating a new regime will be extremely difficult.

Safety would always be a fundamental concern. It was recognized that a nuclear accident somewhere in the world or a successful terrorist breach of a nuclear power plant could once again arouse public opposition and stall nuclear power development. The latest generation of nuclear reactors aims to enhance safety with simpler designs and even passive safety features. They are also intended to reduce risks of nuclear proliferation and to downsize the amount of spent fuel that needs to be stored. The next generation of reactors is intended to carry these objectives further.



NUCLEAR RENAISSANCE

Today nuclear power represents 15 percent of total world electricity. A good deal of new capacity has come on line since the beginning of the century—just not in the United States and Europe. Between 2000 and 2010, 39 nuclear power plants went into operation. Most of those were in Asia. Indeed, about four fifths of the 60 units currently under construction are in just four countries—China, India, South Korea, and Russia. China embarked on a rapid buildup to more than quadruple its nuclear power capacity by 2020 and aims to have almost as many nuclear plants by then as does the United States. Both India and South Korea are also targeting substantial growth.16

Nuclear power is also on the agenda for other countries. In December 2009 the United Arab Emirates, facing rapidly rising demand for electricity and concerned about shortages of natural gas for electric generation, awarded to a South Korean consortium a $20 billion contract to build four nuclear reactors. Cost was not the only reason. It was also because South Korean companies had built more nuclear reactors in the preceding several years than those of any other country. The UAE expects the reactors to start becoming operational in 2017.17

This expansion became known as the “nuclear renaissance.” Even in Europe, the opposition that had blocked nuclear power since the rise of the Green political parties and the days of Chernobyl seemed to be ebbing away. Finland is building a new reactor, its fifth, on an island in the Baltic Sea, although its cost overruns have become a subject of great controversy. Nevertheless, Finland has said it will go ahead with two new reactors. In Britain, climate change and dwindling supplies of North Sea natural gas opened a public discussion about building up to ten new nuclear power plants. The coalition government, led by Conservative David Cameron, reaffirmed the previous government’s commitment to nuclear power, despite the opposition of its junior coalition partner, the Liberal Democrats, which has a traditional European-Green orientation. In Sweden, public opinion now ranks CO2 as a bigger threat than radioactive waste. Sweden has shut down two nuclear plants, but ten are still operating and, in fact, are being upgraded in terms of capacity. While “decommissioning” is still formally on the books, in reality nothing of the sort is likely to happen. As a senior Swedish official put it, “decommissioning is still an official policy.” “But,” he added, “any further decommissionings are as likely in 30 years—or 300 years—as in three years.”18

Even Germany seemed set for a turnaround. In 1999 the Social Democratic–Green coalition had decided to “phase out” the country’s 17 reactors. More than a decade later, Germany remained officially committed to the phaseout of nuclear power, which currently supplies over a quarter of its electricity. But Christian Democrat Chancellor Angela Merkel conveyed her strong support for nuclear generation and called the phaseout “absolutely wrong.” In 2010 a new law extended the life of Germany’s nuclear reactors by an average of twelve years, although opposition parties vowed to challenge the extension in court.19 The chancellor strongly reaffirmed her conviction that nuclear power needed to be part of the power mix.

France is building one massive new reactor and already accounts for about half of Europe’s total nuclear power–generating capacity. And, as it turns out, nuclear power is under some circumstances just too good a deal to pass up. Italy, like Germany, has a moratorium on new nuclear power; yet despite their official opposition, both countries import a good deal of nuclear-generated electricity from the world’s largest exporter of electricity—France.20

In addition to France, the other major industrial country with a strong commitment to nuclear power was Japan. It targeted 40 percent of its electricity to be nuclear by 2020 and then aimed to go even further and derive half of its electricity from nuclear in 2030. It was a determined national commitment.

That too was part of the nuclear renaissance.



FUKUSHIMA DAIICHI

Then came the earthquake. The collision between two tectonic plates off the coast of Japan on March 11, 2011, set off the most powerful earthquake ever registered in Japan and a tsunami on a scale never imagined. The giant wave overwhelmed the sea defenses along Japan’s northeast coast, taking a terrible toll in human life.

Certainly a wave so huge had never been imagined when the Fukushima Daiichi nuclear station had begun operating four decades earlier. The complex was little damaged by the earthquake itself. As soon as the earthquake struck, the reactors “scrammed”—shut down automatically—as they were supposed to. Along with much of the power in the region, the electricity that supplied the station was knocked out, putting the complex into a precarious situation called “station blackout.” The response to that point was according to plan. The backup power system was supposed to kick in, but the tsunami had been much higher than the sea wall, and it flooded the station, including the backup generator, so that it could not operate. That meant no lights in the control room. No readings on the controls. No ability to operate equipment. And, most crucially, no way to keep the pumps working that delivered water to the reactors.

The backup power was the safety margin. When hurricanes Katrina and Rita knocked out the electric grid along the U.S. Gulf coast in 2005, the backup diesel-powered electricity kept the nuclear plants in proper operating condition until the external power could be restored. But after the tsunami, without the power to keep the pumps working, the reactors were deprived of the critical coolant they needed to moderate the heat generated by the chain reactions.

That loss of coolant was what set off the nuclear accident, which unfolded over weeks: explosions of hydrogen, roofs blown off the containment structures, venting and spread of radiation, fires, and, most critically, the partial meltdown of the nuclear cores. Workers, suited up against the radiation, working only by flashlight and listening for hydrogen explosions, risked their lives struggling to bring water into the reactors, drain out radioactive water, get the emergency power back on, and enable the control equipment to start working again. Thousands of people in the area were evacuated. As the weeks went on, the accident, originally rated as a 4, was raised to a 5 and then a 7, the highest level, the same assigned to the Chernobyl accident a quarter century earlier, although the actual effects in terms of radiation release at Fukushima Daiichi appeared to be much lower. Still, the extent of the accident was such that it was estimated that it would take six to nine months to reach what was called a “cold shutdown.” Some or all of the reactors would be damaged beyond repair and would be complete write-offs.

What was also damaged was the global prospect for nuclear power. The structural integrity of the complex had held up well in the earthquake. The accident was the result of an immense act of nature—and what proved to be poor decisions in understanding the potential size of a tsunami, protecting the site, and in positioning the backup power system. If the plant had not been flooded, the accident would almost certainly not have occurred. In addition, the Japanese governmental system was overwhelmed trying to deal with the nuclear accident. As a government report on the accident put it, “Consistent preparation for severe accidents was insufficient.”

But the fact that it did occur, and the difficulty—and the time required—to get it under control, shook the confidence in nuclear power that governments and publics around the world had built up in the quarter century since Chernobyl.

Japan itself faced what was estimated as a $300 billion cost to recover from the earthquake and the tsunami, the most expensive price tag on any natural disaster ever. The credibility of the nuclear industry was gravely injured. But nuclear power would continue to be part of Japan’s energy mix, although siting new plants will likely be even more difficult for some years, and there will be much closer scrutiny of existing plants and operations. The goal of 50 percent nuclear almost certainly will be abandoned, with greater reliance placed instead on imported LNG, increased emphasis on efficiency and renewables, particularly solar and possibly geothermal, and a stepped-up research effort.

The most dramatic turnaround was in Germany. Three days after the accident, German Chancellor Merkel disavowed the nuclear option. She ordered the closing of seven nuclear power plants at least temporarily and withdrew her support for life extension for existing plants. The accident in Japan “had changed everything in Germany,” she said. “We all want to exit nuclear power as soon as possible and make the switch to supplying via renewable energy.”21 Several weeks later, her government made it official, ordering the closing of all the German nuclear plants by 2022.

The European Union called for “stress tests” for all nuclear reactors. Other countries were more muted in their reactions. Britain said it would continue to allow work to move ahead on new nuclear plants. France reaffirmed its deep commitment to nuclear power but launched a wide-ranging safety check.

China has the most aggressive nuclear-development program in the world. Following the accident, Beijing ordered a temporary suspension in nuclear project approvals. This strengthened central government authority over nuclear development. Beijing had already been concerned about safety and execution, given the breakneck speed at which provinces were moving ahead. This will likely lead to a switch to more third-generation plants, which have more built-in safety features. Nevertheless, China is likely to remain on course to add as many as 60 to 70 new nuclear plants by 2020, which would give it a nuclear fleet rivaling that of the United States in size.

In the United States, the Nuclear Regulatory Commission launched a safety review. But also in the weeks following the accident, the NRC extended the license of one nuclear plant and gave approval to the next stage of the development of the new nuclear units in Georgia. The Obama administration said it would continue to support nuclear power as it sought to incorporate lessons learned from the accident into regulations. But, within the industry, the disaster at Fukushima was causing a rethink of plans. A month after the accident, NRG, a large power-generating company, announced it was backing out of plans to build the largest nuclear project in the United States. “Look at our situation,” said David Crane, CEO of NRG. “We responded to the [federal] inducements back in 2005.” But, he continued, “you couldn’t move it forward. Nothing was going to happen except we were going to continue to spend money, month after month, which we have been doing for five years.”22

Fukushima Daiichi demonstrated again the impact that a nuclear accident can have around the world. While it did not stop nuclear power in its tracks, “nuclear renaissance” is not a term likely to be heard in the years immediately ahead. One consequence will be to tilt development of new plants to more advanced designs, which incorporate passive safety features so that, for instance, cooling in an emergency would not require electricity from backup diesel generators. Many countries will still choose to include nuclear power in their energy mix for a variety of reasons—extending from zero carbon to energy independence, to the need for base-load power, to avoiding brownouts and blackouts with all the costs that they bring. But economics will also count, and in the United States, even before Fukushima Daiichi, something else was making the competitive prospects for nuclear power more challenging: the surge of inexpensive unconventional natural gas.



POWER AND THE SHALE GALE

Natural gas is the other obvious fuel choice. The breakthroughs in unconventional gas—specifically the shale gale—hold out the prospect that very large volumes will come to market at relatively low cost. That is changing the choices and calculations for electric power. John Rowe is the CEO of Exelon, which has the largest nuclear fleet in the country. But the arrival of shale gas has changed his calculations. “Inexpensive natural gas produces cheaper, clean electricity,” he said. “Cheap gas will get you if you bet against it.” This shift in perspective and expectations could lead to the building of a significant amount of new natural gas generation.23

That possibility may remind some of the dash for gas in the late 1990s that ran right into the wall of tight supplies and rising prices and ended in distress and bankruptcies. But now the arrival of unconventional gas portends low prices and abundant supplies for many decades or even a century or more. What is also different from a decade ago is that there now exists an urgency to find lower-carbon solutions. Natural gas has also gained a new role—as the enabler of renewables, which are not always available when one wants them, or needs them most. Gas-fired generation would swing into action when the wind dies down and the sun doesn’t shine.



BUT HOW MUCH?

For all these reasons it is virtually inevitable that an increasing share of power generation will be fueled by natural gas. But how much? Some argue that the natural gas capacity that is already in place can be used to replace more carbon-intensive coal. But a good part of that natural gas capacity needs to be kept available as “peaking” or surge capacity to balance the overall power flows when demand increases, whether at six in the evening when people get home from work and switch everything on, or when a heat wave causes a sudden increase in air-conditioning use. Without this kind of flexibility, the overall transmission system would become unstable, leading to brownouts and potentially catastrophic blackouts.

But what about building only natural gas facilities for new capacity? That is not likely. A utility is looking out many decades because of the large capital costs and because of the long life of a unit being built today. It is too risky to overcommit to one approach when technology, expected fuel costs, regulation, public opinion, and ranking of risks can change sometimes with abrupt speed. Diversification is the basic strategy for protecting against uncertainty and unexpected change. Moreover, while natural gas is lower in carbon, it is not carbon free. So natural gas can help reduce emissions substantially in the short and medium term, but even it could be under pressure in a couple of decades—unless carbon capture and storage works for natural gas as well as coal-fired generation.

Still, gas usage in the U.S. power sector could increase substantially—and all the more so if power demand surges and if efficiency and renewables do not deliver on what is expected and utilities thus need to do something quickly. Gas-fired capacity is the most likely default option. This is true not only in the United States. It is also likely that natural gas–fired generation will grow significantly in Europe and in China and India if unconventional gas development succeeds in those countries.

For many years to come, the power industry will be struggling with the question of what to build and what to shut down and its overarching quandary of fuel choice.

But the decisions about fuel choice will be based not only on energy considerations but also on what has come to loom increasingly large—the climate agenda. It may seem that this concern about climate is a recent development. In fact, the focus on the atmosphere and how it works has been building for a very long time.


PART FOUR

Climate and Carbon


21

GLACIAL CHANGE

On the morning of August 17, 1856, as the first sunlight revealed the pure white cone of a distant peak, John Tyndall left the hotel not far from the little resort town of Interlaken in Switzerland and set out by himself, making his way through a gorge toward a mountain. He finally reached his destination, the edge of a glacier. He was overcome by what he encountered—“a savage magnificence such as I had not previously beheld.” And then, sweating with great exertion but propelled by a growing rapture, he worked his way up onto the glacier itself. He was totally alone in the white emptiness.

The sheer isolation on the ice stunned him. The silence was broken only intermittently, by the “gusts of the wind, or by the weird rattle of the debris which fell at intervals from the melting ice.” Suddenly, a giant cascading roar shook the sky. He froze with fear. He then realized what it was—an avalanche. He fixed his eyes “upon a white slope some thousands of feet above” and watched, transfixed, as the distant ice gave way and tumbled down. Once again, it was eerily quiet. But then, a moment later, another thundering avalanche shook the sky.1



“A SENTIMENT OF WONDER”

It had been seven years earlier, in 1849, that Tyndall had caught his first glimpse of a glacier. This occurred on his first visit to Switzerland, while he was still doing graduate studies in chemistry in Germany. But it was not until this trip in 1856 that Tyndall—by then already launched on a course that would eventually rank him as one of the great British scientists of the nineteenth century—came back to Switzerland for the specific purpose of studying glaciers. The consequences would ultimately have a decisive impact on the understanding of climate.

Over those weeks that followed his arrival in Interlaken in 1856, Tyndall was overwhelmed again and again by what he beheld—the vastness of the ice, massive and monumental and deeply mysterious. He felt, he said, a “sentiment of wonder approaching to awe.” The glaciers captured his imagination. They also became an obsession, repeatedly drawing him back to Switzerland, to scale them, to explore them, to try to understand them—and to risk his life on them.

Born in Ireland, the son of a constable and sometime shoemaker, Tyndall had originally come to England to work as a surveyor. But in 1848, distressed at his inability to get a proper scientific education in Britain, he took all his savings, such as they were, and set off for Germany to study with the chemist Robert Bunsen (of Bunsen burner fame). There he assimilated to his core what he called “the language of experiment.” Returning to Britain, he would gain recognition for his scientific work, and then go on to establish himself as a towering figure at the Royal Institution. Among his many accomplishments, he would provide the answer to the basic question of why the sky is blue.2

Yet it was to Switzerland that he returned, sometimes almost yearly, to trek through the high altitudes, investigate the terrain, and, yoking on ropes, claw his way up the sides of mountains and on to his beloved glaciers. One year he almost ascended to the top of the Matterhorn, which would have made him the first man to surmount it. But then a sudden violent storm erupted, and his guides held him back from risking the last few hundred feet.

Tyndall grasped something fundamental about the glaciers. They were not stationary. They were not frozen in time. They moved. He described one valley where he “observed upon the rocks and mountains the action of ancient glaciers which once filled the valley to the height of more than a thousand feet above its present level.” But now the glaciers were gone. That, thereafter, became one of his principal scientific preoccupations—how glaciers moved and migrated, how they grew and how they shrank.3

Tyndall’s fascination with glaciers was rooted in the conviction held by a handful of nineteenth-century scientists that Swiss glaciers were the key to determining whether there had once been an Ice Age. And, if so, why had it ended? And, more frightening, might it come back? That in turn led Tyndall to ask questions about temperature and about that narrow belt of gases that girds the world—the atmosphere. His quest for answers would lead him to a fundamental breakthrough that would explain how the atmosphere works. For this Tyndall ranks as one of the key links in the chain of scientists stretching from the late eighteenth century until today who are responsible for providing the modern understanding of climate.

But how did climate change go from a subject of scientific inquiry that engaged a few scientists like Tyndall to one of the dominating energy issues of our age? That is a question profoundly important to the energy future.



THE NEW ENERGY QUESTION

Traditionally, energy issues have revolved around questions about price, availability, security—and pollution. The picture has been further complicated by the decisions governments make about the distribution of energy and money and access to resources, and by the risks of geopolitical clash over those resources.

But now energy policies of all kinds are being reshaped by the issue of climate change and global warming. In response, some seek to transform, radically, the energy system in order to drastically reduce the amount of carbon dioxide and other greenhouse gases that are released when coal, oil, and natural gas—and wood and other combustibles—are burned to generate energy.

This is an awesome challenge. For today over 80 percent of America’s energy—and that of the world—is supplied by the combustion of fossil fuels. Put simply: the industrial civilization that has evolved over two and a half centuries rests on a hydrocarbon foundation.



THE RISE OF CARBON

Carbon dioxide (CO2) and other greenhouse gases, like methane and nitrous oxide, are part of the 62-mile-high blanket of gases that make up the atmosphere. It is all that separates us from the emptiness of outer space. About 99 percent of the atmosphere is composed of just two gases, nitrogen and oxygen. While carbon dioxide and the other greenhouse gases are minute in their concentrations, they play an essential role. They are the balancers. The short-wave radiation of sunlight passes largely unhindered through the atmospheric gases on its way to the earth’s surface. The earth in turn sends this heat back toward the sky—but not in the same form in which it was received. The earth’s surface absorbs the short-wave radiation and re-emits it as longer-wave infrared radiation.

Without CO2 and the other greenhouse gases, the departing infrared rays would flow back into the vastness of space, and the air would freeze at night, leaving the earth a cold and lifeless place. But owing to their molecular structure, the greenhouse gases, including water vapor, prevent that. They trap some of the heat carried by the infrared rays and redistribute it throughout the atmosphere. This balance of greenhouse gases keeps temperatures within a band, not too hot or too cold, making the earth habitable—and more than that, hospitable to life.
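
The standard textbook calculation makes the point quantitatively (the figures are conventional physics values, not numbers from this chapter). Balancing the sunlight the earth absorbs against the infrared it radiates, with a solar flux of about 1,361 watts per square meter, an albedo of about 0.3, and the Stefan–Boltzmann constant σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴, gives the effective temperature of an earth without greenhouse gases:

\[
\sigma T^{4} = \frac{S(1-a)}{4} \quad\Rightarrow\quad T = \left(\frac{1361 \times 0.7}{4 \times 5.67\times 10^{-8}}\right)^{1/4} \approx 255\ \text{K}
\]

That is about 33 degrees centigrade colder than the observed average surface temperature of roughly 288 K; the difference is the natural greenhouse effect.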

But balance is the issue that is at the heart of climate change. If the concentrations of CO2 and other greenhouse gases grow too large, too much heat will be retained. The world within the atmospheric greenhouse will grow too hot, with the possibility of violent change in climate, which will drastically affect life on the planet. A rise of just two or three degrees in the average temperature, it is feared, is all that is required to wreak havoc.

The carbon levels are captured on graphs. They show a rising line, the elevated concentrations of carbon since the beginning of the Industrial Revolution. Most of the carbon in the atmosphere is the result of natural processes. But by burning fuels, humanity is generating an increasing proportion of carbon.

Humanity’s share is growing for two basic reasons. The first is population. The world’s population has almost tripled since 1950. The equation is very simple: more people use more energy—which leads to more carbon emissions. The second is rising incomes. World GDP has also tripled since 1950, and energy use rises as incomes rise. People whose parents were cold and bundled up with extra garments now have heat. People whose parents sweltered in muggy tropical climates now have air-conditioning. People whose grandparents rarely left their towns or villages now travel around the world. Goods that were not even imagined two generations ago are now manufactured in one part of the planet and transported over oceans and continents to customers all over the globe. In order to make all that possible, carbon that was buried underground millions of years ago is unearthed, embedded in fuels and brought up to the earth’s surface, and then released into the atmosphere by combustion.

There are other major sources of emissions. Large-scale deforestation—burning forests—releases carbon, while at the same time eliminating sinks (that is, the forests) that had served to capture and store carbon. Likewise, global poverty contributes to global warming, because poor people scrounge for biomass and burn it, sending black soot into the sky. The world’s herds of livestock release methane and nitrous oxide. Rice cultivation is another big source of methane. Yet CO2 is by far the most significant greenhouse gas by volume.

Scientists have taken to calling this release of CO2 the “experiment.” Once it was said in neutral tones—Tyndall’s “language of experiment”—and was shaped by curiosity, not by alarm. Now it is spoken in dire tones. For these scientists warn that mankind is experimenting with the atmosphere in a manner that could irrevocably change the climate in potentially apocalyptic ways—melting the ice caps, burying great swaths of the world’s populated coastlines under water, transforming fertile areas into dying deserts, obliterating species, unleashing violent storms that cause great human suffering—along with devastating economic repercussions so vast that no insurance premium could possibly be large enough.

Some scientists disagree. They say that the mechanisms are not obvious, that the climate has always changed, that most of the CO2 is released by natural processes, and that the rise of CO2 in the atmosphere may not be a cause of climate change but the result of other factors, such as solar turbulence or wobbles in the earth’s orbit. They are the minority.



WHY NOT TOO HOT OR TOO COLD?

The subject here is not weather, but rather climate. Weather is what happens day by day, the daily fluctuations reported each morning by the affable television weather anchors. Climate is something much bigger and more far-reaching. It is also much more abstract, not something that will be experienced on a daily basis, but something that unfolds over decades or even a century.

How is it that something so complex—and indeed so abstract, something that is inferred rather than touched—could come to so dominate the future of energy and how people live, and become one of the main issues in the politics among nations? That is the story that follows here.

It is striking to see how glaciers and their advance and retreat have been the constant, the leitmotiv, indeed, even central actors, in the study of climate change from the very beginning of the scientific investigations all the way up to the contemporary images of blocks of melting Antarctic ice tumbling into the sea. Today glaciers serve as Cassandras for climate. But they are also living history—time machines that enable us to be in the present and yet, at the same moment, go back 20,000 years into the past.

A series of related puzzles converged in the late eighteenth and nineteenth centuries to provide the intellectual origins of thinking on climate change. One was the determinants of the earth’s temperature. Why, to put it simply, was life possible on earth? That is, why did the planet not become burningly hot when the sun shone and then freezingly cold at night? Another was the suspicion—and the fear—that the current era of moderate temperatures had been preceded by something different and more extreme, something that haunted thinking about mankind’s past: what came to be known as the Ice Age.

These puzzles led to two arresting questions: What could have made the climate change? And could glaciers return, like some immense, fearsome primordial beasts, crushing everything in their paths, smashing and obliterating human civilization as they advanced?

The story begins in the Swiss Alps and their glaciers, more than half a century before John Tyndall first laid eyes upon them.



THE ALPINE “HOT BOX”

Horace Bénédict de Saussure was a scientist, a professor at the Academy of Geneva. He was also an Alpinist, a mountain climber and explorer who devoted his life to trying to understand the natural world in Switzerland’s high peaks. To describe his vocation in his classic work, Voyages dans les Alpes, he invented the word “geology.” Saussure was fascinated by heat and altitude, and built devices to measure temperatures at the tops of mountains and the bottoms of lakes.4

But a question troubled Saussure as he traipsed through the Swiss mountains. Why, he asked, did not all the earth’s heat escape into space at night? To try to find an answer, he built in the 1770s what became known as his “hot box”—a sort of miniature greenhouse. The sides and bottom were covered with darkened cork. The top was glass. As heat and light flowed into the box, it was trapped, and the temperature inside would rise. Perhaps, he mused, the atmosphere did the same thing as the glass. Perhaps the atmosphere was a lid over the earth’s surface, a giant greenhouse, letting the light in but retaining some of the heat, keeping the earth warm even when the sun had disappeared from the sky.

The French mathematician Joseph Fourier—a friend of Napoléon’s and a sometime governor of Egypt—was fascinated by the experiments of Saussure, whom he admiringly described as “the celebrated voyager.” Fourier, who devoted much research to heat flows, was convinced that Saussure was right. The atmosphere, Fourier thought, had to function as some sort of top or lid, retaining heat. Otherwise, the earth’s temperature at night would be well below freezing.

But how to prove it? In the 1820s Fourier set out to do the mathematics. But the work was daunting and extremely inexact, and his inability to work out the calculations left him deeply frustrated. “It is difficult to know up to what point the atmosphere influences the average temperature of the globe,” he lamented, for he could find “no regular mathematical theory” to explain it. With that, he figuratively threw up his hands, leaving the problem to others.5

Over the decades, a few other scientists, harking back to Saussure and Fourier, and especially to Saussure’s hot box, began to speak about a “hot-house,” or “greenhouse,” effect as a metaphor to describe how the atmosphere traps heat. But how exactly did it work? And why?



“GREAT SHEETS OF ICE”

The Swiss scientist Louis Agassiz was also obsessed with glaciers—indeed so obsessed that he put aside his research on fossils of extinct fish in order to probe the workings of glaciers. He even built a hut on the Aar glacier and moved into it so that he might more closely monitor the glacier’s movement.

In 1837, more than a decade before John Tyndall first caught sight of a glacier, Agassiz propounded a revolutionary, even shocking idea. There had once been something before the present age, he declared. That “before” was an ice age, when much of Europe must have been covered by massive glaciers, “great sheets of ice resembling those now in Greenland.” That was an age, he said, when a “Siberian Winter” gripped the world throughout the year, a time when “death enveloped all nature in a shroud.”

The ice, Agassiz maintained, came about due to a sudden, mysterious drop in temperature that was part of a cyclical pattern stretching back to the beginning of earth’s history. As the glaciers had retreated to the north, they had left behind in their wake the valleys and mountains and gorges and lakes and fjords and boulders and gravel that documented their movement.

Agassiz’s bold assertion was met with great skepticism. One colleague advised him, for his own good, to give up on glaciers and instead stick to his “beloved fossil fishes.”

Agassiz would not be swayed. His continuing research provided further evidence on the movement of glaciers, or what he called “God’s great plough.” He later migrated to the United States, where he became a professor at Harvard University. He organized an expedition to the Great Lakes that demonstrated that they had been sculpted into the earth’s surface by the advance and retreat of glaciers—yet more evidence of an ice age. By proving that the earth had lived through different ages in terms of temperature, Agassiz was the real inventor of the idea of climate.6



THE ATMOSPHERE: “AS A DAM BUILT ACROSS A RIVER”

John Tyndall built his own research on the work of these predecessors. His keen interest in the migration of glaciers across Europe led him to seek to understand whether and how the atmosphere could trap heat. If he could make sense of that, he could begin to understand how the climate could change, a process that was embodied in the glaciers that obsessed him.

To find the answer, Tyndall built a new machine in his basement laboratory in the Royal Institution on Albemarle Street in London. This was his spectrophotometer, a device that enabled him to measure whether gases could trap heat and light. If the gases were transparent, they would not trap heat, and he would have to find some other explanation. He first experimented with the most plentiful atmospheric gases, nitrogen and oxygen. To his disappointment, they were transparent, and the light passed right through them.

What else could he test? The answer was right there in his laboratory—coal gas—otherwise known as town gas. This was a carbon-bearing gas, primarily methane made by heating coal, that was pumped into his laboratory by the local London lighting company to be burned for illumination in the days before electric light. When Tyndall put the coal gas into the spectrophotometer, he found that the gas, though invisible to the eye, was opaque to the heat rays; it absorbed them. Here was his proof. The gas was trapping infrared radiation. He then tried water vapor and carbon dioxide. They too were opaque. That meant that they too trapped heat.

By this point, Tyndall was close to collapse from continual ten-hour days in the laboratory and from his inhalation of fumes—of “gases not natural even to the atmosphere of London.” But that did not matter. He was elated. “Experimented all day,” he wrote in his journal on May 18, 1859, adding joyously, “The subject is completely in my hands!” Just three weeks later, he delivered a public lecture at the Royal Institution—with Prince Albert, the Prince Consort of Queen Victoria, in the chair—demonstrating and explaining his discovery and its significance. There on Albemarle Street, just off Piccadilly, was “the first public, experimentally based account” of the greenhouse effect.7

“As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial (infrared) rays, produces a local heightening of the temperature at the Earth’s surface,” said Tyndall. “Without the atmosphere, you would assuredly destroy every plant capable of being destroyed by a freezing temperature.... The atmosphere admits of the entrance of the solar heat, but checks its exit; the result is a tendency to accumulate heat at the surface of the planet.”

What Tyndall had done in his basement laboratory was to provide the explanation for the greenhouse effect, for how climate worked, and for how, in his words, “every variation” of the constituents of the atmosphere “must produce a change of climate.” He gave particular credit to Saussure and Fourier. Here also was a confirmation for Louis Agassiz’s theory of the Ice Age. For variations in the balance of gases in the atmosphere “may have produced all the mutations of climate which the researches of geologists reveal.”

Tyndall went on to make other important contributions to science and gained great renown. Until late in life, he would also regularly return to Switzerland to take in the glaciers and climb the peaks. After a life as a mountaineer, undertaking many dangerous and daring mountain expeditions, including a number of near-fatal accidents, Tyndall died in 1893, at age 73, under more prosaic circumstances. His wife had accidentally administered an overdose of sleep nostrum to relieve his intolerable insomnia. As he slipped away, he murmured, “My poor darling, you have killed your John.”8



ARRHENIUS: THE GREAT BENEFIT OF A WARMING CLIMATE

The year after Tyndall’s death, in 1894, a Swedish chemist named Svante Arrhenius picked up the story. Arrhenius was curious as to what effects increasing or decreasing levels of carbon dioxide—or carbonic acid, as it was called at the time—would have on the climate. He too wanted to weigh in on the mechanisms of the ice ages, the advance and retreat of glaciers, and what he called “some points in geological climatology.”

Arrhenius’s own academic career was not smooth. He had difficulty getting his Ph.D. accepted at the University of Uppsala. But now, more established in Stockholm, he found his interest in carbon and the ice age stoked in a scientific seminar that met on Saturdays. Melancholic over his divorce and loss of custody of his son, and with much time on his hands, Arrhenius threw himself into month after month of tedious calculations, sometimes working 14 hours a day, proceeding latitude by latitude, trying by hand to calculate the effects of changes in carbon.

After a year, Arrhenius had the results. Invoking Tyndall and Fourier, he said, “A great deal has been written on the influence of the absorption of the atmosphere upon the climate.” His calculations showed that cutting atmospheric carbon in half would lower the world’s temperature by about four to five degrees centigrade. Additional work indicated that a doubling of carbon dioxide would increase temperatures by five to six degrees centigrade. Arrhenius did not have the benefit of supercomputers and advanced computation; he arrived at these predictions only after an enormous number of tedious hand calculations. Nonetheless, his results are in the range of contemporary models.9
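In modern notation, the relationship Arrhenius uncovered is usually summarized as a logarithmic law (the notation is today’s, not his):

    \Delta T = S \, \frac{\ln(C/C_0)}{\ln 2},

where C/C_0 is the ratio of the new to the old carbon dioxide concentration and S is the warming per doubling, for which his figures imply five to six degrees centigrade. Because the dependence is logarithmic, halving and doubling produce roughly symmetric cooling and warming, which is just the pattern his hand calculations yielded.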

Though he was arguably the first to predict global warming, Arrhenius was certainly not worried about the possibility. He thought it would take 3,000 years for CO2 to double in the atmosphere, and in any event that would be a good thing. He later mused that the increased CO2 concentrations would not only prevent a new ice age but would actively allow mankind to “enjoy ages with more equable and better climates,” especially in “the colder regions of the earth,” and that would “bring forth much more abundant crops than at present for the benefit of rapidly propagating mankind.” And that did not sound at all bad to a lonely Swedish chemist who knew all too well what it was like to live, year after year, through long, dark, cold winters.10

“My grandfather rang a bell, indeed, and people became extremely interested in it at that time,” said his grandson Gustaf Arrhenius, himself a distinguished chemist. “There was a great flurry of interest in it, but not because of the menace, but because it would be so great. He felt that it would be marvelous to have an improved climate in the ‘northern climes.’ And, in addition, the carbon dioxide would stimulate growth of crops—they would grow better. So he and the people at the time were only sad that in his calculations it would take [so long] to have the marked effect.”11

In time, however, attention drifted away from the subject of carbon and climate. Arrhenius himself turned to a number of other topics. In 1903 he was awarded the Nobel Prize in chemistry—not bad for someone whose Ph.D., which initiated the research for which he won the prize, was almost rejected.



In the decades that followed, the world became much more industrialized. Coal was king, both for electricity generation and for factories, which meant more “carbonic acid”—CO2—going into the air. But little attention was paid to climate.

In the Depression years of the early 1930s, drought struck the American Midwest. Poor cultivation techniques had left the topsoil loose and exposed, and winds swept it up into great dust storms, sometimes so intense as to block out the sun, leaving the land barren. The economic devastation drove hundreds of thousands of farm families to pack their belongings on their Model Ts, and, like the fictional Joad family in John Steinbeck’s The Grapes of Wrath, living in a “dust-blanketed land,” take to the roads and head to California as migrant refugees from the Dust Bowl.12

But those droughts were “weather,” not “climate.” No one talked about climate for decades. Or almost no one.



THE EFFECT OF GUY CALLENDAR: CALCULATING CARBON

In 1938 an amateur meteorologist stood up to deliver a paper to the Royal Meteorological Society in London. Guy Stewart Callendar was not a professional scientist, but rather a steam engineer. The paper he was about to present would restate Arrhenius’s argument with new documentation. Callendar began by admitting that the CO2 theory had had a “chequered history.” But not for him. He was obsessed with carbon dioxide and its impact on climate; he spent all his spare time collecting and analyzing data on weather patterns and carbon emissions. Amateur though he was, he had collected the data more systematically and fully than anyone else. His work bore out Arrhenius. The results seemed to show that CO2 was indeed increasing in the atmosphere and that this would lead to a change in the climate—more specifically, global warming.13

While Callendar found this obsessively interesting, he, like Arrhenius, was hardly worried. He too thought this would make for a better, more pleasant world—“beneficial to mankind”—providing, among other things, a boon for agriculture. And there was a great bonus. “The return of the deadly glaciers should be delayed indefinitely.”14

But Callendar was an amateur, and the professionals in attendance that night at the Royal Meteorological Society did not take him very seriously. After all, he was a steam engineer.

Yet what Callendar described—the role of CO2 in climate change—eventually became known as the Callendar Effect. “His claims rescued the idea of global warming from obscurity and thrust it into the marketplace of ideas,” wrote one historian. But it was only a temporary recovery. For a number of years thereafter the idea was roundly dismissed. In 1951 a prominent climatologist observed that the CO2 theory of climate change “was never widely accepted and was abandoned.” No one seemed to take it very seriously.15


22

THE AGE OF DISCOVERY

Quite late in his life, Roger Revelle ruminated on his career in science.

“I’m not a very good scientist,” he said. But then he added, “I’ve got a lot of imagination.” One of the things that had captured his imagination, and held it for many decades, was carbon dioxide. And that preoccupation would turn out to be profoundly important not only for the understanding of climate, but also for the future of energy.

Revelle was, however, more than a little self-deprecating. For he had made the remark in conjunction with being awarded the National Medal of Science, the country’s highest scientific honor, by President George H. W. Bush in 1990 in recognition of his far-reaching impact on science.

In addition to being a scientist, Revelle, a man of imposing stature and dominating personality, was also a naturalist, an explorer of the seas, an institution builder, and one of the inventors of the connection between basic research and government policy. He came equipped to his subjects with considerable curiosity abetted by what academic opponents derided as “impetuous enthusiasm and crusading spirit.”1

In presenting the award to Revelle, President George H. W. Bush singled out his “work in carbon dioxide and climate modification” as the first of his accomplishments, ahead of his other achievements in “oceanographic exploration presaging plate tectonics, the biological effects of radiation in the marine environment, and studies of human population growth and food supply.”

Revelle had launched his career with research expeditions into the unexplored deep waters of the Pacific. But, as it turned out, what he had set in motion in terms of research into carbon’s role in the atmosphere and man’s impact on that balance would also be of great—indeed, monumental—importance. And that grand scientific expedition, unfolding over decades, enlisting ever-greater computing power, traversing oceans and glaciers, mountaintops, the depths of the seas, and even outer space, is what put climate change and the heretofore unknown subject of global warming firmly on the political map.

Or, as Revelle put it, explaining the reasons he had received the National Medal of Science, “I got it for being the grandfather of the greenhouse effect.”2

Revelle started off to be a geologist, but a fear of heights made him shy away from climbing up the sides of mountains, and he turned instead to the study of the depths of the oceans. He was one of the people who transformed oceanography from a game for wealthy amateurs into a major science. During World War II he was the U.S. Navy’s chief oceanographer. After the war he was one of the leaders in creating the Office of Naval Research, which supported much of the basic postwar scientific research in American universities—funding almost anything “that could, by the most extreme stretch of the imagination, serve national defense interests.” The Office of Naval Research, with Revelle’s prodding, was also the progenitor for what became the National Science Foundation. Revelle transformed Scripps Institution of Oceanography in La Jolla, California, north of San Diego, from a small research outpost, with one boat, into a formidable research institution, armed with a flotilla of ships that continually pushed out the frontiers of oceanic knowledge. He also made it into a “top carbon-cycle research center in the U.S.”3

Revelle organized and led historic expeditions after World War II that sailed for months and months into the then-unknown waters of the Mid- and South Pacific, exploring some of the deepest waters in the world. He recalled those expeditions as “one of the greatest periods of exploration of the earth . . . Every time you went to sea, you made unexpected discoveries. It was revolutionary. Nothing that we expected was true. Everything we didn’t expect was true.” At the time, most geological textbooks said that the deep-sea floor was a “flat and featureless plain.” Instead Revelle and his fellow explorers found deep trenches in the sea floor and identified the huge, heretofore unknown deep-sea Mid-Pacific Mountain Range. These discoveries were critical to the now-dominant plate tectonics theory of the movement of the continents and the earth’s surface. Revelle was the driving force in the establishment of the University of California at San Diego. At the same time, he helped build the cultural life of San Diego. For, he asked, how could first-rate academics be attracted to a city whose “best-known cultural attraction” was a zoo? He went on to help shape the field of population studies and worked on economic development in the third world.

Amid all of this he also launched the modern study of climate change.

What first caught Revelle’s interest in CO2 was something that he had learned as an undergraduate at Pomona College—that the oceans contained 60 times more CO2 than the atmosphere. His 1936 Ph.D. argued that the ocean absorbed most of the CO2 that came from people burning fuel. Accordingly, human activity that released carbon would have very little effect, if any at all, on climate, because the ocean, as a giant sink, would capture most of it. That was the dominant view over the next several decades.4



“A LARGE-SCALE GEOPHYSICAL EXPERIMENT”

Over the years, Revelle had given some intermittent thought to the Callendar Effect—the argument made by Guy Callendar that increasing CO2 concentrations would raise the earth’s temperatures. His response, based upon his own research going back to his Ph.D., was that Callendar was probably wrong, that Callendar didn’t understand that the ocean would absorb CO2 from the atmosphere. But by the mid-1950s Revelle was beginning to change his mind. The reason emerged from his research on nuclear weapons tests in the Pacific.

After World War II, the Navy enlisted Revelle to help understand the oceanographic effects of those tests. Revelle’s assignment was to devise techniques to measure the waves and water pressure from the explosions. This would enable him to track radioactive diffusion through ocean currents. In the course of this work, Revelle’s team discovered “sharp, sudden” variations in water temperatures at different depths. This was the startling insight—the ocean worked differently from what they had thought. In Revelle’s words, the ocean was “a deck of cards.” Revelle concluded that “the ocean is stratified with a lid of warm water on the cold, and the mixing between them is limited.” That constrained the ability of the ocean to accept CO2.5 It was in this period, in the mid-1950s, that Revelle, collaborating with a colleague, Hans Suess, wrote an article that captured this insight and would turn out to be a landmark in climate thinking.

The title made clear what the article was all about: “Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase in Atmospheric CO2 During the Past Decades.” Their paper invoked both Arrhenius and Callendar. Yet the article itself reflected ambiguity. Part of it suggested that the oceans would absorb most of the carbon, just as Revelle’s Ph.D. had argued, meaning that there would be no global warming triggered by carbon. Yet another paragraph suggested the opposite: that while the ocean would absorb CO2, much of the absorption was only temporary, owing to the chemistry of seawater and the lack of interchange between warmer and cooler layers, and that the CO2 would seep back into the atmosphere. In other words, on a net basis, the ocean absorbed much less CO2 than expected. If not in the ocean, there was only one place for the carbon to go, and that was back into the atmosphere. That meant that the atmospheric concentration of CO2 was destined, inevitably, to rise. The latter assertion was a late addition by Revelle, literally typed on a different kind of paper and then taped onto the original manuscript.
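The seawater-chemistry constraint that Revelle taped into the manuscript was later formalized as what oceanographers now call the Revelle, or buffer, factor (the label and the numbers below are the standard modern statement, not language from the 1957 paper itself):

    R = \frac{\Delta p\mathrm{CO_2}/p\mathrm{CO_2}}{\Delta \mathrm{DIC}/\mathrm{DIC}} \approx 10,

meaning that a 10 percent rise in the partial pressure of CO2 over the ocean raises the water’s dissolved inorganic carbon (DIC) by only about 1 percent at equilibrium, a far smaller uptake than the ocean’s great carbon inventory alone would suggest.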

Before sending off the article, Revelle appended a further last-minute thought: The buildup of CO2 “may become significant during future decades if industrial fuel combustion continues to rise exponentially,” he wrote. “Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.” This last sentence would reverberate down through the years in ways that Revelle could not have imagined. Indeed, it would go on to achieve prophetic status—“quoted more than any other statement in the history of global warming.”6

Yet it was less a warning and more like a reflection. For Revelle was not worried. Like Svante Arrhenius who had tried 60 years earlier to quantify the effect of CO2 on the atmosphere, Revelle did not foresee that increased concentrations would be dangerous. Rather, it was a very interesting scientific question. “Roger wasn’t alarmed at all,” recalled one of his colleagues. “He liked great geophysical experiments. He thought that this would be a grand experiment . . . to study the effect on the ocean of the increase of carbon dioxide in the atmosphere and the mixing between the ocean reservoirs.” (Even a decade later, in 1966, Revelle was arguing that “our attitude” toward rising carbon dioxide in the atmosphere “brought about by our own actions should probably contain more curiosity than apprehension.”)7

At the time, Revelle was deeply involved in planning for an unprecedented global study of how the earth worked that might answer some of the climate questions. This was the IGY—the International Geophysical Year.8



THE UNEXPECTED IMPACT OF THE INTERNATIONAL GEOPHYSICAL YEAR

The International Geophysical Year (IGY) was born out of the idea of using the new technological capabilities stimulated in World War II and after—ranging from rockets and radar to the first computers—to explore heretofore inaccessible places where “metal loses its strength, rubber breaks, and diesel fluid becomes viscous like honey,” and thus generate much greater, deeper insight into how the earth worked and its interaction with the sun. It bloomed into a cross-disciplinary network of several thousand scientists from more than 70 countries. The earth’s processes—from its core and the seabed floor to the outer reaches of the atmosphere—would be mapped and measured in thousands of experiments coordinated on a global basis and conducted in a much more sophisticated and consistent way than ever before. Some of these experiments would involve Herculean physical feats of technology and endurance.9

The IGY was a sort of extended leap year, for it actually ran from July 1957 through December 1958, a period chosen to coincide with a fever point of solar activity. This global exploration brought forth an extraordinary body of new knowledge on everything from the flows of the deep waters of the oceans and the nature of the sea floor to the intense high-altitude radiation that girdles the earth. Glaciers constituted one of the major topics, continuing the fascination they held for scientists going back to Saussure and Tyndall.



“OKAY, LET’S GO”: THE STRATEGIC IMPORTANCE OF WEATHER

Then there was the weather. The IGY brought an unprecedented concentration of scientific talent to bear on better understanding weather. In addition to scientific curiosity there were also important strategic considerations. The Second World War had scarcely ended a decade earlier, and time and again during that conflict, weather had proved of decisive importance on the battlefield. In western Russia, winter’s icy grip—what Russians called General Winter—decimated the Nazi armies as they besieged Leningrad and assaulted Stalingrad.

But nothing had so forcefully underlined the strategic importance of better comprehension of the weather than D-Day, the invasion of Normandy in June 1944. The “Longest Day,” as it was called, had been preceded by the “longest hours”—hours and hours of soul-wrenching stress, uncertainty, and fear in the headquarters along the southern coast of England, as indecisive hourly briefings followed indecisive hourly briefings, with the “go/no go” decision held hostage to a single factor: the weather.

“The weather in this country is practically unpredictable,” the commander in chief Dwight Eisenhower had complained while anxiously waiting for the next briefing. The forecasts were for very bad weather. How could 175,000 men be put at risk in such dreadful circumstances? At best, the reliability of the weather forecasts went out no more than two days; the stormy weather over the English Channel reduced the reliability to 12 hours. So uncertain was the weather that at the last moment the invasion scheduled for June 5 was postponed, and ships that had already set sail were called back just in time before the Germans could detect them.

Finally, on the morning of June 5, the chief meteorologist said, “I’ll give you some good news.” The forecasts indicated that a brief break of sorts in the weather was at hand. Eisenhower sat silently for 30 or 40 seconds, in his mind balancing success against failure and the risk of making a bad decision. Finally, he stood up and gave the order, “Okay, let’s go.” With that was launched into the barely marginal weather of June 6, 1944, the greatest armada in the history of the world. Fortunately, the German weather forecasters did not see the break and assured the German commander, Erwin Rommel, that he did not have to worry about an invasion.10

A decade later, knowing better than anyone else the strategic importance of improved weather knowledge, Eisenhower, now president, gave the “let’s go” order for the International Geophysical Year.

The IGY was designed to deepen knowledge not only about weather but also climate. As Roger Revelle wrote, among the “main objectives of the International Geophysical Year” was to gain a deeper understanding of climate change—what had triggered the coming and retreat of the Ice Age, that “dark age of snow and ice”—and the ability to predict future climate change.

Researchers did indeed discover and confirm some of the planet’s most important regulatory cycles that affected climate, including the impact of ocean and air currents in transmitting heat. But other elements also shaped the climatic system, including, some suspected, greenhouse gases. One of the organizers speculated that the earth might be “approaching a man-made warm period, simply because we are belching carbon dioxide into the air from our factories at a present rate of several billion tons a year!”11



THE MEETING AT WOODS HOLE

Roger Revelle, who headed the oceanography panel for the IGY, wanted to make sure the impact of carbon dioxide was, in his words, “adequately documented in the course of the IGY.” With that in mind, Revelle sat down with three other scientists at the Woods Hole Oceanographic Institution, in Massachusetts, to plot out a global research agenda for part of the IGY. Gustaf Arrhenius, the grandson of the Swedish Nobel Prize winner Svante Arrhenius, remembered this discussion at Woods Hole as “an historic event when we got together.” They decided that one of the objectives of the International Geophysical Year should be to actually measure what Arrhenius’s grandfather had tried to calculate more than half a century earlier—the impact of CO2 on the atmosphere.12

But was it possible to get decent readings of CO2? Someone at that Woods Hole meeting had heard about “a promising young man,” a researcher at the California Institute of Technology who was working on measuring CO2. Perhaps they could get him to Scripps.



KEELING AND HIS CURVE

The one thing that Charles David Keeling did not want to study was economics. His father was an economist, he had grown up in a household in which economics was a constant topic, and he would go to great lengths to avoid studying economics. At the University of Illinois he dropped his chemistry major because it had an economics requirement and ended up majoring in liberal arts. Still, he managed to get himself into the Ph.D. program in chemistry at Northwestern. While laboring away on his chemistry he came across a book, Glacial Geology and the Pleistocene Epoch, that had a major impact on him. “I imagined climbing mountains while measuring the physical properties of glaciers,” he recalled. As with John Tyndall, glaciers captivated him, and he spent a summer hiking and climbing in the “glacier-decked” Cascade Mountains of Washington State. He ended up supplementing his chemistry work with geology.13

For his postdoctoral work, Keeling wanted to find a way to combine his love of chemistry and geology. A new geochemistry program at the California Institute of Technology provided the answer. He would focus on carbon. Using a device he designed, Keeling stationed himself atop one of the Caltech buildings and got busy measuring CO2 in the air. But local pollution made the readings highly erratic. Seeking purer air, Keeling decamped for the wild sea-swept beauty of Big Sur, along the central California coast. He loved being in the outdoors, he said, even if, in order to take measurements, “I had to get out of a sleeping bag several times a night.”14

But Big Sur did not work either; CO2 levels in forests fluctuated through daily cycles. For a true reading on carbon dioxide levels, he needed to measure the levels with a stable “atmospheric background.” For that he needed funding.

It was just about that time that Revelle reached out to Keeling and offered him a place at Scripps, along with research money. Revelle recognized that there was a certain risk but thought Keeling’s obsessiveness was a clear plus. “He wants, in his belly, to measure carbon dioxide, to measure it every possible way, and to understand everything there is to know about carbon dioxide,” Revelle was later to say. “But that’s all he’s interested in. He’s never been interested in anything else.”

Keeling got to work, devoting all his scientific energies, as he put it, to “the pursuit of the carbon dioxide molecule in all its ramifications.” At that time it was all in the name of science. “There was no sense of peril then,” recalled Keeling. “Just a keen interest in gaining knowledge.”15

The Weather Bureau provided Keeling with the “where”—its new meteorological observatory in Hawaii, 11,135 feet up, near the top of the volcanic peak Mauna Loa. Here was the pure air, untroubled either by urban pollution or the daily cycles of forest vegetation, that would provide the stable atmospheric background Keeling needed. Another of his measuring devices was dispatched to the Little America station in Antarctica.

The cumulative results from the station atop Mauna Loa would prove something startling. In 1938 Guy Callendar may have been pooh-poohed by the professional meteorologists when he delivered his paper in London. But Keeling would prove him right. There really was a Callendar Effect. For, over the years, Keeling’s pioneering research established a clear trend: Atmospheric CO2 levels were increasing. In 1959 the average concentration was 316 parts per million. By 1970 it had risen to 325 parts per million, and by 1990 it would reach 354 parts. Fitted on a graph, this rising line became known as the Keeling Curve. Based upon the trend that Keeling had identified, the carbon dioxide in the atmosphere would double around the middle of the twenty-first century. But what could increasing carbon mean for climate?
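A back-of-the-envelope extrapolation shows how a mid-century doubling follows from numbers like these (a sketch for illustration only; it assumes the excess above a preindustrial baseline of about 280 parts per million grows exponentially, and it does not reproduce Keeling’s own analysis):

    # Rough extrapolation from the figures quoted above (illustrative only,
    # not Keeling's method): assume the CO2 excess above the preindustrial
    # baseline grows exponentially, and ask when the total doubles.
    import math

    BASELINE = 280.0                       # assumed preindustrial level, ppm
    readings = {1959: 316.0, 1990: 354.0}  # Mauna Loa annual means cited above

    # exponential growth rate of the excess above the baseline
    (t0, c0), (t1, c1) = sorted(readings.items())
    rate = math.log((c1 - BASELINE) / (c0 - BASELINE)) / (t1 - t0)

    # year when the total reaches 2 x baseline, i.e., the excess hits 280 ppm
    years_ahead = math.log(BASELINE / (c1 - BASELINE)) / rate
    print(f"growth rate of the excess: {rate:.4f} per year")
    print(f"doubling of preindustrial CO2 around {t1 + years_ahead:.0f}")

Run as written, the sketch points to a doubling in the late 2040s, consistent with “around the middle of the twenty-first century.”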

The International Geophysical Year provided a kind of an answer, if at least by analogy. Until then the planet Venus had been the province of magazines like Astounding Science Fiction. But now scientists began to understand from the IGY study of Venus what the greenhouse effect could mean in its most extreme form. With higher concentrations of greenhouse gases in its atmosphere, the surface of Venus was hellishly hot, with temperatures as high as 870°F. Venus would eventually become a metaphor for climate change run amuck.16

Year after year, Keeling pursued his measurements, working doggedly with his small team, improving the accuracy, meticulous in details, building up the register of atmospheric carbon. Revelle was to look back on Keeling’s work as “one of the most beautiful and important sets of geochemical measurements ever made, a beautiful record.” At Scripps, Keeling was known for his obsessional interest in his subject. Once the chemist Gustaf Arrhenius was rushing his pregnant wife, who was going into labor, to the hospital. Keeling flagged the car down on the Scripps campus and launched into an intricate discussion of some challenge of carbon dioxide measurement. Finally, after his wife signaled that she was not going to be able to hang on much longer, Arrhenius interrupted. “I’m sorry,” he said. “We’re going to have a baby now.” He added, “In a few minutes.” At that point, Keeling finally realized what was going on and waved them off.17

Keeling’s work marked a great transition in climate science. Estimating carbon in the atmosphere was no longer a backward-looking matter aimed at explaining the mystery of the ice ages and the advance and retreat of glaciers in past millennia. It was instead becoming a subject about the future. By 1969 Keeling was confident enough to warn of risks from rising carbon. In 30 years, he said, “if present trends are any sign, mankind’s world, I judge, will be in greater immediate danger than it is today.”

As a result of Charles Keeling’s work on atmospheric carbon, the little-known Callendar Effect gave way to the highly influential Keeling Curve. Keeling’s work became the foundation for the modern debate over climate change and for the current drive to transform the energy system. Indeed, Keeling’s Curve became “the central icon of the greenhouse effect”—its likeness engraved into the wall of the National Academy of Sciences in Washington, D.C.18



“GLOBAL COOLING”: THE NEXT ICE AGE?

During these years concern was rising about climate change, but for a variety of reasons. Some in the national security community worried about climate change as a strategic threat: they feared the Soviet Union would alter the climate, either intentionally for military advantage or accidentally, as a result of diverting rivers or such “hare-brained” ideas as the proposal to dam the Bering Strait.19

The implications of Keeling’s work on carbon were beginning to seep into the policy community. A 1965 report on “environmental pollution” from President Lyndon Johnson’s Science Advisory Committee included a 22-page appendix written by, among others, Revelle and Keeling. It reiterated the argument that “by burning fossil fuels humanity is unwittingly conducting a vast geophysical experiment” that almost certainly would change temperatures.

[Charts: Keeling’s Curve, atmospheric CO2 levels measured at Mauna Loa Observatory; and prehistoric CO2 levels, from data in Antarctic ice cores. Source: NOAA Earth System Research Laboratory, Carbon Dioxide Information Analysis Center]


In 1969, picking up on this and other research, Nixon White House adviser (and later senator) Daniel Patrick Moynihan wrote a memo arguing that the new Nixon administration “really ought to get involved” with climate change as an issue. “This very clearly is a problem” and “one that can seize the imagination of persons normally indifferent to projects of apocalyptic change.” The research, he said, indicated that increasing CO2 in the atmosphere could raise the average temperature by seven degrees by 2000 and sea levels by ten feet. “Good-bye New York,” he said. “Good-bye Washington, for that matter.” He had one piece of good news, however: “We have no data on Seattle.”

Yet these early statements notwithstanding, at least as much of the discussion was about global cooling as about global warming. As the deputy director of the Office of Science and Technology wrote back to Moynihan, “The more I get into this, the more I find two classes of doom-sayers, with, of course, the silent majority in between. One group says we will turn into snow-tripping mastodons . . . and the other says we will have to grow gills to survive the increased ocean level due to the temperature rise from CO2.”20

Fears were growing that the glaciers would return, the same fears that had animated Louis Agassiz and other scientists a century earlier. Already, at the end of the 1950s, Betty Friedan—later famous for writing The Feminine Mystique—popularized these theories in an article on “The Coming Ice Age.” “If man finds no way to switch the glacial thermostat and avoid a new ice age,” she said, “there may well be a real estate boom in the Sahara.” By the early 1970s the CIA was investigating the geopolitical impact of global cooling, including the “megadeaths and social upheaval” that would ensue. In 1972 Science magazine reported that earth scientists meeting at Brown University had concluded that “the present cooling is especially demonstrable” and that “global cooling and related rapid changes of environment, substantially exceeding the fluctuations experienced by man in historical times must be expected.” Around the same time, a number of scientists who had participated in a Defense Department climate analysis wrote to President Nixon that the government needed to study the risk that a new glacial period was coming. Others warned that the increasing concentrations of aerosols in the atmosphere could be “sufficient to trigger an ice age.” The U.S. National Science Board reported a few years later that the last two or three decades had recorded a cooling trend. It was not a one-sided argument by any means, as is clear from the pages of Science. In 1975 one scientist blasted the “complacency” of those who focused on the falling temperatures “over the past several decades,” which was leading them to “discount the warming effect of the CO2 produced by the burning of chemical fuels.”21

The increasing interest in climate change meant that money was beginning to flow into climate study. The reason was clear. “The propelling concern for climate research,” as two students of the era have observed, “was the possibility of climate cooling, rather than climate warming.”22

The same concerns were reflected in public discussion. “The central fact is that after three quarters of a century of extraordinarily mild conditions, the earth’s climate seems to be cooling down,” wrote Newsweek in 1975. While meteorologists argued about the “causes” and “extent,” they were “almost unanimous” in seeing a cooling trend that could lead to another “little ice age,” as between 1600 and 1900, or even another “great Ice Age.” In 1976 National Geographic gave equal weighting to the question as to whether the earth was “cooling off” or warming “irreversibly.” The same year Time magazine was reporting, “Climatologists still disagree on whether earth’s long-range outlook is another Ice Age, which could bring mass starvation and fuel shortages, or a warming trend, which could melt the polar icecaps and flood coastal cities.”23

By the early 1980s, discussion about global cooling had taken a new form—the harsh “nuclear winter,” the extreme cooling that could be set off by a nuclear war between the United States and the Soviet Union. This would be the result of the vast smoke and dust clouds triggered by the atomic explosions, which would cut off sunlight and darken the earth, lead to “subfreezing temperatures” even in summer, and “pose a serious threat to human survivors.” The best-known proponent of the threat of nuclear winter was Carl Sagan, who as a young man had achieved fame among astronomers for identifying the extreme greenhouse atmosphere of Venus, and then went on to achieve much greater fame as host of the PBS television series Cosmos (and his much imitated refrain about “billions and billions of stars”).24

Notwithstanding the fear of nuclear winter, by the end of the 1970s and the early 1980s, a notable shift in the climate of climate change research was clear—from cooling to warming. Keeling’s Curve was beginning to flow into a larger realm of scientific research, ranging from direct observations in the air, on land, and on sea, to what would prove most crucial indeed: advances in modeling climate in computer simulations.



MODELING THE CLIMATE

Specifically, two technological advances were broadening the scientific base for understanding climate. One was satellites. The first U.S. weather satellite was launched in 1960, opening the doors not only to a much more holistic view of the earth but also to a much greater and continually growing flow of data. Initially this fueled work on a subject that gained some attention and government funding—“advertant” (that is, intentional) weather modification, aimed at such things as moderating storms and increasing rain in dry parts of the world. Already in 1961 President John F. Kennedy, addressing the United Nations, was calling for “cooperative efforts between all nations in weather prediction and eventually in weather control.” The topic of weather modification passed from the scene, but the contribution of satellites to vastly improved understanding of weather continued to grow.

The second advance was the invention of, and extraordinary development in, computing power, which in turn made possible the new discipline of climate modeling. The advent of the computer, in historical terms, owes much to a chance meeting on a railroad platform near the army’s Aberdeen Proving Ground in Maryland during World War II. A young mathematician caught sight of a world-famous figure—at least world famous in the worlds of science and mathematics. His name was John von Neumann. “With considerable temerity” the mathematician, Herman Goldstine, started a conversation. To Goldstine’s surprise, von Neumann, despite his towering reputation, was quite friendly. But when Goldstine told von Neumann that he was helping develop “an electronic computer capable of 333 multiplications per second,” the conversation abruptly changed “from one of relaxed good humor to one more like the oral examination for the doctor’s degree in mathematics.”25

John von Neumann—born János Neumann in Budapest—had emigrated to the United States in 1930 to become, along with Albert Einstein, one of the first faculty members at Princeton’s Institute for Advanced Study. Von Neumann would prove to be one of the most extraordinary and creative figures of the twentieth century, not only one of the century’s greatest mathematicians but also an outstanding physicist and, almost as a sideline, one of the most influential figures in modern economics (he invented game theory and is said to have “changed the very way economic analysis is done”). Not only that, he is often described as the “father of the computer” as well as the inventor of nuclear deterrence. (In 1956, near the end of his life, gathered around his bed in Walter Reed Hospital were the secretary of defense and his deputies, the secretaries of the army, navy, and air force, and all the joint chiefs of staff, all there for his “last words of advice and wisdom.”) He also fathered the modern mathematical analysis of climate modeling that became the basic tool for diagnosing global warming. He accomplished all this before he died in 1957, at the age of fifty-three.26

Von Neumann had an extraordinary ability to do complex calculations in his head at lightning speed. Once, as a six-year-old, he saw his mother staring off into space, daydreaming, and he asked her, “What are you calculating?” As an adult he let his subconscious work on mathematical problems in his sleep and woke up at 3:00 a.m. with the answer. At the same time, he had the ability to look at things in a wholly new manner. The mathematician Stanislaw Ulam emphasized how much analogies figured in von Neumann’s thought processes. One of his closest friends, Ulam would exchange both mathematical insights and intricate Yiddish jokes with him. Ulam would tease von Neumann for being too practical, for trying to apply mathematics to all sorts of problems. Once he told von Neumann, “When it comes to the application of mathematics to dentistry, maybe you’ll stop.”

The economist Paul Samuelson said von Neumann had “the fastest mind” he had ever encountered. The head of Britain’s National Physical Laboratory called him “the cleverest man in the world.” A peer summed up what many who worked with him thought: “Unquestionably the nearest thing to a genius I have ever encountered.”27

That chance meeting on the Aberdeen railroad platform in August 1944 would propel von Neumann to become the “father of computing.” Until then, computers were not machines but a job classification: “computers” were people who did the tiresome but essential calculations needed for surveying or for calculating the tides or the movements of heavenly bodies. But von Neumann had been questing after something like a mechanical computer in order to handle the immense computational challenge he and his colleagues had faced while working on the atomic bomb during World War II. At the secret Los Alamos laboratory, as they struggled to figure out how to transform the theoretical concept of a chain reaction into a fearsome weapon, they had “invented modern mathematical modeling.” But they needed the machines to make it practical.28

Immediately after the encounter on that station platform, von Neumann used his authority as a top-flight scientific adviser to the war effort to jump into this nascent and obscure computer project and promote its development. By June 1945 he had written a 101-page paper that became “the technological basis for the worldwide computer industry.” He started designing and building a new prototype computer in Princeton at the Institute for Advanced Study.

But to what should this new tool be applied? Von Neumann identified “the first great scientific subject” for which he wanted to use this newly discovered computer power: “the phenomena of turbulence,” or, put more simply, forecasting the weather. He recognized the similarities between simulating atomic explosions and making weather predictions; both were nonlinear problems in fluid dynamics that needed vast amounts of computation at breakneck speed.29

The complexity of the weather cried out for the rigorous mathematical analysis that von Neumann loved and that only the computer made possible. The strategic significance made it urgent. The intellectual challenge appealed to him. He feared that the Soviets might add weather modification to their arsenal and wage “climatological warfare” against the United States. He himself gave some favorable thought to using better knowledge of the weather to “jiggle the earth,” as he put it—that is, modify the weather and create a warmer semitropical climate around the world. Frankly, he thought, people would like that.

In seeking navy funding for computing and climate studies, he argued that high-speed computing “would make weather predictions a week or more ahead practical.” He thereafter supervised the building of MANIAC—for Mathematical Analyzer, Numerical Integrator and Computer. The New York Times would call it a “giant electronic brain.”30

By 1948 the Numerical Meteorology Project was up and running. A new recruit, Jule Charney, a mathematician and meteorologist, took the lead in figuring out the mathematical formulas to conjoin climate modeling with the advances in computing. What they were trying to do was express the physical laws governing the dynamics of heat and moisture in the atmosphere in a series of mathematical algorithms that could be solved by a computer as they unfolded over time. By the early 1950s Charney and the group were producing their first computer simulations of climate. By the 1960s the Princeton initiative had morphed into the GFDL—Geophysical Fluid Dynamics Laboratory, now part of the National Oceanic and Atmospheric Administration—which became one of the leaders in developing climate-change models.31
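A toy calculation conveys the flavor of what such an algorithm does (a deliberately minimal sketch; the actual models solved far richer equations for heat and moisture over a grid spanning the atmosphere): march a one-dimensional transport equation forward in time by finite differences, respecting the stability limit on the time step that constrains all such schemes.

    # Toy numerical integration in the spirit of the early weather models
    # (illustration only): advect a disturbance along one dimension,
    # du/dt = -c * du/dx, with an upwind finite-difference scheme.
    import math

    N, dx, c = 100, 1.0, 1.0   # grid points, grid spacing, wind speed
    dt = 0.5 * dx / c          # time step obeying the stability limit c*dt/dx <= 1

    # initial disturbance: a smooth "blob" centered on grid point 20
    u = [math.exp(-((i - 20) ** 2) / 50.0) for i in range(N)]

    for step in range(100):    # march the whole state forward in time
        # upwind differences; u[i - 1] wraps around at i = 0 (periodic boundary)
        u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(N)]

    peak = max(range(N), key=lambda i: u[i])
    print(f"after 100 steps the disturbance sits near x = {peak}")

After one hundred half-unit steps the disturbance has been carried fifty grid points downwind; scaled up to the real atmosphere, with real physics, this forward-marching arithmetic is exactly what kept overwhelming the feeble machines of the era.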

Von Neumann’s quest to understand stratospheric circulation and atmospheric turbulence was giving rise to increasingly sophisticated simulations of how the global atmosphere worked—the patterns and flows by which the air moved around the world. These became known as general circulation models. They had to be global because the earth had only one atmosphere. The modelers were constantly striving to make their models more and more realistic, which meant more and more complex, in order to better understand how the world worked.

Climate modeling was very difficult, taxing, and definitely pioneering. “The computer was so feeble at the time,” recalled Syukuro Manabe, recruited to the GFDL from the meteorology faculty at Tokyo University and one of the most formidable of all the climate modelers. “If we put everything into the model at once, the computer couldn’t handle it. I was there and was watching the model blow up all the time.”

But already in 1967 Syukuro Manabe and Richard Wetherald, members of the Princeton lab, were hypothesizing, in what became a famous paper, that a doubling of CO2 would increase global temperatures by three to four degrees. They backed into the subject by accident. “I wanted to see how sensitive the model is to cloudiness, water vapor, ozone, and to CO2,” said Manabe. “So I was changing greenhouse gases, clouds . . . playing and enjoying myself. I realized that CO2 is important, as it turned out, I changed the right variable and hit the jackpot,” he continued. “At that time, no one cared about global warming... Some people thought maybe an ice age is coming.”

Notwithstanding his conviction that “probably this is the best paper I wrote in my whole career,” Manabe led further breakthroughs on modeling in the mid-1970s. Over the years data from satellites provided a benchmark against which to test the accuracy of the ever-more-complex models. And yet that 1967 hypothesis—that a doubling of CO2 would bring a three-to-four-degree increase in the average global temperature—would become a constant in the debate over global warming. And a fuse.32



“BOY, IF THIS IS TRUE”: THE RISE OF CLIMATE ACTIVISM

The widening body of global-warming research started to connect with what would turn out to be the first generation of climate activists. For them, the focus was not scientific experiment but political action.

In 1973, on the Old Campus at Yale University, botanist George Woodwell delivered a global warming lecture. One of the people in the audience was an undergraduate named Fred Krupp. “Boy, if this is true,” Krupp remembers saying to himself, “we’re in a lot of trouble.” Krupp would become the president of the Environmental Defense Fund eleven years later, at age 30, and from there one of the foremost policy proponents for reducing carbon emissions.33

A few years later, in 1978, in Washington, D.C., Rafe Pomerance, president of the environmental group Friends of the Earth, was reading an environmental study when one sentence caught his eye: increasing coal use could warm the earth. “This can’t be true,” Pomerance thought. He started researching the subject, and he soon caught up with a scientist named Gordon MacDonald, who had been a member of Richard Nixon’s Council on Environmental Quality. After a two-hour discussion with MacDonald, Pomerance said, “If I set up briefings around town, will you do them?” MacDonald agreed, and they started making the rounds in Washington, D.C.

The president of the National Academy of Sciences, impressed by the briefing, set up a special task force under Jule Charney. Charney had moved from Princeton to MIT where, arguably, he had become America’s most prominent meteorologist. Issuing its report in 1979, the Charney Committee declared that the risk was very real. A few other influential studies came to similar conclusions, including one by the JASON committee, a panel of leading physicists and other scientists that advised the Department of Defense and other government agencies. It concluded that there was “incontrovertible evidence that the atmosphere is indeed changing and that we ourselves contribute to that change.” The scientists added that the ocean, “the great and ponderous flywheel of the global climate system,” was likely to slow observable climate change. The “JASONs,” as they were sometimes called, said that “a wait-and-see policy may mean waiting until it is too late.”34

The campaign “around town” led to heavily attended Senate hearings in April 1980. The star of the hearing was Keeling’s Curve. After looking at a map presented by one witness that showed the East Coast of the United States inundated by rising sea waters, the committee chair, Senator Paul Tsongas from Massachusetts, commented with rising irony: “It means good-bye Miami, Corpus Christi . . . good-bye Boston, good-bye New Orleans, good-bye Charleston. . . . On the bright side, it means we can enjoy boating at the foot of the Capitol and fishing on the South Lawn.”35

One of the recipients of the MacDonald-Pomerance briefings was Gus Speth, chairman of the U.S. Council on Environmental Quality. Speth asked for a report short enough for policymakers. The authors were those at the forefront of global-warming study—Charles Keeling, Roger Revelle, George Woodwell, and Gordon MacDonald. They warned of “significant warming of world climates over the next decades unless mitigating steps are taken immediately.” In contrast to Arrhenius and Callendar, who had seen virtue in a warm climate, they were emphatic: “There appear to be very few clear advantages for man in such short-term alterations in climate.” They offered a four-point program: acknowledgment of the problem, energy conservation, reforestation—and lower-carbon fuels. That last meant using natural gas instead of coal.36

Speth took the report to the White House and the Department of Energy. The reception was frosty. For at that moment the Carter administration—reeling from the second oil shock, the Iranian Revolution, and natural gas shortages—was restricting natural gas use and promoting more coal.

Speth did not give up. He made the issue central to the 1981 annual report from the Council on Environmental Quality. But that was the end of the road, at least for the time being. For Jimmy Carter had already been defeated by Ronald Reagan in November 1980.37 But some environmental groups were beginning to take up climate as a core issue.

Under the Reagan administration, government money for climate research was reduced. No one knew this better than Charles Keeling. Though his funding was often precarious, the integrity of the carbon-monitoring project at Mauna Loa in Hawaii was preserved. Overall, though constrained, scientific research on climate did continue.

A key breakthrough in the science of climate change occurred in the 1980s with the recovery of ice cores, extracted from deep under the earth’s surface both in Greenland and at Vostok, the Russian research station in Antarctica that was so remote it could be resupplied only once a year. These ice cores were truly time machines. They provided crucial evidence for the theory of climate change. For the tiny air bubbles trapped in these cores preserved the atmosphere as it had been thousands of years ago, and could be dated through radiocarbon analysis. Painstaking study seemed to make one thing very clear: that carbon concentrations had been lower in the preindustrial age—275 to 280 parts per million compared with 325 parts in 1970 and 354 parts in 1990.38



REVELLE’S EXILE

When the new campus of the University of California was established in San Diego, Roger Revelle, the head of the Scripps Institution of Oceanography and the mentor of Charles Keeling, seemed the inevitable choice to be its first chancellor. He had been the new campus’s leading champion, and his heart was set on the chancellorship. But Revelle had powerful enemies, one of whom, an influential regent of the university, blocked his appointment. It was probably the biggest disappointment of Revelle’s professional career. He did not want to stay around and instead decided to go into what one of his friends called “exile.”

This particular exile was hardly unpleasant, for he took up a professorship at Harvard, teaching a popular course—Natural Sciences 118: Human Populations and Natural Resources, otherwise known as Pops and Rocks.39

“By bringing fossil fuels to the surface and burning them, human beings are simply returning the carbon and oxygen to their original state,” he told students in the autumn of 1968. “Within a few short generations we are consuming materials that were formed and concentrated over geologic eras. There was probably never more CO2 in the air at any time in the past billion years than today.” Burning of fossil fuels over the next few generations, he said, would add vast amounts of additional CO2 to the atmosphere. The results would likely be increases in temperature and “significant effect on the earth’s climate.”

Yet Revelle, thinking about the overall system, also spoke about what he called the “complicating factors”—the possible offsets. Higher temperatures, for instance, would increase evaporation of water, and thus increase cloudiness, “which in turn will reduce the amount of incoming solar energy, and tend to lower the temperature.”

His conclusion was similar to that of his 1957 paper: “We can think of the increase of atmospheric carbon dioxide as a gigantic, unintentional experiment being conducted by human beings all over the world, that may give us greater insight into the processes determining climate.”40

Revelle was a compelling teacher who presented a distinctly global view of environmental issues. Among those in Pops and Rocks was a student named Albert Gore Jr., the son of Senator Albert Gore of Tennessee. If Revelle’s influence on Keeling and on research into carbon concentrations was decisive for the science of climate, then his lectures to the class that included Al Gore would have a profound effect on the politics of climate. “A great teacher of mine at Harvard, Dr. Roger Revelle, opened my eyes to the problem of global warming,” Gore wrote much later. “The implications of his words were startling . . . Like all great teachers, he influenced the rest of my life.”

That was in the late 1960s. Two decades later, in the late 1980s, Gore and others in Congress were determined to make climate change into a political issue. As he and seven other senators put it in a letter in 1986, research on the impact of CO2 on climate change had left them “deeply disturbed.” They wanted not only more research. They wanted to see some true action.41


23

THE ROAD TO RIO

That particular day—June 23, 1988—was very much a Washington summer day, for it was not only hot, very hot—with the temperature getting up over 100 degrees—but also muggy, almost unbearably so. Moreover, it followed months of high temperatures, and half the counties in the United States were officially suffering from drought. “For the Midwest,” it was reported, “drought has become a way of life.” All this meant that the media would be intensely interested in anything to do with weather. In short, June 23 was a perfect day for a Senate hearing on global warming.

The hearings that ensued would mark the emergence of climate change as a political issue. The chairman that day was Senator Tim Wirth of Colorado. Half a year earlier, in January 1988, Wirth had ruminated with his aides about finding a very warm day for a climate-change hearing. What would likely be the hottest day of the year, he had asked. One of them had calculated that late June was a good bet. (To double-check, the aide had called an economist at Harvard, who, somewhat startled, said that he had no expertise on that subject, but, thinking quickly, helpfully recommended that the aide consult the Farmer’s Almanac.)1

Ever since, there has been a legend that the windows were left open the night before and the air-conditioning was turned off, to make certain that the hearing room would be sweltering. Wirth himself did later refer to some artful “stagecraft.” As it turned out, the room was sweltering, and sweat would glisten on the foreheads of the witnesses. Ensuring that the room would be very hot were the lights that went with two solid banks of television cameras. “Having a hearing is educational,” Wirth would say, quoting a political proverb. “Having a hearing with a television camera is useful; having a hearing with two rows of television cameras is heaven.” For the ethereal issue of climate change, that day counted as heavenly.2

“The scientific evidence is compelling,” said Wirth, as he opened the hearings. “Now the Congress must begin to consider how we are going to slow or halt that trend.” The lineup of witnesses featured some of the strongest voices on climate change. But the most dramatic message came from the leadoff witness. Climate change was no longer an “academic” issue, said James Hansen, an atmospheric physicist and director of NASA’s Goddard Institute for Space Studies in New York City. A leading climate modeler, Hansen had already become prominent as one of the most apocalyptic in his predictions. And now, wiping the sweat from his forehead in the sweltering room made even hotter by the television lights, Hansen told the senators, the long-awaited “signal” on climate change was now here. Temperatures were indeed rising, just as his computer models had predicted. “We can ascribe with a high degree of confidence a cause-and-effect relationship between the greenhouse effect and observed warming,” he said. Afterward he summarized his testimony to the New York Times more simply: “It is time to stop waffling.” The story about his testimony and the hearing ran on the Times’ front page.3

As another witness, Syukuro Manabe, one of the fathers of climate modeling, recalled, “They weren’t too impressed by this Japanese guy who had this accent; whereas Jim Hansen made a bombshell impression.”

The hearing “became a huge event,” said Wirth. “A lot of people had never seen anything like this before. It got an inordinate amount of attention for a Senate hearing.” One scientist summed up the impact this way: “I’ve never seen an environmental issue move so quickly, shifting from science to the policy realm almost overnight.”4


Wirth’s hearings demonstrated a growing interaction between scientists and policymakers. It was accompanied by rapidly expanding cross-border research and network building on atmospheric subjects among scientists around the world. Roger Revelle, who had been there from the beginning of the modern effort, looked at the change with a certain wry amusement. “During the last ten years the literature on the greenhouse effect has proliferated beyond belief,” he noted in 1988. “What started out as a cottage industry with David Keeling as the principal worker has now become a major operation, with a cast of thousands.”5

The emergence of a global scientific network on climate change had already become clearly evident in 1985, three years before Wirth’s hearings, when a group of scientists met at Villach, in the Austrian Alps. Convinced by the range of evidence, from supercomputer models to what had been learned about the lower carbon levels in the ice ages, they thought that climate change was neither far off nor likely to be beneficent. They also concluded that “understanding of the greenhouse question is sufficiently developed that scientists and policymakers should begin an active collaboration.” Their five-hundred-page report called for an international agreement to control carbon.6



THE HOLE IN THE OZONE: THE ROLE MODEL

In 1987 a conference convened in Montreal that was also aimed at an atmospheric threat. Out of it came a new international agreement that would have seemed unachievable only a few years earlier. It provided a powerful precedent for environmental collaboration on a global scale.

Greenhouse gases include not only carbon dioxide but also methane and nitrous oxide, as well as a group of man-made gases called chlorofluorocarbons (CFCs) that were first developed in the late 1920s. Though present in much smaller concentrations in the atmosphere, chlorofluorocarbons are potent in trapping heat; indeed, it was estimated, ten thousand times more potent, molecule for molecule, than CO2. The use of CFCs had multiplied over the years, from propellants in aerosol cans to coolants in refrigerators.

In 1985 researchers from the British Antarctic Survey, using satellite data from NASA, saw something that stunned them: a “hole” was opening up in the ozone over Antarctica. The chlorofluorocarbons were eating away at the ozone, thinning out and depleting that layer of the atmosphere.

The threat was immediate. Ozone absorbed what would otherwise be deadly concentrations of ultraviolet radiation. The loss of ozone threatened massive epidemics of skin cancer around the world as well as devastating effects on animal and plant life on earth. Such was the fear that in record time—by 1987—some twenty-four countries signed on to the Montreal Protocol, which would restrict chlorofluorocarbons.

The Montreal Protocol had a direct impact on the climate-change movement. It acknowledged that increasing concentrations of greenhouse gases were dangerous. It dramatically underlined the acceptance of the notion that human activity imposes damage on the earth’s atmosphere. And it demonstrated that countries could come together quickly and agree to eliminate a common environmental threat. To climate activists, all of that seemed to be a dress rehearsal for what should happen with global warming. There was one striking difference, however. The relevant universe was so much smaller. Fewer than forty companies manufactured chlorofluorocarbons, and just two had half the market. But the whole world burned fossil fuels. Nevertheless, global warming, with all its complexity, was by the summer of 1988 entering the political arena. And a Montreal Protocol approach looked like the most likely template.7



JAMES HANSEN’S “VENUS SYNDROME”

Those hearings on that hot day in June 1988 turned James Hansen into a scientific celebrity and a figure who would have much impact on the climate debate thereafter.

To many in the political arena and the public, Hansen became the voice of science on climate, which created discomfort for other climate scientists who thought that he was too categoric. Science, the magazine of the American Association for the Advancement of Science, summed up the issue in an article titled “Hansen vs. the World on the Greenhouse Threat” by reporting “what bothers . . . his colleagues” is that he “fails to hedge his conclusions with appropriate qualifiers that reflect the imprecise science of climate modeling.”8

A few weeks after the hearing, Senator Tim Wirth wrote to Roger Revelle soliciting his views. The message he got back was quite different from what he had heard from Hansen and others in the hearing room. Indeed, it was a word of caution. “We must be careful not to arouse too much alarm until the rate and amount of warming becomes clearer,” said Revelle. “It is not yet obvious that this summer’s hot weather and drought are the result of a global climatic change or simply an example of the uncertainties of climate variability.” Revelle added, “My own view is that we had better wait another ten years before making confident predictions.” Revelle wrote to another congressman that it might actually be twenty years before humans understood the negative and positive implications of the greenhouse effect. He believed humans should “take whatever actions would be desirable whether or not the greenhouse effect materializes.” His list included a much larger role for nuclear power and a major program to expand forests, because the trees would capture and sequester what would otherwise be additional carbon in the air. “It is possible,” he said in his letter to Wirth, “that such expansion could reduce carbon dioxide emissions very drastically, to a quite safe level.”9

Hansen and Revelle came to the subject from different backgrounds and perspectives. Revelle started as a geologist, but Hansen had found his way into climate studies via an interplanetary route through outer space. Hansen had written his physics Ph.D. on the atmosphere of Venus and was working on a Venus orbiter space shot in 1976 when a postgraduate student asked for his help in calculating the atmospheric effects of some of the greenhouse gases. “I was captivated by this greenhouse problem,” Hansen later explained. He shifted his research to the earth’s atmosphere and to modeling it, while continuing his work on the other planets in the solar system.

Decades of science fiction writers had imagined life on earth’s nearest neighbors. But telescopic observation and unmanned space vehicles had established that the atmospheres of Mars and Venus made life in any form that humans would recognize most unlikely. Mars, with a very thin atmosphere, was freezing cold. Venus, with an atmosphere super rich in CO2, was hellishly hot—almost 900°F on the surface. This space research informed understanding of the earth’s climate. “Clearly a great deal stands to be gained by simultaneous studies of the earth’s climate and the climate on other planets,” Hansen and colleagues had written in 1978. Indeed, he was to say decades later, the differences in Mars’s and Venus’s atmospheres “provided the best proof at the time of the reality of the greenhouse effect.” Venus came to play an even more direct role. Because of its CO2-saturated atmosphere and searingly hot surface temperatures, it became the metaphor for an irreversible “runaway greenhouse effect,” what Hansen would dub the “Venus Syndrome.” It would prove to be a metaphor of great—and persuasive—power.10



THE HOT SUMMER OF 1988 AND THE “WHITE HOUSE EFFECT”

Just a few days after the Wirth hearings, the World Conference on a Changing Atmosphere convened in Toronto. It was the first time that large numbers of scientists, policymakers, politicians, and activists had gotten together to discuss climate change, and they did so with great urgency and sense of mission. The conference called for the world community to adopt coordinated policies to dramatically reduce CO2 emissions.11

As with the Wirth hearings, the hot weather brought the Toronto conference much greater attention than it would otherwise have received.

Although climate change was a longer-term phenomenon, the signal that James Hansen had identified seemed to reverberate over the rest of the summer of 1988 in an almost Biblical unfolding of weather-related plagues: intense heat waves, widespread droughts, impaired harvests, blazing forest fires in the West, navigation troubles on rivers as water levels fell. The electricity supply was balanced precariously, straining to meet the surging demand for air-conditioning.

All of this contributed to an increasingly pervasive anxiety that the environment was degrading.

That anxiety was captured in Boston Harbor on the first day of September. The Democratic governor of Massachusetts, Michael Dukakis, was well ahead in the polls against Vice President George H. W. Bush in the 1988 race to succeed Ronald Reagan. Dukakis was campaigning as an environmentalist, and Bush wanted to take him on in his home territory and on his core issues. So Bush boarded an excursion boat to cruise around Boston Harbor. Accompanied by a gaggle of reporters and cameras, he delighted in pointing out the vast amount of garbage floating in the harbor, which he attributed to the lapses of Dukakis’s governorship. (Dukakis would reply that the garbage was the fault of the Reagan administration for foot-dragging on promised cleanup funds.) Presenting himself as a “Teddy Roosevelt Republican,” Bush promised to be an Environmental President. Among his pledges was the noteworthy statement that “those who think we are powerless to do anything about the ‘greenhouse effect’ are forgetting about ‘the White House effect.’” And he added, “I intend to do something about it.” For the first time, a potential president had made greenhouse gases and climate change a campaign issue—and he had promised international collaboration to address it.12

The heat was headline news. But then heat waves and droughts had always been news. Time magazine, August 1923: “Another heat wave has struck Europe. So hot has it been in the Alps that the great glaciers have been melting and causing avalanches.” Time, June 1934: “Down upon a third of the U.S. poured a blistering sun . . . broiling, baking, burning . . . Not only was the Midwest as hot as the hinges of Hell. It was also tinder dry.” Time, June 1939: “It was so hot” in London “that ten extra waiters were engaged to serve cooling drinks to perspiring legislators in the House of Commons terrace restaurant . . . The asphalt on Berlin’s Via Triumphalis was so soft that no tanks or cars with caterpillar treads were allowed on the avenue.” Time, August 1955: “In the Eastern U.S., the dreadful summer of 1955 will be remembered for a long time to come . . . the region was withered by drought and a heat wave, the worst on record.”13

But now, from the late 1980s onward, when people wrote about heat waves and droughts, it was not only about their severity and the disruptions and distress they caused, but also about links to carbon dioxide and climate change, and as alarm bells for global warming. In the months that followed, major stories on global warming ran in Time and Newsweek, the major business magazines, and even in Sports Illustrated, whose story was headlined “A Climate for Death.” Global warming had at last found a place in the national consciousness.

Yet as the hot summer of 1988 faded, so did the sense of urgency. Just a couple of days after Bush’s harbor cruise, a science writer at the New York Times sought to sum up the season. James Hansen’s “signal,” the writer concluded, was not so crystal clear as it might have sounded in the hearing room on June 23. The heat-wave summer of 1988 had turned out to be not the hottest but only the eleventh hottest in the 58 years that records had been kept. The worst drought had come not in 1988 but in the Dust Bowl days of 1934, when the upper Midwest was dubbed “the new U.S. Sahara.” The reporter quoted a climate scientist who said, “In the short term, I don’t see any major climate shift in the offing, and I don’t feel we should be packing our bags to move to Manitoba just yet.” When climate change was raised that same month at the U.N. General Assembly, one delegate said that it “still seemed like science fiction to many people.”14



MRS. THATCHER

But one more important, and perhaps surprising, voice on climate change was still to be heard that September. It was that of the first leader of a major industrial nation to deliver a policy address focused on the subject—Britain’s Conservative prime minister Margaret Thatcher. She was quite taken with the subject, for she was a scientist as well as a politician. With an Oxford degree in chemistry, she had worked for a few years as a research chemist for the J. Lyons food company until deciding that she was more interested in the art of politics than in the molecular workings of glyceride monolayers—otherwise known as cake frostings. But her scientific training provided a framework for her to grasp quickly the issues surrounding climate change.

There was also a political element. A few years earlier she had been locked in a battle to the death with the left-wing coal miners’ union, which had sought to cut off the delivery of coal, thus disrupting the nation’s electricity supply and shutting down the country. That struggle was one of the defining moments of her eleven and a half years as prime minister, and her victory broke the stalemate in industrial relations that had been driving Britain into chronic paralysis and economic decline. Replacing coal in electric generation with less-carbon-intensive natural gas from the North Sea would ensure that the coal miners’ union would never again be strong enough to put a hammerlock on the nation’s energy supply and bring its economy to a standstill.15

On September 27, 1988, Thatcher delivered an address to the Royal Society in Fishmongers’ Hall in London in which climate change figured large. Thatcher had assumed that her speech, sounding the tocsin about climate change, would generate much attention. In practical terms, she had counted on that interest to ensure the presence of a bevy of television cameras, so that their bright lights could provide the illumination she needed to read her speech amid the pervasive gloom of the Fishmongers’ Hall. But, to her disappointment, there was little media interest and, to her horror, no television cameras—not a single one. In fact, it was so dark that she was unable to read her speech at all—until, finally, a candelabra was passed up the table.

“For generations, we have assumed that the efforts of mankind would leave the fundamental equilibrium of the world’s systems and atmosphere stable,” she said when finally able to begin her speech. “But it is possible that with all these enormous changes (population, agriculture, use of fossil fuels) concentrated into such a short period of time, we have unwittingly begun a massive experiment with the systems of this planet itself.” Although one could not yet be certain, she warned, “we have no laboratory in which to carry out controlled experiments.” As not enough was yet known to make decisions, intensive programs of research and a good deal of “good science” were needed. As good as her word, she upped the British government’s spending on climate research.

But the absence of television cameras certainly indicated that climate change was not yet an issue that would light up the public’s imagination.16



THE IPCC AND THE “INDISPENSABLE MAN”

But before the year was out, and far from the glare of public attention, the decisive step would be taken that would frame how the world sees climate change today. In November 1988 a group of scientists met in Geneva to inaugurate the IPCC, the Intergovernmental Panel on Climate Change. This launch might have been lost in the alphabet soup of international agencies, conferences, and programs, but over the course of the next two decades, it would rise out of obscurity to shape the international discourse on this issue. The IPCC drew its legitimacy from two international organizations, the World Meteorological Organization and the United Nations Environment Programme. But the IPCC itself was not an organization in any familiar sense. Rather it was a self-regulating, self-governing organism, a coordinated network of research scientists who worked across borders, facilitated by cheaper and better communications.

There was certainly a “coordinator in chief”—a Swedish meteorologist named Bert Bolin. If one man was at the center of the growing international climate work, and would be there for almost half a century, year in and year out, it was Bolin—the “indispensable man” of climate research. Bolin was convener, keynoter, conference chair, editor, writer, adjudicator, balancer, scientific statesman.