April 2, 2011

Regulator Says Radioactive Water Leaking Into Ocean From Japanese Nuclear Plant

By KEN BELSON and HIROKO TABUCHI

TOKYO — Highly radioactive water is leaking directly into the sea from a damaged pit near a crippled reactor at the Fukushima Daiichi nuclear power plant, safety officials said Saturday, the latest setback in the increasingly messy bid to regain control of the reactors.
Although higher levels of radiation have been detected in the ocean waters near the plant, the breach discovered Saturday is the first identified direct leak of such high levels of radiation into the sea.
The leak, found at a maintenance pit near the plant’s No. 2 reactor, is a fresh reminder of the dangerous consequences of the strategy to cool the reactors and spent fuel storage pools by pumping hundreds of tons of water a day into them. While much of that water has evaporated, a significant portion has also turned into runoff.
Three workers at the plant, operated by Tokyo Electric Power Company, have been injured by stepping into pools of contaminated water inside one reactor complex, while above-normal levels of radiation have been detected in seawater near the plant.
Workers are racing to drain the pools but have struggled to figure out how to store the irradiated water. On Saturday, contaminated water was transferred into a barge to free up space in other tanks on land. A second barge also arrived.
But with so much contaminated water injuring workers and escaping into the ocean, some experts in the nuclear industry are now starting to question the so-called “feed-and-bleed” strategy of pumping water into the reactors. “The more water they add, the more problems they are generating,” said Satoshi Sato, a consultant to the nuclear energy industry and a former engineer with General Electric. “It’s just a matter of time before the leaks into the ocean grow.”
Tokyo Electric said that it had not identified the original source of the contaminated water. Some experts say it could be excess runoff from the spent fuel pools or a broken pipe or valve connected to the reactor.
The leaks could also be evidence that the reactor pressure vessel, which holds the nuclear fuel rods, is unable to hold all of the water being poured into it, Mr. Sato said.
Tetsuo Iguchi, a professor in the department of quantum engineering at Nagoya University, said that the leak discovered Saturday raised fears that the contaminated water may be seeping out through many more undiscovered sources. He said unless workers could quickly stop the leaking, Tokyo Electric could be forced to re-evaluate the feed-and-bleed strategy.
“It is crucial to keep cooling the fuel rods, but on the other hand, these leaks are dangerous,” Mr. Iguchi said. “They can’t let the plant keep leaking high amounts of radiation for much longer,” he said.
Plant workers discovered a crack about eight inches wide in the maintenance pit, which lies between the No. 2 reactor and the sea and holds cables used to power seawater pumps, Japan’s nuclear regulator said.
The space directly above the water leaking into the sea had a radiation reading of more than 1,000 millisieverts an hour, said Hidehiko Nishiyama, deputy director-general of the Nuclear and Industrial Safety Agency. Tests of the water within the pit later showed the presence of 1 million becquerels per liter of iodine 131, a hazardous radioactive substance. However, iodine 131 has a relatively short half-life of about eight days.
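To give a sense of what an eight-day half-life means in practice, here is a small illustrative calculation. The 1 million becquerels per liter figure is the reading reported above; the 100-becquerel threshold and the assumption of no further leakage are arbitrary choices for this sketch, not figures from officials.

```python
# Illustrative decay arithmetic for iodine 131 (half-life of roughly 8 days).
# Assumes pure radioactive decay with no fresh contamination.
import math

A0 = 1_000_000.0       # becquerels per liter, the reading reported in the pit water
HALF_LIFE_DAYS = 8.0   # approximate half-life of iodine 131

def activity(days):
    """Remaining activity after `days` of decay."""
    return A0 * 0.5 ** (days / HALF_LIFE_DAYS)

for d in (8, 16, 40, 80):
    print(f"after {d:3d} days: {activity(d):12.1f} Bq/L")

# Days needed to fall below an (illustrative) 100 Bq/L threshold:
target = 100.0
days_needed = HALF_LIFE_DAYS * math.log2(A0 / target)
print(f"about {days_needed:.0f} days to drop below {target} Bq/L, ignoring fresh leakage")
```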
Mr. Nishiyama also said that above-normal levels of radioactive materials were detected about 25 miles south of the Fukushima plant, much further than had previously been reported.
The pit was filled with four to eight inches of contaminated water, said the operator of the plant, Tokyo Electric. Highly radioactive water has also been discovered in the reactor’s turbine building in the past week.
Workers had started to try to fill the crack with concrete, Mr. Nishiyama said late Saturday. 
Saturday’s announcement of a leak came a day after Steven Chu, the United States energy secretary, said that the No. 2 reactor at the Fukushima plant had suffered a 33 percent meltdown. He cautioned that the figures were “more of a calculation.” Mr. Chu also said that roughly 70 percent of the core of the No. 1 reactor had suffered severe damage.
The crisis at the nuclear plant has overshadowed the recovery effort under way in Japan since the 9.0 magnitude quake and tsunami hit the northeastern coast on March 11. The country’s National Police Agency said the official death toll from the disaster had surpassed 11,800, while more than 15,500 were listed as missing.
Earlier Saturday, Prime Minister Naoto Kan made his first visit to the region since last month’s disaster and promised to do everything possible to help. His tour came a day after he asked Japan to start focusing on the long, hard task of rebuilding the tsunami-shattered prefectures.
“We’ll be together with you to the very end,” Mr. Kan said during a stop in Rikuzentakata, a town of about 20,000 people that was destroyed on March 11. “Everybody, try your best.”
Dressed in a blue work jacket, Mr. Kan also met with refugees stranded in an elementary school and then visited a sports complex about 20 miles south of the disabled nuclear plant. The training facility has been turned into a staging area for firefighters, Self-Defense Forces and workers from Tokyo Electric.

A record-making effort

Climate change

Mar 31st 2011, 23:20 by O.M.
ON THURSDAY March 31st Richard Muller of Lawrence Berkeley Laboratory gave evidence to the energy and commerce committee of America’s House of Representatives on the surface temperature record. Without having yet bothered to check, Babbage can say with some certainty that this event will be much discussed in the blogosphere—as, oddly enough, it should be. 
Here’s the short version of the reason why: a new and methodologically interesting study, carried out by people some of whom might have been expected to take a somewhat sceptical view on the issue, seems essentially to have confirmed the results of earlier work on the rate at which the earth’s temperature is rising. This makes suggestions that this rise is an artefact of bad measurement, or indeed a conspiracy of climatologists, even less credible than they were before.
Now here’s the much longer version.
There are two topics which, more than any other, can be guaranteed to set off arguments between those convinced of the reality and importance of humanity’s impact on the climate and those not so convinced. One revolves around the question of how reliable, if at all, statements about average global temperatures before about 1500 AD are. This is the so-called “hockey stick” debate. The amount of computer processing power and data storage capacity devoted to endless online discussions of the hockey stick— the subject featured in a great deal of the brouhaha over the “climategate” e-mails—must, by now, have the carbon footprint of a fair-sized Canadian city, which of course would worry one side of the argument not a whit.
The second touchy topic is the instrumental record of the world’s temperature over the past 100 years or so. This is a more genuinely interesting subject, for two reasons. First: Consider a person who looks at all the non-hockey-stick evidence and arguments for thinking people are changing the climate (we won’t rehearse them now, but here’s a relevant article from The Economist last year). Imagine this person then saying “you know, that radiation balance and basic physics and ocean heat content and all the rest of that stuff looks pretty conclusive—but because I can’t say for sure whether it was warmer in 1388 than it was in 1988 or the other way round I’m going to ignore it all.” This would probably not be a person you would take very seriously.
(It is because of this that the wiser sceptical voices in the hockey-stick debate do not claim that uncertainties over what, if anything, can reliably be said about mediaeval temperatures invalidate the scientific case for a strong and worrying human influence on climate. They say instead that there are a variety of statistical and other flaws in some of the reconstructions of mediaeval temperature, that some of the scientists responsible for some of these reconstructions have not behaved well, and that if that is typical of climate science then climate science as a whole is in a bad way. Thus the hockey stick becomes a sort of meta-, and indeed metastasising, argument.)
If, on the other hand, you imagine a person who has looked at all the other relevant material going on to say “You know, this is all very well—but there doesn’t seem to be any conclusive evidence that the world has actually been getting warmer in a significant or surprising way over the past decades,” you might well think hmm; if that’s the case then he has a point. Evidence that the world really is warming does seem pretty apposite to the whole issue. Being able to trust the records of what thermometers spread out over the world have actually measured, and the procedures by which those records are combined into a series of average global temperatures, matters rather more than the hockey stick.
Another reason is that mediaeval data (from tree rings and the like) at issue in the hockey stick debate are necessarily sparse and patchy, and coming up with really robust answers to all the relevant questions on the basis of them may well prove impossible. The data on which the contemporary surface temperature record is based, though, are rich. There are a great many temperature records in archives around the world. If you can choose records that are demonstrably reliable and combine them in an appropriate way, you should be able to get a pretty solid answer. This thus seems like an argument that could conceivably end in agreement on an important issue.
Indeed, most climate scientists would say that it already has. There are three different combinations of instrumental temperature records that seek to show average surface temperatures back to 1900 or earlier. Two are American, with one produced by the National Oceanic and Atmospheric Administration (NOAA) and one by NASA; the other is British, with data from the Met Office and the University of East Anglia’s Climatic Research Unit (which was the epicentre of climategate). They use many of the same raw data, but the ways in which they adjust them to remove presumed artefacts and then combine them differ. Yet they come up with very similar answers, and when they publish their figures with error estimates they come within each other’s margins of error. The fact that three different groups agree in this way would normally seem to justify relying on the result.
But there are many ways in which climate science is not normal, one of which is that it matters a great deal with respect to some very expensive policy decisions. Various criticisms of the methodology and probity of the temperature records have been made, though much more often in the blogosphere than in the scientific literature. Erring on the side of extra caution is not a bad idea, and various efforts are under way to develop, corroborate and better underpin the work on temperature records that has been done to date. One such effort is the Berkeley Earth Surface Temperature programme, which Dr Muller heads.

Fearless physicists
Dr Muller is an astrophysicist, not a climate scientist, and was indeed seen by some as being a bit of a sceptic, in the unfortunate negative usage of the word. He is outspoken in his criticism of some of the behaviour revealed in the climategate e-mails, and talks admiringly of some of the amateur or non-credentialed scientists who have mounted critiques of published climate science.
He also has a sort of intellectual fearlessness most often seen in physicists; when applied to other fields of endeavour this can look uncannily like a form of arrogance, perhaps because that is often what it is. The initials of the Berkeley Earth Surface Temperature project could be read in this light, and indeed they were so interpreted by some climate scientists, who got rather peeved at the idea that interlopers should presume to claim a priori that they were the BEST. Any arrogance they may be prone to, though, doesn’t invalidate the fearless physicists’ insights. Dr Muller’s beloved mentor, Luis Alvarez, was quite right when he and his son argued that the death of the dinosaurs had less to do with the environmental or evolutionary challenges palaeontologists concentrated on and more to do with the damn great meteorite or comet impact for which Alvarez père et fils had just found dramatic and unexpected evidence. On the other hand Dr Muller’s subsequent variation on the Alvarezes’ now broadly accepted contribution, which prompted a search for a distant planet that might be directing killer comets at the earth on a regular basis, has as yet come to naught.
The Berkeley approach seems based on the idea that coming out of physics, not climate science, was going to be a strength not a weakness. Rather than look at carefully (and similarly) selected subsets of the data it would look at everything available, just as astrophysicists frequently seek to survey the whole sky. Rather than using the judgement of climate scientists to make sense of the data records and what needed to be done to them, it would use well designed computer algorithms. Put together under the aegis of Novim, a non-profit group that runs environmental studies, the team gathered up a bit over half a million dollars—including $100,000 from a fund set up by Bill Gates and $150,000 from the Koch foundation, whose animosity towards action on climate change made the Berkeley project look yet more suspicious to some climate-change activists—and got to work. There was also support from the Department of Energy’s Lawrence Berkeley Lab, where Dr Muller and some of his team work. It is probably fair to assume that Steve Koonin, an under-secretary at the energy department with whom Dr Muller has served as one of the “Jasons”, a group of particularly intellectually fearless scientists which provides blue-sky and sometimes far-out advice to the defence department, and who has also produced a report for Novim, had an unofficial eye on what was going on.
Dr Muller’s testimony was not exactly the unveiling of his team’s first results—you can find him saying much the same in a seminar on the web—but it was a particularly high-level early outing. It was also a strikingly robust defence of the record as others have interpreted it. Calling the three extant groups “excellent”, Dr Muller described preliminary work by the Berkeley team on the overall magnitude and course of recent warming that backed them up. Instead of picking a relatively small number of weather stations to look at, this work simply took 2% of all the records the Berkeley group has access to at random. The results look very like what the other three teams have seen. The Berkeley team says that it has run such 2% experiments a number of times now, and the results are robust. The earth has warmed by about 0.7°C since 1957, just as the other teams claimed. Adjustments made to the data on a site-by-site basis which have had some suspicious sceptics hopping mad seem to have made no appreciable difference.
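For readers curious how such a 2% experiment works in outline, here is a minimal, schematic sketch in Python. It is not the Berkeley team's code: the `load_station_anomalies` loader, the station data format and the 1957–2010 window are all assumptions made for illustration.

```python
# Schematic sketch of a "2% experiment": draw a small random subset of station
# records, average them year by year, fit a linear trend, and repeat the draw
# to see whether the estimate is stable. Not the Berkeley team's actual method.
import random
import numpy as np

def trend_from_sample(stations, fraction=0.02, rng=random):
    """Estimate a warming trend (degrees C per decade) from a random sample of stations.

    `stations` maps a station id to a (years, anomalies) pair of equal-length arrays.
    """
    ids = rng.sample(sorted(stations), max(1, int(fraction * len(stations))))
    grid_years = np.arange(1957, 2011)
    # Interpolate each sampled record onto a common set of years, then average.
    stacked = np.vstack([np.interp(grid_years, *stations[i]) for i in ids])
    mean_anomaly = stacked.mean(axis=0)
    slope_per_year = np.polyfit(grid_years, mean_anomaly, 1)[0]
    return slope_per_year * 10.0

# stations = load_station_anomalies()   # hypothetical loader for archived records
# draws = [trend_from_sample(stations) for _ in range(100)]
# print(np.mean(draws), np.std(draws))  # a small spread suggests a robust estimate
```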

The Watts and wherefores
Dr Muller also, more controversially, reported on results that pertain to a specific point made by climate sceptics: that the temperature record is contaminated because many of the stations used to compile it are inappropriately located. This idea is particularly associated with Anthony Watts, a former television weatherman who runs an extremely popular website catering largely to a climate-sceptic crowd. Mr Watts has led an impressive crowdsourcing movement devoted to checking out the meteorological stations that generate climate data in America. This has found that a really surprising number of the instruments concerned are not sited in the way that they should be, being inappropriately close to buildings, tarmac and other things that could cause problems.
A compendium of Mr Watts’s concerns was published early last year by the Science and Public Policy Institute, which specialises in airing doubts about climate science and policy, under the title “Surface Temperature Records: Policy Driven Deception?” Dr Muller’s answer to that question in front of Congress was pretty clearly no. The Berkeley team compared the data from the American sites Mr Watts thought were worst situated and the sites he thought best. It found no statistically significant difference in the trends measured in the two different categories, though the warming trend in the better sites is slightly stronger.
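As a rough illustration of the kind of comparison described here, and not the Berkeley or NOAA methodology, one could estimate a trend for each station and test whether the well-sited and poorly-sited groups differ; `good_sites` and `poor_sites` are hypothetical inputs in the same (years, anomalies) format as in the sketch above.

```python
# Illustrative comparison of warming trends at well-sited versus poorly-sited
# stations. A large p-value would mean the difference between the two groups
# is not statistically significant.
import numpy as np
from scipy import stats

def station_trend(years, anomalies):
    """Least-squares warming trend for one station, in degrees C per decade."""
    return np.polyfit(years, anomalies, 1)[0] * 10.0

def compare_site_quality(good_sites, poor_sites):
    good = [station_trend(y, a) for y, a in good_sites.values()]
    poor = [station_trend(y, a) for y, a in poor_sites.values()]
    t_stat, p_value = stats.ttest_ind(good, poor, equal_var=False)  # Welch's t-test
    return np.mean(good), np.mean(poor), p_value

# mean_good, mean_poor, p = compare_site_quality(good_sites, poor_sites)
# print(f"good: {mean_good:.3f}, poor: {mean_poor:.3f} deg C/decade, p = {p:.2f}")
```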
This analysis echoes one carried out last year by scientists at NOAA, which when looking at a subset of Mr Watts’s data found much the same thing. The Berkeley team’s result, though, is perhaps more striking, in that Mr Watts had made all his data available to Dr Muller and his colleagues, a step he seems now rather to regret.
Impressed by the Berkeley set-up, Mr Watts wrote in a post published March 6th:
I’m prepared to accept whatever result they produce, even if it proves my premise wrong. I’m taking this bold step because the method has promise. So let’s not pay attention to the little yippers who want to tear it down before they even see the results. I haven’t seen the global result, nobody has, not even the home team, but the method isn’t the madness that we’ve seen from NOAA, NCDC, GISS, and CRU, and, there aren’t any monetary strings attached to the result that I can tell. If the project was terminated tomorrow, nobody loses jobs, no large government programs get shut down, and no dependent programs crash either. That lack of strings attached to funding, plus the broad mix of people involved especially those who have previous experience in handling large data sets gives me greater confidence in the result being closer to a bona fide ground truth than anything we’ve seen yet.
Responding to Dr Muller’s testimony, Mr Watts e-mailed that when he shared his data he was “expecting a study done by peer review, months out, not a job rushed in three weeks for political theater in the House of Representatives.” Though some of his results and conclusions were published in “Policy Driven Deception?” he has since worked on a peer-reviewed publication, sensitive to the accusation often levelled at his blog that such publication is the way to do proper science. That this is not what the Berkeley team did, choosing instead to present preliminary results to Congress, has upset him, and he has asked to have a statement of his own read into the congressional record. The Berkeley team, for its part, argues that while it would rather publish in a peer-reviewed journal before discussing its results, and has “begun the submission process to do this”, an invitation to address the committee deserved what the team saw as the best available answers.
Mr Watts’s paper, on which he has a number of co-authors including climate scientist Roger Pielke Sr, is now said to be close to the end of its road towards publication. It claims to find that there is indeed a difference between the good and bad sites: though they may provide indistinguishable results for trends in average temperature, they differ when you look at trends in minimum and maximum temperatures, which has implications for the diurnal temperature range. It is not clear what the climatological significance of this might be, and it is always worth bearing in mind that these data only apply to stations in America, which is a fairly small part of the planet. 
Overall, the takeaway from Dr Muller’s presentation of his team’s data is that, in the words of one climate scientist, a “Koch-brothers-funded study confirms the previous temperature reconstructions.” Dr Muller says the team will now be looking into a number of other effects, including the bias that the “urban heat island” effect—cities are warmer than surrounding countryside—might have. The question of good versus bad location is linked to this (a good site can become bad as a city sprawls over it) and so is the issue of which records you choose to use (long records, preferred by earlier reconstructions, may be more prone to changing urbanisation around them). But there is more to the problem, and Dr Muller hopes to look into it further, as well as into issues that might arise from the times of day at which observations are made, stations moving from one place to another, and changes in the instrumentation used. 
The Berkeley work, especially after it is published and disseminated in full, may increase the acceptance of the reality of global warming among people who have so far managed to maintain a comforting and sometimes self-serving feeling that maybe the people who deny that anything is going on are actually right. It doesn’t in itself show how much of the warming is due to human activity. Dr Muller, in a somewhat cavalier way, chose to suggest that about half of what had been seen since 1900 was. Other scientists would put the proportion higher. 
Nor does it say how much warming is yet to come. Carbon dioxide and other widespread gases can warm the earth, but dust, smog, sulphate particles and other things can cool it. There is no very reliable record of how these cooling factors changed over the 20th century, but they must have played a role. So you can’t simply look at temperature changes over the 20th century, and scale them up according to the amount of carbon dioxide you expect, to find out what will happen next. Though Dr Muller, in his testimony, seems to differ on this—as might perhaps be expected for someone proud of a new contribution to the field, and hoping to make more—most climate scientists do not believe that further improvements in the accuracy of the average temperature record over the twentieth century will add greatly to their ability to predict the magnitude or timing of the changes to come.
But broader agreement that the temperature has indeed risen quite steeply over the past century is nevertheless a thing worth having. If the Berkeley team can help provide it, that is all to the good. 

Can We Do Without the Mideast?

By CLIFFORD KRAUSS

IMAGINE a foreign policy version of the movie “Groundhog Day,” with Bill Murray playing the president of the United States. The alarm clock rings. Political mayhem is again shaking the Middle East, crude oil and gasoline prices are climbing, and an economic recovery is under threat.
President Nixon woke up to the same alarm during the 1973-74 Arab oil embargo and declared Project Independence to end the country’s dependency on imported oil. President Carter, during the Iranian revolution, called an effort to reduce dependency on foreign oil “the moral equivalent of war.” President George W. Bush called oil an addiction.
On Wednesday, in a nationally televised address, President Obama said, “We cannot keep going from shock when gas prices go up to trance when gas prices go back down. We can’t rush to propose action when prices are high, then push the snooze button when they go down again.”
So, with Libyan and other North African and Middle Eastern oil fields jeopardized by political upheaval and Japan’s nuclear power disaster turning the energy world on its head, the alarm is ringing again. As gasoline prices rise and even the stability of Saudi Arabia is suddenly in question, energy independence is taking on new urgency.
The path to that independence — or at least an end to dependence on the Mideast — could well be dirty, expensive and politically explosive.
It would require transformational technology for electric cars and biofuels. But experts say that, at least in theory, it is a goal that is now in sight, especially with Canada and other friendly hemispheric partners now able to replace much of the oil imported from less friendly or stable producers.
“For the first time since the first oil shock, I see us decreasing our dependency on imported oil,” Steven Chu, the energy secretary, said in an interview. Noting that the country now imports half its oil, he added: “Can we be at half that in 20 years? Yeah, there is a real possibility of that.”
But the change will not come easily.
“Reducing America’s liquid fuel imports by midcentury to under 20 percent is possible, but it depends on what the political system is willing to do, what the public is willing to accept in terms of higher prices and how the technology breaks for you,” said John M. Deutch, a former director of the Central Intelligence Agency and a former under secretary of energy. “The first thing you do is pop the price of gasoline up quite a bit, and that reduces consumption.”
Some analysts said that even faster change was possible, but that could mean big government subsidies and taxes to build more high-speed rail lines, to encourage truck and bus fleets to switch to natural gas and to push the development of wind, solar and geothermal energy. The storage capacity of car batteries would have to improve, and plentiful, low-cost biofuels from plant waste or algae would have to come to market. In the short term, more domestic drilling, including offshore and perhaps in the Alaskan Arctic, would be needed, whatever environmental damage it might bring. Higher gasoline prices are likely, from market forces or taxes or both.
In other words, to reduce oil imports by nearly half to 1982 levels — they amounted to 28 percent of total consumption then — might enrage conservatives, liberals, environmentalists and climate-change skeptics alike.
But some of the changes have already been set in motion.
AMY Myers Jaffe, associate director of the Rice University Energy Program, and other Rice researchers expect a drop of 4.3 million barrels a day in the use of oil by 2025 simply through improved fuel efficiency of vehicles mandated by Congress. That equals more than a third of current imports.
An additional 2.5 million barrels a day could be eliminated by 2050 with policies to make electric cars 20 percent of the American car fleet, the Rice researchers say. Drilling for more oil domestically would reduce dependency even faster.
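A quick back-of-envelope check of these figures, using only numbers already cited in this article (roughly 19 million barrels a day of consumption, half of it imported), might look like the sketch below; it is an illustration, not the Rice researchers' model.

```python
# Back-of-envelope check of the Rice figures, using the article's own numbers.
daily_use = 19.0          # million barrels per day of oil used by cars, trucks and aircraft
imports = daily_use / 2   # "the country now imports half its oil"

efficiency_savings = 4.3  # million b/d saved by 2025 via mandated fuel efficiency
electric_savings = 2.5    # million b/d more by 2050 if 20% of the fleet is electric

print(f"imports are roughly {imports:.1f} million b/d")
print(f"efficiency alone covers {efficiency_savings / imports:.0%} of imports")   # ~45%, i.e. more than a third
print(f"with electrification: {(efficiency_savings + electric_savings) / imports:.0%}")
```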
“We could be reaching a tipping point,” Ms. Jaffe said. “Our entire oil profile could change.”
Oil imports now subtract more than $1 billion a day from the United States balance of trade. Oil industry executives say that every additional million barrels of domestic oil produced generates a million new jobs and $30 billion in economic activity. And domestic production would reduce the flow of petrodollars that now help finance regimes unfriendly to the United States in places like Venezuela.
Because the United States has bountiful supplies of coal, natural gas and nuclear energy, as well as growing amounts of energy from renewable sources, the nation is already independent when it comes to the electrical power that heats and cools its homes and buildings and runs its factories.
With the glut in natural gas that has developed in the last five years, there is the potential to have enough to power vehicles directly or through eventual electrification of the transportation fleet. Electrical capacity could also be developed with a federal effort to extend transmission lines to link wind and solar power sources with large population centers. In the aftermath of the Japanese disaster, nuclear power will need to be put on safer footing and expanded. Future improvements in the energy efficiency of buildings could reduce the fuel they consume.
The problem the nation faces is easy to define: it’s the 19 million barrels of oil a day used by its cars, trucks and aircraft. Though the United States remains one of the largest oil producers in the world, it has been an importer since the late 1940s, with imports rising and domestic production declining fairly steadily year after year over the last quarter-century, until recently.
But a shift in the last couple of years has received little attention. Oil imports have edged lower and domestic output has increased, enough so that the United States is no longer importing 60 percent of its oil, as it was the last time oil prices were spiking four years ago.
“We’re 80 percent energy-independent to begin with, so we’re pretty far along,” said Daniel Yergin, the oil historian. “Our oil imports are down to 50 percent, and there has been a rebalancing of where we import oil from.”
Since 2007, the United States has decreased its oil imports from nations of the Organization of the Petroleum Exporting Countries by more than a million barrels a day (including 400,000 barrels less from Saudi Arabia and 300,000 less from Venezuela), while decreasing its imports from non-OPEC countries by half that much, according to the Energy Department.
During the 1970s, synthetic fuel from oil sands was little more than an experiment. Now more than 20 percent of United States oil imports come from Canada, and half of that from oil sands. That could expand considerably if the Obama administration approves the extension of the Keystone pipeline to Gulf of Mexico refineries, as expected.
Synthetic oil from Canadian oil sands is dirtier to produce than most conventional crude, but it will be produced; if the United States does not import that oil, China will. Another 10 percent of American imports comes from Mexico, and increasing amounts from Colombia and Brazil, two dependable allies. The shifting sands of oil could be seen when President Obama did not cancel his recent trip to Brazil even as allied air and naval forces went on the attack in Libya.
The potentially unstable or otherwise unreliable countries of the Persian Gulf and North Africa, Venezuela and perhaps Nigeria, supply a combined total of about five million barrels a day — about a quarter of United States consumption. That is the amount that ought to be the target.
There are several ways to replace those barrels, some of which have already been tried, with some success, in the United States and other countries. A decade of progress stretching from the early 1970s through the early 1980s is now mostly forgotten, but high oil prices drove two Republican and one Democratic administration to lower highway speeds to 55 miles an hour, divert federal funds from highways to mass transit, restrict the use of oil by utilities and oblige automakers to improve their efficiency standards.
Along with the conservation efforts, the country completed the Trans-Alaska pipeline system to bring Alaskan oil to the lower 48 states and created a Strategic Petroleum Reserve. Research and development of shale oil, geothermal, nuclear and solar energy were all increased with federal support.
The efforts of the Nixon, Ford and Carter administrations slashed oil imports in half from 1977 to 1982.
Because the country has become more dependent on oil imports today, it is easy to dismiss those efforts of a generation ago as a failure, but that would be the wrong lesson to draw.
Some advances were permanent: oil had been responsible for 15 percent of the nation’s electrical generation in 1975, consuming 1.4 million barrels a day, but now is only a trivial power source. The 1975 energy act obliged auto companies to double efficiency to 27.5 miles a gallon by 1985, saving hundreds of millions of barrels of imports over the years.
Subsequent administrations discarded many of the effective policies when oil prices collapsed and remained low through much of the 1980s and 1990s, while private investment in developing oil and unconventional fuel sources also withered. Federal budgets for research on conservation and alternative fuels were slashed. Automobile efficiency standards remained unchanged. Oil imports rocketed from 27 percent of United States oil consumption in 1985 to 60 percent two decades later.
Yet the same tools that worked 30 years ago — from producing more oil in Alaska to increasing biofuel production to creating more fuel-efficient cars — exist today.
“It’s become chic to say we can do nothing to solve this problem, but past success can give us hope for the future,” said Jay Hakes, director of the Carter Library and former director of the federal government’s Energy Information Administration. “We can get to energy independence, and we have begun moving in that direction already in the last three or four years.”
The high oil prices in recent years have helped the effort, and there is some evidence that gasoline usage in the United States may have peaked in 2007. With cars and trucks becoming more efficient and ethanol use expanding, American drivers will probably use less oil in the future despite predicted increases in population.
The 2007 Energy Independence and Security Act, the most serious energy legislation in a generation, went a long way toward reaching those goals. It raised auto and light truck efficiency requirements to 35 miles a gallon by 2020, from the current 27.5. It obliged producers of transportation fuels to gradually increase blending of biofuels into gasoline to replace oil, from nine billion gallons a year in 2008 to 36 billion gallons in 2022, a goal that will require the production of advanced biofuels in commercial quantities. Pilot-scale plants are working on producing various kinds of advanced cellulosic ethanol, butanol and other biofuels made out of plant and other wastes.
The results could be revolutionary, as the American vehicle fleet is replaced over the next 15 years. Several car companies are working on improvements to the internal combustion engine that could yield 50 miles to the gallon, or more, in a few years.
Genetically modified crops promise to improve yields for ethanol. Every major auto manufacturer has a hybrid or plug-in electric car planned for the marketplace, and utilities and other companies are working on building a charge-up infrastructure in cities across the country. Battery prices are coming down significantly.
A recent study by the consulting firm Accenture estimated that it would be possible to replace 30 percent of gasoline demand by 2030 by adopting a fuel-efficiency standard of 40 miles a gallon over the next 20 years and gradually doubling the current blending of biofuels to 30 billion gallons by 2030.
Research breakthroughs are occurring across a wide range of alternative fuels and vehicles, but barriers to reaching commercial scale remain. Competitiveness with oil-based technologies is not guaranteed unless there is some government intervention, like subsidies or taxes on carbon.
“We are on the trajectory to reduce our imports substantially, but it’s not going to be an easy journey,” said Melissa Stark, partner and senior energy analyst at Accenture, “because the new technologies will have to compete with our very efficient oil industry. There has to be a pathway to competitiveness.”
In the meantime, natural gas has the greatest promise to replace diesel fuel in trucks. Clean Energy Fuels, a natural gas distributor, estimates that the country’s eight million trucks use up to 40 billion gallons of diesel a year. The company figures it would take five trillion cubic feet of gas a year to replace that amount of diesel, which alone would displace 2.3 million barrels of oil a day.
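The diesel arithmetic can be checked roughly as shown below. The 40-billion-gallon figure is the company's; the energy contents used for the gas conversion (about 139,000 Btu per gallon of diesel and about 1,030 Btu per cubic foot of gas) are approximate textbook values, not numbers from the article, and a straight conversion lands slightly above the 2.3 million barrels the company cites.

```python
# Rough unit-conversion check of the Clean Energy Fuels figures quoted above.
diesel_gal_per_year = 40e9      # gallons of diesel burned by U.S. trucks per year (company estimate)
GALLONS_PER_BARREL = 42

barrels_per_day = diesel_gal_per_year / 365 / GALLONS_PER_BARREL
print(f"diesel displaced is roughly {barrels_per_day / 1e6:.1f} million barrels/day")   # about 2.6

btu_per_gal_diesel = 139_000    # approximate energy content of diesel
btu_per_cf_gas = 1_030          # approximate energy content of natural gas
gas_cf_per_year = diesel_gal_per_year * btu_per_gal_diesel / btu_per_cf_gas
print(f"gas needed is roughly {gas_cf_per_year / 1e12:.1f} trillion cubic feet/year")   # about 5.4
```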
With natural gas reserves of 284 trillion cubic feet (and with estimates rising), the country would have little trouble producing the gas, presuming that the oil and gas industry can answer growing environmental concerns surrounding its hydraulic fracturing practices.
The government would also need to provide billions of dollars of incentives for truck companies to convert their trucks and for filling stations to install the fueling equipment.
A conversion may be beginning. United Parcel Service recently announced that it would add 48 trucks fueled on liquefied natural gas to its fleet and would add more once the fueling infrastructure was in place. It took $5.5 million in government grants for the project, and more research is needed to develop pump technology and onboard storage tanks to prevent methane escapes.
Then, of course, there is the “drill baby drill” approach — not the best, many environmentalists would argue, to protect the environment and reduce climate change but one that is already working to decrease imports.
In 2009, the United States produced more oil than the year before for the first time since 1985, because of increased production from the deepwater Gulf of Mexico and from drilling in a giant shale field in North Dakota.
Domestic production again rose in 2010, by 3 percent, while imports have fallen slowly but steadily since 2006. Edward Westlake, a Credit Suisse managing director for energy research, calculates that the United States will be producing an additional 2.4 million barrels of oil and other liquid fuels by 2016, on top of the 8.6 million barrels a day produced in 2010, even with a natural decline in existing domestic oil fields.
At the same time he forecasts a small increase in demand for transportation fuels. “Bottom line, we’re becoming more independent but more work needs to be done,” he said. The blowout on a BP well in the Gulf of Mexico last year that left 11 workers dead and spilled millions of barrels of crude will undoubtedly slow development offshore for at least a few years, and was a setback to energy independence.
Before the spill, deepwater production in the Gulf had climbed in only a decade from a trickle to 1.2 million barrels a day, and it was expected to climb an additional 400,000 barrels a day by 2012. But the accident will surely slow drilling for several more years at least.
But offshore drilling will not be off the table forever, and oil executives believe many years of discoveries will be made once approvals start up again. As for the eastern Gulf and Atlantic coast currently closed to exploratory drilling, the federal government estimates they contain 3.8 billion barrels of oil, roughly comparable to Norway’s reserves.
Many geologists say those estimates are based on incomplete testing and seismic data that is decades old. They note that the geology of the eastern gulf is similar to coastal central and southern Mexico, where some of the hemisphere’s most productive oil fields lie. Meanwhile, parts of the Atlantic coast were connected approximately 200 million years ago to parts of the coast of West Africa like Guinea and Mauritania, where large oil fields have been found.
Oil production in Alaska, which has been in decline for decades, could be doubled in the meantime from its current yield of 680,000 barrels a day by opening up the Arctic National Wildlife Refuge and Arctic waters that are currently closed to drilling. The refuge has 10.4 billion barrels of recoverable oil, according to the federal government, and by some estimates, Alaska’s Arctic waters, now also off limits, hold 50 percent more than that.
There are compelling environmental reasons not to drill more in the Alaskan Arctic, even though low production from the aging North Slope fields means the Trans-Alaska pipeline is running at only a third of its capacity, increasing the likelihood that water will freeze in the pipes and cause corrosion and leaks on the tundra. But approving more drilling in Alaska will be a tough political slog.
More relevant is the little-known new drilling for oil in shale fields made possible by the same hydraulic fracturing and horizontal drilling techniques that have increased natural gas production.
Production from the Bakken field in North Dakota alone has risen to more than 350,000 barrels a day this year, and experts expect that will reach 800,000 barrels a day in five to seven years. Shale fields in Texas, Colorado, Wyoming and California, barely explored, have vast potential.
Pete Stark, vice president for industry relations at IHS CERA, estimates that as much as 1.5 million barrels a day may be produced by 2020 from the shale fields, which have in excess of 20 billion barrels of recoverable oil — decades of productive capacity.
“That’s a million barrels of oil a day that nobody has had in their forecasts,” Mr. Stark said. “This could be the leading edge of a game changer that will provide a cushion for energy security at a time when traditional OPEC supplies are at risk.”
Some day, maybe Bill Murray won’t have to wake up again, grumpy and dissatisfied. But right now, that alarm clock is ringing.

April 1, 2011

The Indian exception

Many Indians eat poorly. Would a “right to food” help?
“LOOK at this muck,” says 35-year-old Pamlesh Yadav, holding up a tin-plate of bilious-yellow grains, a mixture of wheat, rice and mung beans. “It literally sticks in the throat. The children won’t eat it, so we take it home and feed it to the cows.”
Mrs Yadav has brought her children to a state-run nursery in Bhindusi village in rural Rajasthan. The free midday meal is being dished out. Neither she nor anyone else in Bhindusi looks plump enough to turn down such an offer. Stray dogs scamper through the nursery and toddlers are being weighed in the corner while food is passed around. Most are underweight. Mrs Yadav herself is anaemic, like almost all local women; she survives on potato curry and wheat chapatis. Even so, she rejects a free lunch. “The only reason the women come here is because of the creche,” admits Shafia Khan, who is in charge of state nurseries in the district. “The children don’t like the food. And the ones you see here are the lucky ones. Out in the fields, it is terrible. Everyone is listless; they all suffer from vitamin and iron deficiencies.”
The nursery is part of India’s Integrated Child Development Services Scheme (ICDS), the largest child-nutrition programme in the world. Its woes in Rajasthan are part of a larger problem. India is an outlier. Its rate of malnutrition—nearly half the children under three weigh less than they should—is much higher than it should be given India’s level of income. And that rate has fallen more slowly than it ought to have done, given Indian growth. Lawrence Haddad, the director of the Institute of Development Studies at Sussex University, reckons that every 3-4% increase in a developing country’s income per head should translate into a 1% fall in rates of underweight children. In India the rate has barely shifted in two decades of growth. Per person, India eats less, and worse, than it used to. Mr Haddad calls the country the world’s Jekyll and Hyde: economic powerhouse, nutritional weakling. Over a third of the world’s malnourished children live there.
When India was poor, its failure to feed itself properly did not seem odd. Poverty was explanation enough. But after one of the most impressive growth spurts in history, the country’s inability to lift the curse of malnutrition has emerged as its greatest failure—and biggest puzzle. Nothing fully accounts for it. True, farming has not shared in the same dazzling success as the rest of the economy, with output per person lately rising by only a point or two a year. But some African countries have seen farm output per head actually fall—and they have still cut malnutrition more than India.
It is also true that India’s food bureaucracy is a byword for inefficiency and corruption. People steal from the cheap-food shops of the Public Distribution System (PDS) on an industrial scale. Newspapers call a case of theft now under investigation in Uttar Pradesh “the mother of all scams”. At one point, the country’s top investigative agency said it had given up even trying to cope with the 50,000 separate charges. But again, other countries have corrupt bureaucracies, too—or none, which may be as bad.
So the most convincing explanations for India’s nutritional failures probably lie elsewhere. Women are the most important influences upon their children’s health—and the status of women in India is notoriously low. Brides are deemed to join their husband’s family on marriage and are often treated as unpaid skivvies. “The mothers aren’t allowed to look after themselves,” says Mrs Khan. “Their job is simply to have healthy babies.” But if mothers are unhealthy, their children frequently are, too.
India is also riven by caste and tribal divisions. It is no coincidence that states with the most dalits (former untouchables) or tribes (such as Bihar and Orissa) have higher malnutrition rates than those, like Andhra Pradesh and Kerala, with fewer of these excluded groups. So-called scheduled castes and tribes are more likely than other Indians to suffer the ills of poor diet.
But that cannot be the whole story. Astonishingly, a third of the wealthiest 20% of Indian children are malnourished, too, and they are neither poor nor excluded. Bad practice plays some part—notably a reluctance to breastfeed babies. There may also be an element of choice. Long ago, a study in Maharashtra showed that people spend only two-thirds of their extra income on food—and this is true whether they are middle-income or dirt-poor. That may seem perverse. But a mobile phone may be more useful to the poor than better food, since the phone may generate income during the next harvest failure, and good food will not.
Wanted: Bolsa India
These explanations matter because they raise questions about the Indian government’s current attempt to offer a universal “right to food”. Over the past 20 years, the supreme court has said that Indians have various social rights (to work, education and so on) and can sue the government if they are not honoured. The free school-meal programme was an attempt to implement a right to food. Now the government wants to go further. It is talking about giving cheap food to about 90% of country-dwellers and 50% of city folk—three-quarters of all Indians.
Leave aside the budgetary implications, which are awe-inspiring. Such a programme would hugely expand the terminally dysfunctional PDS. It would do little or nothing for neglected castes and tribes. It would not raise the status of women, or encourage breastfeeding and early nutrition. (As Mrs Khan says, “the crucial time is between the ages of nought and three, but we’re not really reaching them.”) Giving cash, rather than food itself, would be better. Better still, India should look to international experience and introduce a conditional cash-transfer scheme, such as Brazil’s Bolsa Família, which pays the mother if her children attend school. India hankers after “universal” benefits that would leave millions malnourished. It should instead learn from schemes that target those who need help—and which actually work.