News Release

AI technique boosts climate change defenses

Peer-Reviewed Publication

Princeton University, Engineering School

Image: A house, being relocated to higher ground, is transported by barge through Little Egg Inlet along New Jersey’s southeastern coast. (Credit: Matthew Drews, Rutgers University)

Researchers from Princeton and Rutgers University have used reinforcement learning, a method frequently deployed to train artificial intelligence, to show how flexible responses can substantially increase the cost-effectiveness of steps to defend cities like New York against climate change.

The research is part of an attempt to grapple with the effort to make expensive, long-term investments to mitigate the impacts of climate change. The substantial uncertainty related to long-term climate change makes it difficult for political leaders to make investments now that are designed to protect citizens for decades or longer. The difficulty is enhanced by the vast number of variables that go into any such decision and by the fact that the variables are likely to shift in unforeseen ways.

In a March 18 article in the Proceedings of the National Academy of Sciences, the researchers looked at flooding, which has caused increasing damage along the coastal United States and around the world. Governments are building coastal defenses against flooding, but they cannot rely on past conditions to guide defenses that will be needed in the future.

“Defenses are being built to protect coastal regions for the next few decades or longer,” said co-author Ning Lin, a professor of civil and environmental engineering at Princeton. “Climate projections are largely uncertain over long time horizons.”

Lin said that to deal with this uncertainty, planners must be flexible and ready to adapt their plans to future observations of climate conditions. Although this is extremely challenging because of the complexity of climate science, Lin said that harnessing advances in data science can provide an effective strategy.

Robert Kopp, a co-author of the study and a distinguished professor of Earth and Planetary Sciences at Rutgers, said uncertainty about the impact of melting ice sheets on sea levels has led to “controversy about how planners should consider the possibility of rapid ice-sheet loss.”

Kopp said that flexible approaches can help communities prepare for worst-case scenarios without paying too much for protection. “Planning for high-end sea-level rise costs a lot, and there’s a good chance it won’t be necessary, but failing to plan for it can be devastating,” he said.

In the PNAS article, the team describes how they simulated efforts to defend Manhattan against sea level rise through the end of this century. The goal was to determine whether any decision-making process that systematically incorporates observations and updating would prove superior to others over such a long period of time. To do this, the researchers simulated decisions by city planners in 10-year intervals up to the year 2100. The researchers compared their decision-making process with existing methods. For example, using the static method of building a seawall for a historic 100-year flood plus a sea-level-rise projection — as currently applied by New York and other coastal cities — is one method; designing a dynamic seawall that will be increased in height over time according to projected future climate change is another.

The researchers graded each method by its efficiency — the cost of defenses plus the estimated damage caused by flooding. For example, spending $10 million on a seawall that allowed $50 million in property damage ($60 million cost) would be less efficient than spending $30 million on a seawall with $15 million in property damage ($45 million cost). 
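The arithmetic behind this efficiency comparison is simple enough to sketch directly, using the hypothetical dollar figures from the example above:

```python
# Efficiency metric from the example: total cost = defense cost + expected flood damage.
# The dollar figures (in millions) are the illustrative ones from the text, not study results.
def total_cost(defense_cost, expected_damage):
    """Total cost of a defense option, in millions of dollars."""
    return defense_cost + expected_damage

option_a = total_cost(10, 50)  # $10M seawall allowing $50M damage -> $60M total
option_b = total_cost(30, 15)  # $30M seawall allowing $15M damage -> $45M total
print(option_a, option_b)      # 60 45: the more expensive wall is the more efficient choice
```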

The researchers found that a dynamic seawall designed with reinforcement learning, which systematically incorporates observations of sea level rise over time, was more efficient than the other methods detailed in the paper. Compared with other systems, reinforcement learning lowered costs by 6-36% in a scenario modeling climate change under moderate carbon emissions. For a high emissions scenario, the decrease was 9-77%.

Reinforcement learning is a type of machine learning in which a program makes decisions and receives positive reinforcement based on results. Designers train the program by running it through vast amounts of simulated decisions, and it learns by trial and error rather than through explicit instructions from programmers. This is essentially the way many AI systems operate. It is particularly effective for situations that are extremely complex and subject to rapid changes over time. Computer scientists have used reinforcement learning to train AI to perform tasks such as playing chess, driving cars, and controlling robots and drones. The method has also been used for large systems used to store power or control water supplies.
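As a rough illustration of the trial-and-error learning described above (this is not the authors' model), a tabular Q-learning loop could choose how much to raise a seawall in each 10-year interval. Every state, action, and cost figure below is invented for illustration:

```python
import random

# Toy Q-learning sketch: states are (decade, observed sea-level band, wall height),
# actions are how much to raise the seawall this decade, in meters.
# All numbers here are invented placeholders, not figures from the study.
ACTIONS = [0.0, 0.5, 1.0]   # raise options per decade
DECADES = range(8)          # the 2020s through the 2090s
SL_BANDS = range(4)         # coarse bands of observed sea-level rise

Q = {}  # Q[(decade, band, wall)][action] -> learned expected future cost

def step(decade, band, wall, action):
    """Simulate one decade: pay construction cost plus any flood damage."""
    new_wall = wall + action
    new_band = min(band + random.choice([0, 0, 1]), max(SL_BANDS))
    build_cost = 20 * action                          # cost of raising the wall
    damage = max(0.0, 10 * new_band - 8 * new_wall)   # damage if the wall is too low
    return new_band, new_wall, build_cost + damage

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(5000):
    band, wall = 0, 0.0
    for decade in DECADES:
        key = (decade, band, wall)
        q = Q.setdefault(key, {a: 0.0 for a in ACTIONS})
        # Epsilon-greedy: mostly pick the cheapest known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps else min(q, key=q.get)
        band, wall, cost = step(decade, band, wall, a)
        nxt = Q.setdefault((decade + 1, band, wall), {a2: 0.0 for a2 in ACTIONS})
        # Costs are minimized, so the update targets the cheapest next action.
        q[a] += alpha * (cost + gamma * min(nxt.values()) - q[a])
```

The learned table maps each observed situation to a preferred raise, which is the essence of a policy that adapts to observations rather than committing to one fixed design up front.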

For their case study, the researchers looked at defenses proposed for low-lying areas in Manhattan. After Hurricane Sandy caused tremendous damage in New York and New Jersey in 2012, the U.S. Army Corps of Engineers proposed constructing a series of seawalls to defend Manhattan called the Big U. Some sections of the Big U are being built and others are in the planning stage. But a final completion plan for the entire system has not yet been set.

The study looked at proposals for the Big U and how defenses against coastal flooding should be changed to respond to future threats to New York from the sea. Flooding is driven both by sea level rise and storms. Both are affected by climate change, which in turn is impacted by global carbon emissions. The researchers wanted to evaluate decision methods every 10 years and allow for adjusting the Big U to reflect available threat data at each interval.

Because the impacts of climate change over a long period remain uncertain, the researchers wanted to study methods of making the best decisions for future changes with the data available at the time. Traditionally, engineers have built protective systems like seawalls and levees to resist historic floods, providing protection from floods that would occur only once in 50 or 100 years. But because the climate is changing, such systems no longer suffice. Building the Big U seawalls to match the highest storm surge over the past 100 years would leave Manhattan vulnerable as climate change drives higher storm surge levels.

The researchers looked at a number of decision-making methods that take into account changing conditions. Most methods allowed for changes based on key variables and some allowed planners to make future projections that would also influence decisions. For example, as a temporary measure, residents could flood-proof their homes (“accommodate”), but eventually, higher sea levels would necessitate a high seawall (“protect”). In some cases, a seawall could prove too costly to protect everyone and residents would be encouraged or compelled to leave threatened communities (“retreat”). In a situation with only a few variables to account for, the benefits of a plan can be estimated relatively easily. But uncertain climate change presents an extremely complex scenario. The researchers showed that reinforcement learning can be used to design integrated strategies, including retreating from low-lying areas, protecting property further inland at higher elevation, and accommodating in between, with a 5-15% reduction in cost compared to the one-dimensional seawall strategy.

Lin, one of the lead researchers, said defending Manhattan is not only complex, but it also requires making difficult decisions under uncertain conditions. For each time interval, planners must make decisions based on observed sea level rise and roughly 80,000 scenarios of future sea level rise and corresponding decision reactions. The difficulty of the decisions compounds as the number of intervals increases.

While climate adaptation decisions are not simple, Lin said reinforcement learning is a highly efficient system for incorporating observations and updating plans to derive optimal solutions for limiting impacts from extreme events. Reinforcement learning also outperforms previous methods by avoiding losses induced by uncertain changes in future global carbon emissions.

“The analysis of New York City’s situation is by no means unique,” said Michael Oppenheimer, a study co-author and professor of geosciences and international affairs at Princeton. “The method can be applied widely, although its benefit compared to other systems of analysis would vary from place to place.”


The paper, “Reinforcement learning-based adaptive strategies for climate change adaptation: An application for coastal flood risk management,” was published March 18 in the Proceedings of the National Academy of Sciences. Besides Kopp, Lin and Oppenheimer, co-authors include Kairui Feng, formerly of Princeton and now at Tongji University, Shanghai, China; and Siyuan Xian of Princeton. The authors are part of the Megalopolitan Coastal Transformation Hub, a Rutgers-led, National Science Foundation-funded consortium of research institutions working to advance the science of how coastal climate hazards, landforms and human decisions interact to shape climate risk and advance climate adaptation in the New York City-New Jersey-Philadelphia region.

