Thursday, May 8, 2025

Ensuring the relevance of climate models to local decision-makers


Climate models are a key technology for predicting the effects of climate change. By simulating Earth’s climate, scientists and policymakers can estimate conditions such as sea level rise, flooding and temperature increases, and make decisions about the appropriate response. However, current climate models struggle to provide this information quickly or cheaply enough to be useful at smaller scales, such as the size of a city.

Now the authors of a new paper have found a method that uses machine learning to leverage the strengths of current climate models while reducing the computational cost of running them.

“This turns traditional wisdom upside down,” says Sai Ravela, principal scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who wrote the paper with EAPS postdoctoral fellow Anamitra Saha.

Conventional wisdom

In climate modeling, downscaling is the process of using a coarse-resolution global climate model to generate finer details over smaller regions. Imagine a digital image: the global model is a large picture of the world with a small number of pixels. To downscale, you zoom in on just the part of the picture you want to look at – for example, Boston. But because the original picture was low resolution, the zoomed-in version is blurry; it doesn’t provide enough detail to be particularly useful.

“If you go from coarse resolution to fine resolution, you have to add information in some way,” explains Saha. Downscaling involves adding this information back by filling in the missing pixels. “Adding information can happen in two ways: either it can come from theory, or it can come from data.”
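As a rough illustration (not from the paper), the sketch below upsamples a hypothetical coarse rainfall grid by interpolation alone. The result is smooth and blurry because interpolation adds no new information, which is exactly why downscaling has to bring in extra detail from theory or from data.

```python
# Minimal sketch: why naive "zooming in" on a coarse climate field looks blurry.
# The grid values here are made-up placeholders, not model output.
import numpy as np
from scipy.ndimage import zoom

# Hypothetical coarse global-model field: 4x4 grid cells of daily rainfall (mm)
coarse = np.array([
    [2.0, 3.0, 5.0, 4.0],
    [1.5, 2.5, 6.0, 5.5],
    [1.0, 2.0, 4.5, 3.0],
    [0.5, 1.0, 2.0, 1.5],
])

# Upsampling by a factor of 8 with cubic interpolation yields a 32x32 grid,
# but it only smooths between the existing cell values.
fine = zoom(coarse, 8, order=3)
print(coarse.shape, "->", fine.shape)

# Interpolation cannot reconstruct sharp local features such as a storm cell;
# that missing detail has to be added from physics (theory) or from data.
```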

Conventional downscaling often involves taking models built on physics (such as the process of air rising, cooling and condensing, or the landscape of an area) and supplementing them with statistics drawn from historical observations. However, this method is computationally intensive: it takes a lot of time and computing power to run, and it is also expensive.

A little bit of both

In their new paper, Saha and Ravela found a way to add the data differently. They used a machine learning technique called adversarial learning, which uses two machines: one generates data to fill in our photo, while the second evaluates the sample by comparing it with real data. If the second machine judges the image to be fake, the first must try again until it convinces the second. The end goal of the process is to create super-resolution data.
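A minimal sketch of that generator-versus-discriminator loop is shown below, written in PyTorch. The network sizes, grid dimensions, and data are illustrative placeholders, not the architecture or training setup used in the paper.

```python
import torch
import torch.nn as nn

COARSE, FINE = 8, 32  # hypothetical coarse and fine grid widths (4x super-resolution)

# Generator: fills in fine-scale detail from a coarse field.
generator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(COARSE * COARSE, 256), nn.ReLU(),
    nn.Linear(256, FINE * FINE),
    nn.Unflatten(1, (1, FINE, FINE)),
)

# Discriminator: judges whether a fine-scale field looks real or generated.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(FINE * FINE, 256), nn.ReLU(),
    nn.Linear(256, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(coarse_batch, real_fine_batch):
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    fake_fine = generator(coarse_batch)
    n = coarse_batch.size(0)

    # Discriminator update: real fields -> 1, generated fields -> 0
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real_fine_batch), torch.ones(n, 1))
              + loss_fn(discriminator(fake_fine.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call the fakes real
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_fine), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with random placeholder tensors standing in for coarse model output
# and fine-resolution observations:
coarse_batch = torch.randn(16, 1, COARSE, COARSE)
real_fine_batch = torch.randn(16, 1, FINE, FINE)
print(train_step(coarse_batch, real_fine_batch))
```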

Using machine learning techniques such as adversarial learning is not a new idea in climate modeling; where it currently struggles is an inability to handle large amounts of basic physics, such as conservation laws. The researchers found that simplifying the physics going in and supplementing it with statistics from historical data was enough to get the results they needed.

“If you enrich machine learning with some information from statistics and simplified physics, suddenly it becomes magical,” Ravela says. He and Saha started by estimating extreme rainfall amounts, removing the more complex physics equations and focusing on water vapor and land topography. They then generated overall rainfall patterns for both mountainous Denver and flat Chicago, using historical data to correct the output. “It gives us extremes, like physics, at a much lower cost. And it gives us speeds similar to statistics, but at much higher resolution.”
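The article does not spell out how the historical correction is done; one standard technique of this kind is quantile mapping, sketched below purely for illustration. It maps model rainfall values onto the observed historical distribution at a location, which is one way such a bias correction could work.

```python
import numpy as np

def quantile_map(model_values, model_hist, obs_hist, n_quantiles=100):
    """Map model rainfall values onto the observed historical distribution.

    model_values: values to correct (e.g., downscaled rainfall at one location)
    model_hist:   historical model output at the same location
    obs_hist:     historical observations at that location
    """
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)   # model's historical quantiles
    obs_q = np.quantile(obs_hist, q)       # observed historical quantiles
    # Find each value's rank in the model climatology, then read off
    # the observed value at the same rank.
    ranks = np.interp(model_values, model_q, q)
    return np.interp(ranks, q, obs_q)

# Illustrative synthetic data: a model that systematically underestimates heavy rain.
rng = np.random.default_rng(0)
obs_hist = rng.gamma(shape=2.0, scale=6.0, size=5000)           # "observed" daily rainfall (mm)
model_hist = 0.7 * rng.gamma(shape=2.0, scale=6.0, size=5000)   # biased model climatology
raw = np.array([5.0, 20.0, 45.0])
print(np.round(quantile_map(raw, model_hist, obs_hist), 1))     # corrected toward observed values
```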

Another unexpected benefit of the approach was how little training data it required. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. Training takes only a few hours, and results can be produced in minutes, an improvement over the months other models take to run.

Quick risk assessment

The ability to run models quickly and frequently is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: by seeing how extreme weather events will affect the country, decisions about which crops should be grown or where populations should migrate can be made as quickly as possible, taking into account a very wide range of conditions and uncertainties.

“We can’t wait months or years to be able to quantify this risk,” he says. “You have to look far into the future and pay attention to many uncertainties to be able to tell which decision might be the right one.”

While the current model only accounts for extreme rainfall events, the next step in the project is to train it on other critical events, such as tropical storms, winds and temperature. With a more robust model, Ravela hopes to apply it to other places, such as Boston and Puerto Rico, as part of a Climate Grand Challenges project.

“We are very excited about both the methodology we have developed and the potential applications it can lead to,” he says.
