Monday, December 23, 2024

A new AI tool generates realistic satellite images of future floods


Visualizing a hurricane’s potential impact on people’s homes before it hits can help residents prepare and decide whether to evacuate.

MIT researchers have developed a method that generates satellite images from the future to visualize what a region would look like after a potential flood. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic bird’s-eye images of a region, showing where flooding is likely to occur given the strength of an approaching storm.

As a test, the team applied the method to Houston and generated satellite images showing what specific locations in the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images that were made without the physics-based flood model.

The team’s physics-based method generated satellite images of future floods that were more realistic and accurate. The AI-only method, by contrast, generated images of flooding in places where flooding is not physically possible.

The team’s method is a proof of concept, meant to demonstrate a case where generative AI models can produce realistic, trustworthy content when combined with a physics-based model. To apply the method to other regions and depict flooding from future storms, it would need to be trained on many more satellite images to learn what flooding would look like in those regions.

“The idea is: One day we could use this before a hurricane, where it would provide society with an additional layer of visualization,” says Björn Lütjens, a postdoctoral fellow in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Perhaps this can be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they call the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results in a study published today. Co-authors of the study from MIT include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, AeroAstro professor and director of the MIT Media Lab; along with collaborators from several other institutions.

Generative adversarial images

The new study continues the team’s efforts to apply generative artificial intelligence tools to visualize future climate scenarios.

“Providing a hyperlocal perspective on climate appears to be the most effective way to communicate our scientific results,” says Newman, senior author of the study. “People relate to their own postcode, the local environment where their family and friends live. Delivering local climate simulations becomes intuitive, personal and relatable.”

In this study, the authors employ a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first, “generator,” network is trained on pairs of real data, such as satellite images of a region before and after a hurricane. A second, “discriminator,” network is then trained to distinguish real satellite images from those synthesized by the first network.

Each network automatically improves its performance based on feedback from the other. The idea is that this adversarial push and pull should ultimately produce synthetic images that are indistinguishable from real ones. Even so, GANs can still produce “hallucinations”: non-factual features in an otherwise realistic image that shouldn’t be there.
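To make that setup concrete, here is a minimal, illustrative conditional-GAN training loop in PyTorch. The tiny networks, random stand-in tensors, and hyperparameters are assumptions for the sketch only, not the architecture or data the researchers used.

```python
import torch
import torch.nn as nn

# Minimal illustrative conditional GAN: the generator maps a pre-storm image
# (the condition) to a synthetic post-storm image, and the discriminator
# judges whether a (pre, post) pair is real or generated. All shapes, layer
# sizes, and data here are placeholders, not the study's actual setup.

class Generator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pre):
        return self.net(pre)

class Discriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, pre, post):
        return self.net(torch.cat([pre, post], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Random tensors stand in for paired pre-/post-storm satellite image patches.
pre_img, post_img = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)

for step in range(100):
    # Discriminator step: real pairs labeled 1, generated pairs labeled 0.
    fake = G(pre_img).detach()
    d_loss = bce(D(pre_img, post_img), torch.ones(4, 1)) + \
             bce(D(pre_img, fake), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    g_loss = bce(D(pre_img, G(pre_img)), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each optimizer step nudges one network against the other, which is the "opposing pressure" described above.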

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided so that generative AI tools could be trusted to help inform people, especially in risk-sensitive scenarios. “We wondered: How can we use these generative AI models in the context of climate impacts, where it’s so important to have trusted data sources?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative artificial intelligence is tasked with creating satellite images of future floods that are reliable enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

Typically, decision-makers can get an idea of where flooding may occur from visualizations in the form of color-coded maps. These maps are the end product of a chain of physical models that usually starts with a hurricane track model, which feeds into a wind model simulating the distribution and strength of winds over the local region. This is combined with a storm surge model that predicts how the winds might push a nearby body of water onto land. A hydraulic model then maps where flooding will occur based on the local flood infrastructure, producing a visual, color-coded map of flood elevations for the region.
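As a rough mental model of that chain, the sketch below strings together toy stand-ins for the track, wind, surge, and hydraulic stages. Every function, parameter, and threshold here is a hypothetical placeholder chosen for illustration; none of it corresponds to the actual forecasting models.

```python
import numpy as np

# Illustrative stand-in for the physics modeling chain described above:
# hurricane track -> wind field -> storm surge -> hydraulic flood map.
# All functions and numbers are hypothetical placeholders.

def wind_field(track_points, grid_shape=(100, 100)):
    """Toy wind speeds that decay with distance from the storm track."""
    yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    wind = np.zeros(grid_shape)
    for (ty, tx, intensity) in track_points:
        dist = np.hypot(yy - ty, xx - tx)
        wind = np.maximum(wind, intensity * np.exp(-dist / 20.0))
    return wind

def storm_surge(wind, coast_distance):
    """Toy surge height: stronger winds near the coast push more water inland."""
    return 0.05 * wind * np.exp(-coast_distance / 10.0)

def flood_depth(surge, elevation):
    """Toy hydraulic step: water depth where surge exceeds ground elevation."""
    return np.clip(surge - elevation, 0.0, None)

# Hypothetical inputs: a short storm track, terrain, and distance to the coast.
track = [(10, 10, 60.0), (40, 30, 80.0), (70, 60, 50.0)]
elevation = np.random.rand(100, 100) * 3.0          # meters above sea level
coast_distance = np.tile(np.arange(100), (100, 1))  # columns farther from coast

depth = flood_depth(storm_surge(wind_field(track), coast_distance), elevation)
flood_mask = depth > 0.1  # cells a color-coded flood map would highlight
```

The output of such a chain is the kind of flood-extent map that the researchers' satellite visualizations are meant to complement.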

“The question is: Can satellite image visualizations add another layer to this that is a little more tangible and emotionally engaging than a color-coded map of red, yellow and blue, while still being trustworthy?” says Lütjens.

The team first tested how generative AI alone could create satellite images of future floods. They trained the GAN on actual satellite images captured over Houston before and after Hurricane Harvey. When they asked the generator to create new images of flooding in the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinated flooding in some images where flooding should not be possible (for example, at higher elevations).

To reduce hallucinations and increase the reliability of the AI-generated images, the team combined the GAN with a physics-based flood model that takes into account real parameters and physical phenomena, such as the trajectory of an approaching hurricane, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as the flood model predicted.
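One simple way to picture this coupling: the physics model’s flood-extent map can be handed to the generator as an extra input channel, and generated flooding can be confined to that extent. The sketch below assumes this particular conditioning and masking scheme purely for illustration; it is not the study’s exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative physics-conditioned generator: alongside the pre-storm image,
# it receives a binary flood-extent mask produced by a physics-based flood
# model, so generated flooding is constrained to physically plausible areas.
# The layer sizes and the final masking step are assumptions for illustration.

class PhysicsConditionedGenerator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pre_image, flood_mask):
        x = torch.cat([pre_image, flood_mask], dim=1)  # mask as extra channel
        out = self.net(x)
        # Only alter pixels the physics model marks as flooded; keep the
        # original imagery everywhere else, suppressing hallucinated water.
        return flood_mask * out + (1.0 - flood_mask) * pre_image

pre_image = torch.rand(1, 3, 64, 64)                    # pre-storm patch
flood_mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # physics-model extent
post_image = PhysicsConditionedGenerator()(pre_image, flood_mask)
```

Tying the generated pixels to the physics model’s flood extent is what keeps the output consistent, pixel by pixel, with where flooding is actually predicted.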

“We are demonstrating a tangible way to combine machine learning with physics in a risk-sensitive use case that requires us to analyze the complexity of Earth’s systems and design future actions and possible scenarios to protect people from danger,” says Newman. “We look forward to putting our generative AI tools in the hands of decision-makers at the local community level, which could make a significant difference and possibly save lives.”

The research was supported in part by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA and Google Cloud.
