Hyderabad: MIT scientists have developed a new tool that not only predicts how a region would be affected by a flooding event but also depicts that potential future with satellite imagery. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird's-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.
The team tested the method on Houston, generating satellite images of the city after a storm similar to Hurricane Harvey in 2017 and comparing them with actual satellite images and with AI-generated images produced without the physics-based flood model. As expected, the team's physics-reinforced method produced more realistic and accurate satellite images of future flooding, whereas the AI-only method depicted floods in locations where flooding is not physically possible.
This proof-of-concept shows that combining generative AI with physics-based models can generate trustworthy content, but further training on additional satellite images is needed for broader applications.
Björn Lütjens, a postdoc in MIT's Department of Earth, Atmospheric and Planetary Sciences who led the research as a doctoral student in MIT's Department of Aeronautics and Astronautics (AeroAstro), suggested that one day their method could be used ahead of a hurricane to provide an additional visualization layer for the public. He noted that one of the biggest challenges is encouraging people to evacuate when they are at risk, and expressed hope that this visualization could help increase that readiness.
The team has named their new method the "Earth Intelligence Engine" and made it available online for public use. Their findings, including contributions from MIT co-authors and collaborators from various institutions, are published today in the journal IEEE Transactions on Geoscience and Remote Sensing.
Newman, the study’s senior author, explained that providing a hyper-local perspective of climate is an effective way to communicate scientific results. People tend to relate more to their own zip code and the local environment, where their family and friends live. Local climate simulations, therefore, become intuitive, personal, and relatable.
The study uses a conditional generative adversarial network (GAN), in which two neural networks compete: a generator, trained on pairs of real data, synthesizes images, while a discriminator learns to distinguish real images from synthesized ones. Through this competition, the generator is pushed to produce synthetic images that are indistinguishable from real ones, though "hallucinations," or incorrect features, can still occur. Lütjens highlighted the challenge of avoiding these hallucinations to ensure generative AI tools provide trustworthy information, especially in risk-sensitive scenarios.
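The adversarial training described above can be made concrete with a small numerical sketch. The snippet below is an illustration, not the paper's implementation: it computes the standard binary cross-entropy losses that drive a GAN's two networks, using made-up discriminator scores. The discriminator is rewarded for labeling real images as 1 and synthesized images as 0, while the generator is rewarded for making the discriminator assign 1 to its fakes.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy, clipped for numerical stability."""
    p = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

# Hypothetical discriminator scores on a batch of real and synthesized images
d_real = np.array([0.9, 0.8, 0.95])   # scores on real satellite images
d_fake = np.array([0.1, 0.2, 0.05])   # scores on generated images

# Discriminator loss: label real images as 1 and generated images as 0
d_loss = bce(d_real, np.ones(3)) + bce(d_fake, np.zeros(3))

# Generator loss: fool the discriminator into labeling fakes as 1
g_loss = bce(d_fake, np.ones(3))

print(d_loss, g_loss)
```

Here the discriminator is confidently correct, so its loss is small, while the generator's loss is large; in practice each network's parameters are updated in alternation to minimize its own loss, driving the scores on fakes toward those on real images.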