Governing Generative AI in Disaster Risk Management

Abstract

The increasing frequency and severity of climate-related disasters, together with the scarcity of resources to counter them, highlight the urgent need for advanced tools for assessing and managing natural hazards. Recent developments in generative artificial intelligence (GenAI) offer new avenues to enhance disaster risk management. Among these advancements, large language models (LLMs) hold potential for improving situational awareness, risk management, and the communication of early warnings and forecasts. New forms of agentic AI expand these capabilities further by combining LLMs with memory, planning, and tool use, enabling them to support operational decisions even more effectively. While AI's role in forecasting and risk modeling is well explored, GenAI raises new and urgent challenges concerning bias, explainability, fair access, and trust. In this perspective piece, we critically examine both the operational potential and the ethical challenges of integrating GenAI into disaster risk workflows, focusing on how these technologies can support practitioners and policymakers. Drawing on recent literature, expert discussions, and a dedicated survey distributed during a disaster-related event organized by the European Commission, we underline the necessity of embedding human oversight, transparency, and cultural sensitivity into such systems. We stress that realizing the advantages of GenAI will require coordinated collaboration across fields, improved interdisciplinary capacity building, and policy frameworks that ensure reliability, fairness, and practical usefulness from design to deployment.