Rapid post-earthquake building damage mapping with object-oriented SAM fine-tuning: a case study of the 2023 Jishishan earthquake

Abstract

Earthquakes pose severe risks to lives and infrastructure, demanding rapid and accurate building-damage assessments to guide emergency response. High-spatial-resolution (HSR) imagery is essential for this task, yet conventional convolutional neural networks (CNNs) struggle to capture long-range dependencies, especially in dense urban fabrics. We introduce S2AC-Net, an operational framework that fuses global context with local features for post-earthquake damage mapping. In S2AC-Net, the Segment Anything Model (SAM) replaces traditional multi-resolution segmentation for pre-event imagery: a frozen Vision Transformer encoder with lightweight adapters produces building-probability maps, which are merged with SAM masks to delineate building footprints. Spectral and texture features from pre- and post-event images are then mapped to these objects to form feature vectors, which, together with field-survey labels, train a CNN classifier for multi-class damage grading. Applied to the December 2023 Jishishan earthquake (Gansu, China), S2AC-Net achieved an overall accuracy of 0.928 for building localization and an overall F1 score of 0.882 for damage assessment. These results demonstrate that integrating SAM-based global context with object-level feature fusion overcomes key limitations of purely CNN-based pipelines and provides a scalable pathway for rapid, reliable remote-sensing-based disaster assessment and urban resilience planning.
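
The adapter-based SAM fine-tuning described above can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation; it is a minimal sketch assuming a standard bottleneck-adapter design, and all module names, dimensions, and the toy two-block encoder are hypothetical choices for illustration. It shows the pattern the abstract describes: the pretrained transformer blocks stay frozen, small adapters are the only trainable parameters, and a lightweight head turns patch tokens into a building-probability map.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck adapter (hypothetical design): down-project, GELU,
    # up-project, residual connection.
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    # Wraps a frozen pretrained transformer block with a trainable adapter.
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # freeze the pretrained encoder weights
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

# Toy stand-in for a SAM-style ViT encoder: two transformer blocks over patch tokens.
dim, grid = 256, 16
blocks = [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
          for _ in range(2)]
encoder = nn.Sequential(*[AdaptedBlock(b, dim) for b in blocks])
head = nn.Conv2d(dim, 1, kernel_size=1)  # per-pixel building-probability head

tokens = torch.randn(1, grid * grid, dim)           # dummy patch embeddings
feat = encoder(tokens)                              # adapters refine frozen features
feat = feat.transpose(1, 2).reshape(1, dim, grid, grid)
prob_map = torch.sigmoid(head(feat))                # building-probability map

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable (adapter) params: {trainable:,} of {total:,}")

Freezing the encoder preserves SAM's pretrained global context while training only a small fraction of the parameters, which is what makes rapid, task-specific fine-tuning feasible in a time-critical disaster-response setting.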
