A Multi-Source Fusion 3D Reconstruction Method for Laser Demolition Rescue Environments

Abstract

This paper proposes a multi-source fusion-based 3D reconstruction method to address the insufficient real-time performance and low reliability of scene modeling in laser demolition rescue scenarios, where complex environmental factors such as structural collapse, smoke occlusion, and dense dynamic obstacles are common. First, spatiotemporal synchronization and calibration are performed on the LiDAR, infrared camera, and IMU sensors to ensure data consistency. Second, a multimodal feature extraction and fusion network is designed to integrate complementary sensor data at the feature level, incorporating an attention mechanism to enhance informative features and suppress noise. Finally, a real-time optimized reconstruction algorithm uses the fused data to produce an accurate, complete, and dynamic 3D reconstruction of the rescue scene, supporting reliable decision-making and path planning. Experimental results show that the proposed method outperforms single-sensor approaches across key metrics: the reconstruction RMSE is reduced to 11.233, corresponding to reductions of 44.4% and 60.0% relative to the single-LiDAR and single-infrared-camera methods, respectively. These results indicate that the proposed method achieves comprehensive improvements in 3D reconstruction accuracy, completeness, and real-time performance under laser demolition rescue conditions.
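The abstract does not include code; as a rough illustration of the attention-based feature-level fusion it describes, the sketch below gates LiDAR and infrared feature maps with learned per-modality weights so that the more informative modality dominates the fused representation. All class, function, and tensor names here are hypothetical and chosen for illustration only; they are not taken from the paper, and the actual network design may differ substantially.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Channel-attention fusion of two modality feature maps of shape (B, C, H, W)."""

    def __init__(self, channels: int):
        super().__init__()
        # Learned soft weights over the two modalities, computed from
        # globally pooled features of both inputs.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 2),
            nn.Softmax(dim=-1),
        )

    def forward(self, lidar_feat: torch.Tensor, ir_feat: torch.Tensor) -> torch.Tensor:
        # Global average pooling summarizes each modality into a vector.
        pooled = torch.cat(
            [lidar_feat.mean(dim=(2, 3)), ir_feat.mean(dim=(2, 3))], dim=1
        )  # (B, 2C)
        w = self.gate(pooled)  # (B, 2) soft weights per modality
        w_lidar = w[:, 0].view(-1, 1, 1, 1)
        w_ir = w[:, 1].view(-1, 1, 1, 1)
        # Weighted sum emphasizes the more informative modality and
        # suppresses the noisier one (e.g. infrared under heavy smoke).
        return w_lidar * lidar_feat + w_ir * ir_feat


if __name__ == "__main__":
    fuse = AttentionFusion(channels=64)
    lidar = torch.randn(2, 64, 32, 32)  # projected LiDAR feature map (hypothetical)
    ir = torch.randn(2, 64, 32, 32)     # infrared feature map (hypothetical)
    print(fuse(lidar, ir).shape)        # torch.Size([2, 64, 32, 32])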
