Sensor Fusion Techniques for Robust Autonomous Navigation in Unstructured Environments

Abstract

Autonomous navigation in unstructured environments is challenging due to sensor noise, environmental variability, and the limitations of single-modality perception. This paper presents a robust multi-sensor fusion framework that combines LiDAR, camera, inertial measurement unit (IMU), and global navigation satellite system (GNSS) data for accurate real-time navigation under diverse and uncertain conditions. The framework couples probabilistic state estimation via an Extended Kalman Filter (EKF) with a deep learning-based feature extraction layer that provides adaptive confidence weighting and nonlinear error minimization. Sensor data are preprocessed, temporally synchronized, and fused through an adaptive covariance model that adjusts dynamically to signal degradation or occlusion. Experimental validation across indoor, outdoor, and semi-urban terrains showed a 50% improvement in localization accuracy and a 10% reduction in drift compared with classical fusion frameworks. The method achieved a map consistency of 96.8% and maintained reliable performance under partial GNSS loss, demonstrating its suitability for real-time deployment on embedded robotic platforms. This study establishes that intelligent sensor fusion with adaptive uncertainty modeling forms the basis for resilient autonomy in unstructured and dynamic environments.
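
The abstract does not give the exact form of the adaptive covariance weighting, so the following is only a minimal sketch of the general idea: an EKF measurement update in which the nominal measurement noise covariance is inflated when a sensor's confidence score drops (e.g., under GNSS degradation or camera occlusion). The function name `ekf_update`, the confidence-to-covariance scaling, and the example state layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R_nominal, confidence):
    """Single EKF measurement update with confidence-scaled covariance.

    The nominal measurement noise R_nominal is inflated as the sensor's
    confidence score (0..1) falls, so degraded or occluded sensors
    contribute less to the fused state estimate. (Illustrative sketch;
    the paper's actual weighting model may differ.)
    """
    eps = 1e-3                         # avoid division by zero for confidence ~ 0
    R = R_nominal / max(confidence, eps)

    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y                  # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: fuse a 2-D GNSS position fix into a 4-D state [px, py, vx, vy].
x = np.array([0.0, 0.0, 1.0, 0.0])           # prior state
P = np.diag([1.0, 1.0, 0.5, 0.5])            # prior covariance
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # position-only measurement model
h = lambda s: H @ s
R_gnss = np.diag([2.0, 2.0])                 # nominal GNSS noise (m^2)

# Low confidence (e.g., partial GNSS loss) inflates R and down-weights the fix.
x_upd, P_upd = ekf_update(x, P, np.array([0.8, -0.3]), h, H, R_gnss, confidence=0.3)
print(x_upd)
```

With a confidence of 0.3 the GNSS covariance is roughly tripled, so the update pulls the state only weakly toward the noisy fix; as confidence approaches 1 the update reduces to a standard EKF correction.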
