Resilient Localization for Mobile Robots Using Multi-Sensor Fusion and a Hybrid Learning-Filtering Framework

Abstract

Accurate and resilient localization is essential for autonomous mobile robots operating in diverse real-world environments. Single-sensor approaches are vulnerable to noise, sensor failures, and unreliability across varying conditions. This study presents a comprehensive multi-sensor fusion framework that combines filtering and machine learning to enhance localization accuracy and resilience. By fusing sensors with complementary characteristics, namely LiDAR, visual cameras, GPS, UWB, and an IMU, the system can operate in varied settings such as daytime, nighttime, indoor, and outdoor environments. Three strategies are investigated: (i) a baseline standard Extended Kalman Filter (EKF); (ii) a novel dual-stage iterative EKF that refines LiDAR-Inertial Odometry (LIO) and Visual-Inertial Odometry (VIO) separately; and (iii) a hybrid learning-filtering technique, in which a Long Short-Term Memory (LSTM) network predicts robot velocities/positions and an EKF integrates the predictions. Experimental validation on a simulated robot in ROS-Gazebo demonstrated significant improvements. The standard EKF yielded a localization error of 0.226 m Root Mean Square Error (RMSE), whereas the post-EKF stage of the dual-stage approach reduced the RMSE to 0.014 m. The hybrid LSTM-EKF model achieved an RMSE of 0.197 m during sensor failures, compared with 0.144 m during smooth operation. By substantially improving localization accuracy and resilience against sensor failure and degradation, this framework offers a scalable solution for reliable autonomous navigation.
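The abstract describes the hybrid LSTM-EKF strategy only at a high level. The following is a minimal sketch of that idea, assuming a 2D position state, a direct position measurement (e.g. a GPS or UWB fix), and a hypothetical `lstm_predict_velocity` stand-in for the paper's trained network; the actual state vector, network architecture, and sensor models are not specified here.

```python
import numpy as np

def lstm_predict_velocity(history):
    # Hypothetical stand-in for the trained LSTM: here we simply average
    # recent velocity estimates. A real implementation would run a trained
    # recurrent network over the sensor/odometry history.
    return np.mean(history, axis=0)

class HybridLstmEkf:
    """Minimal 2D EKF whose motion prediction comes from a learned
    velocity model, with position fixes applied as Kalman updates."""

    def __init__(self, x0, P0, q=0.01, r=0.05):
        self.x = np.asarray(x0, dtype=float)   # state: [px, py]
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.Q = q * np.eye(2)                 # process noise covariance
        self.R = r * np.eye(2)                 # measurement noise covariance
        self.H = np.eye(2)                     # position observed directly

    def predict(self, vel_history, dt):
        v = lstm_predict_velocity(vel_history)  # learned velocity estimate
        self.x = self.x + v * dt                # constant-velocity propagation
        self.P = self.P + self.Q                # F = I for this linear model

    def update(self, z):
        # Standard Kalman update; skipped when a sensor drops out, in which
        # case the learned prediction alone carries the estimate forward.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

if __name__ == "__main__":
    ekf = HybridLstmEkf(x0=[0.0, 0.0], P0=np.eye(2))
    vel_history = [np.array([0.5, 0.1])] * 5     # recent velocity estimates
    ekf.predict(vel_history, dt=0.1)
    ekf.update(np.array([0.052, 0.011]))         # noisy position fix
    print("fused position:", ekf.x)
```

Under this reading, resilience to sensor failure comes from the division of labor: the learned model supplies the motion prediction even when measurements are unavailable, and the EKF fuses measurements whenever they return.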
