Lidar-Inertial SLAM Method Integrated with Visual QR Codes for Indoor Mobile Robots


Abstract

Multi-modal sensor fusion-based LiDAR SLAM is a critical capability for mobile robots to achieve autonomous navigation in complex environments. However, challenges persist in indoor settings with long corridors, dynamic objects, repetitive, symmetric, or highly similar structures, and sparse features, which often degrade localization accuracy and cause errors to accumulate. To address these issues, we propose a LiDAR-inertial SLAM method enhanced with visual QR codes. Specifically, our approach constructs a SLAM framework comprising a front-end LiDAR-inertial odometry module based on an Extended Kalman Filter (EKF) and a back-end global factor graph optimization. By incorporating visual QR codes as additional landmarks in the back-end, the framework not only supplies extra localization references for LiDAR SLAM but also improves system stability in feature-sparse environments. Integrating environmental features derived from multiple sensors improves the accuracy and robustness of both robot localization and mapping while mitigating localization errors caused by insufficient LiDAR features. The proposed system has been extensively tested in various indoor scenarios. Experimental results demonstrate that the proposed LiDAR-inertial SLAM method with visual QR codes significantly enhances localization accuracy and strengthens the stability and robustness of map construction under feature-sparse and dynamic indoor conditions.
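To make the back-end idea concrete, the following is a minimal sketch of a factor graph in which QR codes act as additional landmarks. It assumes GTSAM's Python bindings and a 2D pose-graph simplification; the variable names, noise sigmas, and measurements are illustrative placeholders, not values from the paper.

```python
# Sketch: pose-graph back-end with a QR code as an extra landmark (GTSAM, 2D).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models: odometry from the front-end EKF, QR detections, and anchors.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10, 0.10, 0.05]))
tag_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.01]))
anchor_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-6, 1e-6, 1e-6]))

X = lambda i: gtsam.symbol('x', i)   # robot poses
T = lambda j: gtsam.symbol('t', j)   # QR-code landmark poses

# Anchor the first robot pose at the origin.
graph.add(gtsam.PriorFactorPose2(X(0), gtsam.Pose2(0, 0, 0), anchor_noise))
initial.insert(X(0), gtsam.Pose2(0, 0, 0))

# Odometry chain supplied by the LiDAR-inertial front-end (1 m steps in x).
for i in range(1, 4):
    graph.add(gtsam.BetweenFactorPose2(
        X(i - 1), X(i), gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    initial.insert(X(i), gtsam.Pose2(float(i), 0.0, 0.0))

# A QR code with an assumed known map pose: a strong prior pins the tag, and
# the camera detection adds a relative-pose factor to the observing pose.
tag_map_pose = gtsam.Pose2(2.0, 1.0, 0.0)  # assumed surveyed map position
graph.add(gtsam.PriorFactorPose2(T(0), tag_map_pose, anchor_noise))
initial.insert(T(0), tag_map_pose)
graph.add(gtsam.BetweenFactorPose2(
    X(2), T(0), gtsam.Pose2(0.0, 1.0, 0.0), tag_noise))  # detection at x2

# Global optimization jointly corrects the trajectory.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(X(2)))
```

The design intuition is that the strongly priored tag pose ties the graph to the map frame, so each QR detection pulls drifting odometry back toward an absolute reference, which is exactly the extra localization cue that matters when LiDAR features are sparse or ambiguous.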
