LR-SLAM: An Efficient Dynamic SLAM System for Low-Resolution RGB-D Cameras

Abstract

In Simultaneous Localization and Mapping (SLAM) for dynamic environments, a pivotal challenge is mitigating the impact of environmental dynamics without incurring excessive resource consumption or degrading real-time performance. Existing solutions typically focus on enhancing algorithmic precision and efficiency, often overlooking the sensor side of the problem. To address this, we introduce LR-SLAM, a vision-based SLAM system tailored for indoor dynamic environments and compatible with low-resolution cameras. The system achieves computational and energy efficiency while maintaining satisfactory localization accuracy without requiring high-resolution cameras. LR-SLAM employs the GCNv2 network for feature extraction and integrates an adaptive non-maximum suppression algorithm based on range trees to distribute feature points uniformly. Furthermore, we propose a dynamic feature point elimination strategy that combines lightweight object detection, epipolar constraints, and probabilistic modeling. Evaluated on the TUM datasets as well as in real-world scenarios, LR-SLAM demonstrates strong localization accuracy and robustness in dynamic environments while relying solely on low-resolution cameras. This research not only offers a new perspective and solution in the field of dynamic SLAM but also significantly expands the application potential of low-resolution cameras in this domain.
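The epipolar-constraint component of the elimination strategy can be illustrated with a minimal sketch. The idea, common to dynamic SLAM pipelines, is that a matched feature point on a static surface should lie close to the epipolar line induced by the fundamental matrix between two frames, while points on moving objects violate this constraint. The function names, the 1-pixel threshold, and the plain-list representation below are illustrative assumptions, not the paper's implementation:

```python
import math

def epipolar_distance(p1, p2, F):
    """Distance (pixels) from p2 to the epipolar line of p1 in image 2.

    p1, p2 are (x, y) pixel coordinates; F is a 3x3 fundamental
    matrix given as nested lists, mapping image-1 points to
    epipolar lines in image 2 (l = F @ [x1, y1, 1]).
    """
    x1 = (p1[0], p1[1], 1.0)
    # Epipolar line l = (a, b, c): a*x + b*y + c = 0 in image 2.
    l = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    num = abs(l[0] * p2[0] + l[1] * p2[1] + l[2])
    den = math.hypot(l[0], l[1])
    return num / den if den > 0 else float("inf")

def filter_dynamic_points(matches, F, thresh=1.0):
    """Split matched pairs into (static, dynamic) by epipolar residual."""
    static, dynamic = [], []
    for p1, p2 in matches:
        if epipolar_distance(p1, p2, F) < thresh:
            static.append((p1, p2))
        else:
            dynamic.append((p1, p2))
    return static, dynamic
```

For a camera translating purely along the x-axis, F is the skew-symmetric matrix of (1, 0, 0), so epipolar lines are horizontal: a static point keeps its y-coordinate between frames, while a match whose y-coordinate shifts is flagged as dynamic. In a full system this residual would feed the probabilistic model described in the abstract rather than a hard threshold.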
