An accurate and robust RGB-D visual SLAM method in dynamic environments

Abstract

Traditional SLAM systems achieve good robustness in static environments, but dynamic objects in real-world scenes can significantly degrade their localization accuracy. This paper introduces a dynamic SLAM system that combines semantic segmentation with epipolar constraints. The system detects and removes dynamic feature points, achieving good localization performance and generating dense point cloud maps in dynamic environments. First, the system employs the YOLOv5 deep learning network to extract semantic information from images and generate prior semantic masks for dynamic objects. Next, a novel method for eliminating dynamic feature points is introduced: using an adaptive threshold correlated with depth, it combines the semantic priors with epipolar-constraint geometry to assess the motion state of each feature point and remove the dynamic ones. Finally, a dense point cloud map is produced in dynamic environments by fusing depth information with semantic information. Experiments on the TUM dataset show that, compared to ORB-SLAM2, the proposed system improves localization accuracy by an average of 95% on highly dynamic sequences, demonstrating the algorithm's effectiveness in dynamic environments.
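To make the dynamic-point test concrete, below is a minimal C++/OpenCV sketch of the check the abstract describes: a semantic prior mask combined with an epipolar-constraint residual under a depth-adaptive threshold. The paper's implementation is not published here, so the function name, the inverse-depth form of the threshold, and the parameter names are illustrative assumptions, not the authors' code.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Sketch: classify a matched feature point as dynamic by combining a semantic
// prior mask with an epipolar-constraint residual. The adaptive-threshold
// form (tolerance scaled by inverse depth) is an assumption for illustration.
bool isDynamicPoint(const cv::Point2f& p1,        // feature in previous frame
                    const cv::Point2f& p2,        // matched feature in current frame
                    const cv::Mat& F,             // fundamental matrix (3x3, CV_64F)
                    const cv::Mat& semanticMask,  // CV_8U; nonzero = prior dynamic object
                    float depth,                  // depth of p2 in meters
                    double baseThresh = 1.0)      // pixel tolerance at 1 m depth
{
    // Semantic prior: points inside a detected dynamic-object mask
    // (e.g., a person segmented via YOLOv5) are treated as dynamic outright.
    if (semanticMask.at<uchar>(cv::Point(p2)) > 0)
        return true;

    // Epipolar line in the current image: l = F * p1 (homogeneous coordinates).
    cv::Mat x1 = (cv::Mat_<double>(3, 1) << p1.x, p1.y, 1.0);
    cv::Mat l = F * x1;
    double a = l.at<double>(0), b = l.at<double>(1), c = l.at<double>(2);

    // Distance of p2 from its epipolar line; a static point should lie on it.
    double dist = std::abs(a * p2.x + b * p2.y + c) / std::sqrt(a * a + b * b);

    // Assumed depth-adaptive rule: distant points exhibit less image motion,
    // so the tolerated residual shrinks as depth grows.
    double thresh = baseThresh / std::max(depth, 0.1f);
    return dist > thresh;
}
```

In a full pipeline, F would typically be estimated with RANSAC from tentatively static matches (e.g., cv::findFundamentalMat), and points flagged as dynamic would be excluded from both pose optimization and the dense point cloud map.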