ZLY-SLAM: RGB-D dense semantic SLAM system based on point and line features in dynamic environments
Abstract
Visual simultaneous localisation and mapping (visual SLAM) is a key technology that enables a robot to construct a map of its environment from visual data while simultaneously determining its own position within that map. However, traditional SLAM systems typically assume a static environment and degrade significantly in dynamic scenes, which restricts their practical applications. To address this problem, this paper proposes an RGB-D dense semantic SLAM system based on point and line features. In the feature extraction stage, the system replaces traditional ORB features with the ZippyPoint feature network and combines it with LSD line features to improve robustness and matching accuracy in complex scenes. Meanwhile, dynamic objects are finely segmented by integrating the YOLOv9c-seg network, and a dynamic feature filtering method combining semantic information with epipolar geometric constraints is proposed, which effectively removes dynamic point and line features and significantly reduces localisation error. To compensate for the limited detail of sparse point cloud maps, low-noise, high-precision dense point cloud maps are constructed through ghosting-elimination and filtering-optimisation strategies. Experimental results on the TUM RGB-D dataset and a self-collected dataset show that the proposed system improves localisation accuracy on the TUM highly dynamic sequences by an average of 97.22% compared with ORB-SLAM3, and it also exhibits good adaptability and stability on the self-collected dataset, with overall performance superior to that of many other state-of-the-art dynamic SLAM systems.
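As a rough illustration of the dynamic-feature filtering idea summarised in the abstract (semantic masks combined with an epipolar-constraint check), the sketch below flags matched keypoints as dynamic when they fall on a segmented dynamic object or lie far from their epipolar line. It is a minimal Python/OpenCV sketch under assumed interfaces: the function name, the mask format, and the pixel threshold are illustrative assumptions, not the paper's implementation, which also screens line features inside the SLAM front end.

import cv2
import numpy as np

def filter_dynamic_features(pts_prev, pts_curr, dyn_mask, epi_thresh=1.0):
    """Return a boolean array marking which matched keypoints look dynamic.

    pts_prev, pts_curr : (N, 2) float32 pixel coordinates of matched keypoints
                         in the previous and current frames
    dyn_mask           : HxW uint8 mask, non-zero where the instance segmenter
                         (e.g. YOLOv9c-seg) labels an a-priori dynamic object
    epi_thresh         : epipolar distance threshold in pixels (assumed value)
    """
    # Sample the semantic mask at the current keypoint locations.
    xs = np.clip(pts_curr[:, 0].astype(int), 0, dyn_mask.shape[1] - 1)
    ys = np.clip(pts_curr[:, 1].astype(int), 0, dyn_mask.shape[0] - 1)
    on_dynamic_object = dyn_mask[ys, xs] > 0

    # Estimate the fundamental matrix between the two frames with RANSAC.
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None or F.shape != (3, 3):
        return on_dynamic_object  # fall back to the semantic test alone

    # Distance of each current point to the epipolar line of its previous match.
    ones = np.ones((len(pts_prev), 1), dtype=np.float64)
    p1 = np.hstack([pts_prev, ones])          # homogeneous previous points
    p2 = np.hstack([pts_curr, ones])          # homogeneous current points
    lines = p1 @ F.T                          # epipolar lines l_i = F * p1_i
    dist = np.abs(np.sum(lines * p2, axis=1)) / (
        np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12)

    # Dynamic if on a segmented dynamic object or violating the epipolar constraint.
    return on_dynamic_object | (dist > epi_thresh)

In a full pipeline, features that pass this test would feed the point-line pose estimation, and line segments could be screened analogously by sampling points along each segment; those details follow the paper, not this sketch.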