Robot Observation Pose Optimization for Active Object SLAM with Ellipsoid Model and Camera Field of View


Abstract

As SLAM technology has evolved from the geometric level to the object level, Active SLAM (ASLAM) has also adopted a new goal: improving the robot's ability to observe objects. However, current ASLAM methods mostly focus on low-dimensional environmental features such as points, lines, and planes, while ignoring the impact of the motion process on SLAM. This paper proposes a new observation pose optimization method based on an ellipsoid model and the camera field of view to enhance the robot's ability to observe objects. We integrate view planning and motion planning into a unified observation pose optimization module and construct an optimized factor graph based on probabilistic inference. Our method ensures that the observation poses are globally optimal and can directly generate robot control variables. Leveraging the camera's field of view and the ellipsoid model abstracted from the object's SDF model, we introduce three key factors into the factor graph: an object completeness observation factor, a self-observation prevention factor, and a camera motion smoothness factor. Finally, we develop an object ASLAM system with our observation pose optimization method and evaluate it in multiple simulation environments. Experimental results demonstrate that our method significantly improves object modeling accuracy, mapping efficiency, and localization precision. The code for this work is open-sourced at https://github.com/TINY-KE/OPO_ASLAM.git.
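To give a flavor of how an observation factor and a motion smoothness factor can trade off inside a single pose optimization, the sketch below solves a deliberately simplified 1-DoF version of the problem: choosing a camera heading that both centers an object in the field of view and stays close to the previous heading. The quadratic cost, its weights, and the geometry are illustrative assumptions and do not reproduce the paper's actual factor graph.

```python
import math

def optimal_heading(cam_xy, obj_xy, theta_prev, w_smooth=1.0):
    """Toy 1-DoF observation pose optimization (illustrative only).

    Minimizes  (theta - phi)^2 + w_smooth * (theta - theta_prev)^2,
    where phi is the bearing that centers the object in the camera's
    field of view (an "object observation" term) and the second term
    penalizes change from the previous heading (a "motion smoothness"
    term). The quadratic cost has a closed-form minimizer.
    """
    # Bearing from the camera to the object center: the heading that
    # places the object in the middle of the field of view.
    phi = math.atan2(obj_xy[1] - cam_xy[1], obj_xy[0] - cam_xy[0])
    # Closed-form minimum of the weighted quadratic cost.
    return (phi + w_smooth * theta_prev) / (1.0 + w_smooth)

# Object at (1, 1) relative to the camera: bearing is pi/4. With equal
# weights the optimum splits the difference with the previous heading
# of 0, giving pi/8.
theta = optimal_heading((0.0, 0.0), (1.0, 1.0), theta_prev=0.0, w_smooth=1.0)
```

In the full method each factor would be a residual in a factor graph solved jointly over the whole trajectory; this closed-form scalar case only illustrates how the two competing objectives are balanced by their weights.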
