CNN-based Multi-Object Detection and Segmentation in 3D LiDAR Data for Dynamic Industrial Environments
Abstract
Autonomous navigation in dynamic environments presents a significant challenge for mobile robotic systems. In this paper, we propose a novel approach that utilizes Convolutional Neural Networks (CNNs) for multi-object detection in 3D space and for 2D segmentation on Bird’s Eye View (BEV) maps derived from 3D LiDAR data. Our method enables mobile robots to localize movable objects and estimate their occupancy, which is crucial for safe and efficient navigation. To address the scarcity of labeled real-world datasets, we generate a synthetic dataset in a simulation environment to train and evaluate our model; additionally, we employ a subset of the NVIDIA r2b dataset for evaluation on real-world data. Furthermore, we integrate our CNN-based detection and segmentation model into a ROS2-based framework that facilitates communication between mobile robots and a centralized node for data aggregation and map creation. Our experimental results demonstrate promising performance and showcase the potential applicability of our approach in future assembly systems. While further validation with real-world data is warranted, our work contributes to the advancement of perception systems by proposing a solution for multi-source, multi-object tracking and mapping.
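Although the abstract only summarizes the pipeline, its core input representation is a BEV map derived from a 3D LiDAR point cloud. The snippet below is a minimal sketch of one common way to build such a map; the function name pointcloud_to_bev, the grid ranges, the resolution, and the channel layout (height, density, occupancy) are assumptions chosen for illustration and are not taken from the paper.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                      z_range=(-2.0, 1.0), resolution=0.1):
    """Project an (N, 3) LiDAR point cloud into a BEV grid.

    Returns an (H, W, 3) array with height, density, and occupancy
    channels. All ranges and the cell size are illustrative
    assumptions, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[mask], y[mask], z[mask]

    # Discretize metric coordinates into grid cell indices.
    cols = ((x - x_range[0]) / resolution).astype(np.int32)
    rows = ((y - y_range[0]) / resolution).astype(np.int32)
    height = int((y_range[1] - y_range[0]) / resolution)
    width = int((x_range[1] - x_range[0]) / resolution)

    bev = np.zeros((height, width, 3), dtype=np.float32)

    # Channel 0: maximum point height per cell, normalized to [0, 1].
    np.maximum.at(bev[:, :, 0], (rows, cols),
                  (z - z_range[0]) / (z_range[1] - z_range[0]))
    # Channel 1: point density per cell, log-scaled and clipped.
    np.add.at(bev[:, :, 1], (rows, cols), 1.0)
    bev[:, :, 1] = np.clip(np.log1p(bev[:, :, 1]) / np.log(64.0), 0.0, 1.0)
    # Channel 2: binary occupancy.
    bev[:, :, 2] = (bev[:, :, 1] > 0).astype(np.float32)

    return bev
```

A multi-channel image of this form can then be fed to a standard 2D CNN detector or segmentation head, which is the general strategy the abstract alludes to.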