I-PAttnGAN: An Image-Assisted Point Cloud Generation Method Based on Attention Generative Adversarial Network
Abstract
The key to building a 3D point cloud map is ensuring the consistency and accuracy of the point cloud data. However, hardware limitations of LiDAR produce sparse, unevenly distributed points in edge regions, which creates challenges for 3D map construction, such as low registration accuracy and large construction errors in sparse regions. To address these problems, this paper proposes I-PAttnGAN, an image-assisted point cloud generation network that aims to improve the density and uniformity of sparse regions and to enhance the representation of distant objects in sparse edge regions. I-PAttnGAN uses a normalizing flow model to dynamically extract point cloud attention weights, integrates these weights into image features, and learns the transformation between the weighted image features and the point cloud distribution, enabling adaptive generation of point cloud density and resolution. Extensive experiments are conducted on the ShapeNet and nuScenes datasets. The results show that I-PAttnGAN significantly outperforms existing methods at generating high-quality, dense point clouds in low-density regions: the Chamfer distance is roughly halved, the Earth Mover's distance is improved by a factor of about 1.3, and the F1 score is increased by about 1.5 times. In addition, ablation experiments verify the effectiveness of the newly added modules, showing that they play a key role in the generation process. Overall, the proposed model shows clear advantages in accuracy and efficiency, especially in generating complete spatial point clouds.
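The Chamfer distance reported in the evaluation measures how closely a generated point set matches a reference set by averaging nearest-neighbor distances in both directions. A minimal numpy sketch of the symmetric (squared) variant follows; the function name `chamfer_distance` and the exact squared-distance convention are our assumptions for illustration, not taken from the paper:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in p, find its nearest neighbor in q (and vice versa),
    then average the squared distances in both directions and sum them.
    Note: conventions vary (squared vs. Euclidean, sum vs. mean); this is
    one common choice, not necessarily the paper's exact definition.
    """
    # Pairwise squared distances via broadcasting: shape (N, M)
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # p -> q direction plus q -> p direction
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Example: two single-point clouds one unit apart
p = np.array([[0.0, 0.0, 0.0]])
q = np.array([[1.0, 0.0, 0.0]])
print(chamfer_distance(p, q))  # 2.0 (1.0 in each direction)
```

A lower Chamfer distance indicates a closer match, which is why the abstract reports a reduction as an improvement; identical point sets yield a distance of zero.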