ST-CDPRN: 3D Point Cloud Reconstruction Method With Self-Training Conditional Diffusion

Abstract

Image-based 3D reconstruction is currently a research focus, serving as a bridge between the 2D and 3D worlds. However, image-based reconstruction methods rely heavily on large data volumes, making it difficult to achieve high-quality reconstruction when data are scarce. Recently, self-training has attracted attention for its efficiency on small datasets. This article investigates how to develop a semi-supervised 3D reconstruction method. Specifically, our proposed method addresses the insufficient use of image information and the lack of detail in reconstructed point clouds during the 3D reconstruction process. Building on a conditional diffusion 3D point cloud reconstruction network model, we propose a 3D point cloud reconstruction method with self-training conditional diffusion (ST-CDPRN) to address the scarcity of labeled 3D point cloud samples and the model's heavy dependence on large amounts of labeled data in practical applications. The proposed framework ensures robust reconstruction performance when labeled data are limited. Comparative and ablation experiments on the ShapeNet and CO3D datasets show that the self-training conditional diffusion method improves overall performance by 11.35% compared to conventional methods.
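The self-training idea summarized above can be sketched generically: train on the labeled set, pseudo-label unlabeled samples the model is confident about, and retrain on the enlarged set. The following is a minimal illustrative sketch of that loop only; all function names are hypothetical, and a trivial 1-D threshold classifier stands in for the paper's conditional diffusion reconstruction network, whose details the abstract does not specify.

```python
# Hypothetical sketch of a generic self-training loop (the strategy ST-CDPRN
# applies to 3D reconstruction). A toy 1-D threshold classifier replaces the
# actual conditional diffusion model, which is an assumption for illustration.

def fit_threshold(points, labels):
    """'Train' a 1-D classifier: predict 1 if x >= threshold, else 0."""
    pos = [x for x, y in zip(points, labels) if y == 1]
    neg = [x for x, y in zip(points, labels) if y == 0]
    return (min(pos) + max(neg)) / 2.0

def predict_with_confidence(threshold, x):
    """Return (label, confidence); confidence grows with distance from the boundary."""
    label = 1 if x >= threshold else 0
    confidence = min(1.0, abs(x - threshold))  # crude confidence proxy
    return label, confidence

def self_train(labeled, unlabeled, conf_min=0.5, rounds=3):
    """Self-training: iteratively pseudo-label confident unlabeled points."""
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        threshold = fit_threshold(points, labels)
        deferred = []
        for x in pool:
            y, conf = predict_with_confidence(threshold, x)
            if conf >= conf_min:
                points.append(x)   # accept confident pseudo-label
                labels.append(y)
            else:
                deferred.append(x)  # revisit low-confidence points later
        pool = deferred
    return fit_threshold(points, labels)

# Two labeled points plus four unlabeled ones; the confident pseudo-labels
# (0.5 and 3.5) are absorbed, while the ambiguous ones (1.9, 2.1) are deferred.
final_threshold = self_train([(0.0, 0), (4.0, 1)], [0.5, 3.5, 1.9, 2.1])
```

The confidence gate is the key design choice: accepting only high-confidence pseudo-labels is what keeps self-training robust when the labeled set is small, which mirrors the limited-label setting the abstract targets.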