A multimodal deep learning method of weld defect detection based on 3D point cloud

Abstract

Weld quality inspection is essential in modern manufacturing, requiring the automatic identification, localization, and measurement of defects in industrial environments. Although 2D images and 3D point clouds each offer unique advantages, most current inspection methods rely on only one of these data types. This study proposes a novel system that integrates 3D point cloud data with 2D images using PointNet++ and YOLOv5. The 3D point cloud data are mapped into corresponding 2D feature maps, and the two models are trained separately. Training results show that PointNet++ achieved an accuracy of 98.9% and an IoU of 79.3%, while YOLOv5 achieved a precision of 98.9%, a recall of 97.6%, a mAP@0.5 of 98.8%, and a mAP@0.5:0.95 of 72.2%. To combine the results of both models, the 2D bounding boxes from YOLOv5 are mapped back into 3D space and integrated with the PointNet++ predictions to form 3D bounding boxes. Reassigning the class weights of defect points within each 3D bounding box resolves cases where PointNet++ would otherwise split the points of a single defect across multiple classes. On a test set of 100 samples, the proposed method improved mIoU from 60.2% to 63.0% compared with PointNet++ alone, enabling effective identification and measurement of spatter, porosity, and burn-through.
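The sketch below illustrates the fusion step the abstract describes: lifting YOLOv5 2D boxes back into 3D and relabeling the enclosed defect points so one physical defect carries a single class. It is a minimal illustration, not the authors' implementation; it assumes an orthographic top-down projection between the point cloud and the 2D feature maps, a background class of 0, and hypothetical function names (`orthographic_project`, `fuse_detections`).

```python
import numpy as np

def orthographic_project(points, x_range, y_range, img_w, img_h):
    """Map 3D points to 2D pixel coordinates by dropping z.

    Assumes the 2D feature maps come from an orthographic top-down
    projection of the weld surface (an assumption; the abstract does
    not spell out the exact mapping).
    """
    u = (points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * img_w
    v = (points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * img_h
    return np.stack([u, v], axis=1)

def fuse_detections(points, point_labels, boxes_2d, box_classes,
                    x_range, y_range, img_w, img_h):
    """Lift YOLOv5 2D boxes into 3D and relabel the enclosed points.

    points       : (N, 3) point cloud
    point_labels : (N,) per-point classes from PointNet++ (0 = background)
    boxes_2d     : (M, 4) boxes as (u_min, v_min, u_max, v_max) pixels
    box_classes  : (M,) defect class of each 2D box
    Returns the updated labels and a list of 3D axis-aligned boxes.
    """
    uv = orthographic_project(points, x_range, y_range, img_w, img_h)
    labels = point_labels.copy()
    boxes_3d = []
    for (u0, v0, u1, v1), cls in zip(boxes_2d, box_classes):
        # Points whose projection falls inside the 2D detection box.
        inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                  (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
        # Keep only points PointNet++ flagged as defects, then extend
        # the 2D box with the z extent of those points to get a 3D box.
        defect = inside & (labels != 0)
        if not defect.any():
            continue
        pts = points[defect]
        boxes_3d.append((pts.min(axis=0), pts.max(axis=0)))
        # Reassign all defect points in the box to the box's class, so a
        # single defect is no longer split across multiple classes.
        labels[defect] = cls
    return labels, boxes_3d
```

Under these assumptions, the relabeling inside each lifted box is what recovers the reported mIoU gain: points that PointNet++ scattered across classes are unified under the detector's class for that defect region.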
