Real-Virtual Data Fusion and YOLO11-based Segmentation for Automated Deposition Region Recognition in L-DED Repair Process
Abstract
Reliable recognition of damaged regions is essential for automated L-DED repair, yet conventional pre-process planning often relies on costly 3D scanning and CAD/CAM alignment. We propose an on-machine, single-RGB-camera framework using YOLO11 instance segmentation with real–synthetic hybrid training data generated via physics-based rendering to address labelled data scarcity. Using a 28-image unseen-object (LOTO) test set from a blade-inspired specimen excluded from training, Hybrid-YOLO11m achieved 94.2% IoU and 97.0% F1, outperforming real-only, synthetic-only, and optimized OpenCV baselines under varying illumination and reflections. Furthermore, a bead-width-based offset compensation strategy reduced the effective risk of under-deposition by approximately 96.8%. The proposed approach enables low-cost, streamlined pre-process planning without 3D scanners for practical DED repair.
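To make the pipeline concrete, the sketch below illustrates the kind of steps the abstract describes: YOLO11 instance-segmentation inference on a single RGB image, pixel-wise IoU/F1 scoring against a ground-truth mask, and a bead-width-based morphological offset of the predicted region. This is a minimal illustration, not the authors' implementation; the weight file, image and mask paths, pixel resolution, and bead-width value are all assumptions.

```python
# Minimal sketch (not the authors' code): YOLO11 segmentation inference,
# mask IoU/F1 scoring, and a bead-width offset on the predicted region.
# File names, the bead width, and the mm-per-pixel scale are illustrative.
import numpy as np
import cv2
from ultralytics import YOLO  # the Ultralytics package ships YOLO11 models

model = YOLO("yolo11m-seg.pt")          # assumed fine-tuned on the hybrid real/synthetic set
result = model("damaged_blade.jpg")[0]  # single-image inference

# Merge all predicted instance masks into one binary "deposition region" mask.
h, w = result.orig_shape
pred_mask = np.zeros((h, w), dtype=np.uint8)
if result.masks is not None:
    for m in result.masks.data.cpu().numpy():  # (n, H', W') float masks
        m = cv2.resize(m, (w, h), interpolation=cv2.INTER_NEAREST)
        pred_mask |= (m > 0.5).astype(np.uint8)

gt_mask = (cv2.imread("gt_mask.png", cv2.IMREAD_GRAYSCALE) > 127).astype(np.uint8)

# Pixel-wise IoU and F1 between predicted and ground-truth masks.
inter = np.logical_and(pred_mask, gt_mask).sum()
union = np.logical_or(pred_mask, gt_mask).sum()
iou = inter / max(union, 1)
precision = inter / max(pred_mask.sum(), 1)
recall = inter / max(gt_mask.sum(), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)

# Bead-width-based offset: dilate the predicted region by roughly half a bead
# width (assumed ~2 mm at an assumed 0.1 mm/px resolution) so the toolpath
# boundary errs toward over- rather than under-deposition.
bead_width_px = 20
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (bead_width_px, bead_width_px))
offset_mask = cv2.dilate(pred_mask, kernel)

print(f"IoU={iou:.3f}  F1={f1:.3f}")
```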