Human in the Loop Visual Inspection for Safety Critical Systems

Abstract

Visual inspection is a crucial quality assurance process across many manufacturing industries. While many companies now employ AI-based systems for this task, these systems face a significant challenge, particularly in safety-critical domains: their outputs are often complex and difficult to comprehend, which undermines reliability and trust. To address this challenge, we propose a human-in-the-loop framework that enables the safe and efficient implementation of machine learning in visual inspection tasks, even when starting from scratch. Our framework leverages three complementary safety mechanisms (uncertainty detection, explainability, and model diversity) to enhance both accuracy and system safety while minimizing manual effort. Using steel surface inspection as an example, we demonstrate how a self-accelerating data collection process can arise, in which model performance improves while manual effort progressively decreases. Building on this, we construct a system whose combined safety mechanisms identify every wrong prediction automatically. We provide concrete recommendations and an open-source code base to facilitate reproducibility and adaptation to diverse industrial contexts.
