QUINSIM-Vision: Calibrated Uncertainty Quantification for Safety-Critical Computer Vision
Abstract
Deep learning models achieve high accuracy in computer vision tasks but produce unreliable confidence estimates on out-of-distribution (OOD) inputs, limiting deployment in safety-critical systems. This paper presents QUINSIM-Vision, a framework integrating per-pixel uncertainty quantification, uncertainty-guided saliency mapping, and OOD detection for calibrated perception. The method extends Monte Carlo dropout and evidential deep learning to spatial feature maps, producing confidence estimates for detected objects and segmented regions. Validation across autonomous driving (KITTI, nuScenes), industrial safety monitoring, and defense threat detection demonstrates a 20-30% improvement in OOD detection AUC and maintains 93% mean average precision under distribution shift, versus 78% for baseline models. The framework is model-agnostic across convolutional and transformer architectures and runs in real time (30 FPS) on edge hardware. Results show enhanced trustworthiness ratings from domain specialists and an 82% reduction in false engagement rates for defense applications.
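The core mechanism the abstract describes, Monte Carlo dropout extended to spatial feature maps, can be sketched as follows. This is a minimal illustrative example, not the QUINSIM-Vision implementation: the tiny network, layer sizes, dropout rate, and function names are all assumptions. The idea is to keep spatial dropout active at inference, run several stochastic forward passes, and take the per-pixel variance of the class probabilities as an uncertainty map.

```python
# Hedged sketch: per-pixel uncertainty via Monte Carlo dropout on a
# segmentation-style CNN. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Toy fully-convolutional classifier (stand-in for a real backbone)."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.2),  # spatial dropout, kept active at test time
            nn.Conv2d(16, n_classes, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run stochastic forward passes with dropout enabled; return the
    per-pixel predictive mean and variance of class probabilities."""
    model.train()  # keeps dropout stochastic (a real system would handle BatchNorm separately)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )  # shape: (n_samples, B, C, H, W)
    return probs.mean(dim=0), probs.var(dim=0)


if __name__ == "__main__":
    model = TinySegNet()
    image = torch.randn(1, 3, 32, 32)
    mean_probs, uncertainty_map = mc_dropout_predict(model, image)
    # Both tensors are per-pixel: (batch, classes, height, width)
    print(mean_probs.shape, uncertainty_map.shape)
```

High values in `uncertainty_map` flag pixels where the stochastic passes disagree, which is the signal a calibrated perception stack can use for OOD flagging or to gate downstream decisions.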