Uncertainty Quantification Based on Block Masking of Test Images

Abstract

This paper investigates the extension of the block-scaling approach—originally developed for estimating classifier accuracy—to uncertainty quantification in image classification tasks, where predictions accompanied by high confidence scores are generally more reliable. The proposed method involves applying a sliding mask filled with noise pixels to occlude portions of the input image and repeatedly classifying the masked images. By aggregating predictions across input variants and selecting the class with the highest vote count, a confidence score is derived for each image. To evaluate its effectiveness, we conducted experiments comparing the proposed method to MC dropout and a vanilla baseline using image datasets of varying sizes and levels of distortion. The results indicate that while the proposed approach does not consistently outperform alternatives under standard (in-distribution) conditions, it demonstrates clear advantages when applied to distorted and out-of-distribution samples. Moreover, combining the proposed method with MC dropout yields further improvements in both predictive performance and calibration quality in these more challenging scenarios.
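To illustrate the voting scheme described above, the following is a minimal sketch of a block-masking confidence estimate, assuming a PyTorch classifier `model` that maps a `(1, C, H, W)` tensor with pixel values in [0, 1] to class logits. The block size, stride, and uniform-noise fill are illustrative choices, not values taken from the paper.

```python
# Hypothetical sketch: slide a noise-filled block over the image, classify
# each masked variant, and take the majority vote as the prediction with the
# vote fraction as the confidence score.
import torch


def block_masking_confidence(model, image, block=8, stride=8, num_classes=10):
    model.eval()
    _, channels, height, width = image.shape
    votes = torch.zeros(num_classes)
    with torch.no_grad():
        for top in range(0, height - block + 1, stride):
            for left in range(0, width - block + 1, stride):
                masked = image.clone()
                # Occlude one block with random noise pixels (assumes inputs in [0, 1]).
                masked[:, :, top:top + block, left:left + block] = torch.rand(
                    channels, block, block
                )
                predicted = model(masked).argmax(dim=1).item()
                votes[predicted] += 1
    majority = int(votes.argmax())
    confidence = (votes[majority] / votes.sum()).item()
    return majority, confidence
```

In this sketch, the confidence is simply the fraction of masked variants that agree with the majority class; combining it with MC dropout, as the abstract suggests, could be done by repeating the vote over several stochastic forward passes.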
