RDMS: Reverse Distillation with Multiple Students of Different Scales for Anomaly Detection

Abstract

Unsupervised anomaly detection is a critical task in computer vision, often approached as a one-class classification problem. Knowledge distillation has shown promising results in this area, particularly with the emergence of reverse distillation networks using encoder-decoder architectures, which further enhance anomaly detection accuracy. In this study, we propose a novel reverse knowledge distillation network with multiple-scale student decoders, called RDMS. RDMS integrates a pretrained teacher encoding module, a trainable multi-level feature fusion connection module (MFFCM), and a student decoding module composed of three mutually independent decoders. Each student decoder is tasked with distilling a specific feature from the teacher encoder, effectively mitigating the overfitting issue that occurs when the student and teacher structures are similar or identical. Our model achieves an average of 99.3% image-level AUROC and 98.34% pixel-level AUROC on the publicly available dataset MVTec-AD, and also achieves state-of-the-art performance on the more challenging BTAD dataset. The proposed RDMS model demonstrates high accuracy for anomaly detection and localization, highlighting the potential of multi-student reverse distillation for improving unsupervised anomaly detection capabilities. Source code is available at https://github.com/zihengchen777/RDMS
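As a hedged illustration of the scoring step common to reverse-distillation detectors (this is a minimal sketch, not the authors' implementation; the function name and the use of NumPy arrays are assumptions), anomaly maps are typically obtained as one minus the per-pixel cosine similarity between teacher and student feature maps, averaged across scales:

```python
import numpy as np

def cosine_anomaly_map(teacher_feats, student_feats, eps=1e-8):
    """Per-pixel anomaly map from paired feature maps of shape (C, H, W).

    For each scale, the anomaly score at a spatial location is
    1 - cosine similarity between the teacher's and student's channel
    vectors; scores are then averaged over scales. For simplicity all
    maps are assumed to share one spatial size here, whereas a real
    multi-scale model would upsample each map to the input resolution.
    """
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        num = (t * s).sum(axis=0)
        den = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + eps
        maps.append(1.0 - num / den)  # 0 where features agree, up to 2 where opposed
    return np.mean(maps, axis=0)

# A student that reproduces the teacher's features exactly (the expected
# behaviour on normal data) yields a near-zero anomaly map.
rng = np.random.default_rng(0)
feats = [rng.random((8, 4, 4)) for _ in range(3)]
amap = cosine_anomaly_map(feats, feats)
```

On anomalous regions the student, trained only on normal data, fails to reconstruct the teacher's features, so the cosine term drops and the map value rises; thresholding this map gives pixel-level localization.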
