Self-calibrated mutual learning for fine-grained image recognition

Abstract

Knowledge distillation has proven effective for fine-grained image classification. Recent knowledge distillation methods typically improve classification accuracy by increasing the number of models or exploiting intermediate representations, but they often require extensive training time or architectural modifications and give insufficient attention to the geometry of output distributions in fine-grained regimes. To address these issues, this paper proposes a method for improving fine-grained classification accuracy that integrates mutual learning and self-distillation. The method reshapes the geometry of output distributions by fusing cross-model consistency with self-calibration: cross-model consistency improves generalization by sharing peer knowledge, while self-calibration strengthens intra-class similarity by suppressing overconfidence. The proposed method is validated on three benchmark datasets across multiple backbone architectures and compared with existing methods. Experimental results demonstrate that it improves accuracy over existing methods and exhibits a complementary effect beyond a linear combination of mutual learning and self-distillation.
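The abstract does not give the loss formulation, so the following is only a minimal sketch of how a combined objective of this kind could look, assuming a deep-mutual-learning-style KL term toward a peer model for cross-model consistency and a label-smoothing-based KL term for overconfidence suppression. The function names and hyperparameters (`T`, `epsilon`, `alpha`, `beta`) are illustrative, not taken from the paper, and the paper's actual self-calibration term may differ.

```python
import torch
import torch.nn.functional as F

def peer_consistency_loss(logits_a, logits_b, T=2.0):
    """Cross-model consistency: KL divergence pulling model A's softened
    prediction toward peer B's, as in deep mutual learning. The peer is
    detached so only model A is updated by this term."""
    p_b = F.softmax(logits_b.detach() / T, dim=1)
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    return F.kl_div(log_p_a, p_b, reduction="batchmean") * (T * T)

def self_calibration_loss(logits, targets, epsilon=0.1):
    """Overconfidence suppression: KL divergence toward a label-smoothed
    target distribution (one plausible reading of 'self-calibration')."""
    num_classes = logits.size(1)
    smooth = torch.full_like(logits, epsilon / (num_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - epsilon)
    log_p = F.log_softmax(logits, dim=1)
    return F.kl_div(log_p, smooth, reduction="batchmean")

def total_loss(logits_a, logits_b, targets, alpha=1.0, beta=0.5):
    """Supervised cross-entropy plus the two auxiliary terms, weighted by
    the (hypothetical) coefficients alpha and beta."""
    ce = F.cross_entropy(logits_a, targets)
    mutual = peer_consistency_loss(logits_a, logits_b)
    calib = self_calibration_loss(logits_a, targets)
    return ce + alpha * mutual + beta * calib
```

In full mutual learning the symmetric term for `logits_b` would be added to the peer's own loss, so that both networks are updated at every step.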
