Lightweight Self-Supervised Representation Learning with Knowledge Distillation on Compact Datasets
Abstract
Recent studies have demonstrated that self-supervised learning techniques achieve strong performance on visual representation tasks, particularly when labeled data is limited. However, training deep models on limited data remains challenging due to overfitting and poor generalization. This paper proposes a novel approach that leverages knowledge distillation to enhance self-supervised representation learning in resource-constrained settings. Our method trains an EfficientNet-B0 student model with a MobileNetV2 teacher on the STL-10 dataset. We incorporate gradual alpha scheduling and early stopping to keep training stable and to preserve the distilled knowledge. Across different sample sizes, our approach consistently outperforms the student model trained alone. Our method, Self-Supervised with Knowledge Distillation (SS-KD), achieves 72.71% accuracy on 2500 samples, outperforming several state-of-the-art self-supervised and distillation approaches. When scaled to 5000 samples, our model reaches 83.00% accuracy, demonstrating strong scalability with limited data. Code available at: Self-Supervised-using-Knowledge-Distillation-
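To make the distillation setup concrete, the sketch below shows one common way to combine a teacher's soft targets with hard labels under a gradually increasing alpha weight. The linear alpha schedule, the temperature value, and the helper names (alpha_at, distillation_loss) are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of knowledge distillation with gradual alpha scheduling.
# NOTE: the linear alpha schedule, temperature T, and hyperparameters below
# are illustrative assumptions, not taken from the paper's code release.
import torch
import torch.nn.functional as F
from torchvision import models

teacher = models.mobilenet_v2(num_classes=10).eval()  # assumed pre-trained teacher
student = models.efficientnet_b0(num_classes=10)      # student trained from scratch

def alpha_at(epoch, total_epochs, alpha_max=0.7):
    # Gradually shift weight from the hard-label loss toward the distillation loss.
    return alpha_max * min(1.0, epoch / (0.5 * total_epochs))

def distillation_loss(student_logits, teacher_logits, labels, alpha, T=4.0):
    # Soft-target KL term (scaled by T^2, as in standard distillation)
    # blended with the hard-label cross-entropy term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training loop (optimizer and dataloader omitted):
# alpha = alpha_at(epoch, total_epochs)
# with torch.no_grad():
#     t_logits = teacher(images)
# loss = distillation_loss(student(images), t_logits, labels, alpha)
```

Early stopping, as mentioned in the abstract, would typically monitor validation accuracy of the student and halt training once it plateaus; that logic is omitted here for brevity.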