Self-MCKD: Enhancing the Effectiveness and Efficiency of Knowledge Transfer in Malware Classification



Abstract

As malware continues to evolve, AI-based malware classification methods have shown significant promise in improving classification performance. However, these methods come with a substantial increase in computational complexity and parameter count, which raises the computational cost of training. Their maintenance cost also increases, since frequent retraining and transfer learning are required to keep pace with evolving malware variants. In this paper, we propose an efficient knowledge distillation technique for AI-based malware classification, called Self-MCKD. Self-MCKD transfers output logits separated into a target-class component and a non-target-class component, and assigns each component a weighted importance, enabling efficient knowledge transfer. In addition, Self-MCKD uses small, shallow malware classifiers as both the teacher and the student model, removing the need for a large, deep teacher. Experimental results on various malware datasets show that Self-MCKD outperforms traditional knowledge distillation techniques in both the effectiveness and the efficiency of malware classification.
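The abstract does not give the exact loss, but the idea of splitting the output logits into a target-class part and a non-target-class part and weighting each term separately can be illustrated with a decoupled logit distillation loss in the style of DKD. The sketch below is a minimal PyTorch rendering of that idea, not the paper's implementation; the function name and the `alpha`, `beta`, and `temperature` values are all hypothetical.

```python
import torch
import torch.nn.functional as F


def decoupled_kd_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      target: torch.Tensor,
                      alpha: float = 1.0,
                      beta: float = 2.0,
                      temperature: float = 4.0) -> torch.Tensor:
    """KD loss with the output logits split into a target-class term and a
    non-target-class term, each weighted separately via alpha / beta.

    alpha, beta, and temperature are illustrative values, not the paper's.
    """
    num_classes = student_logits.size(1)
    gt_mask = F.one_hot(target, num_classes).bool()  # True at each sample's true class

    # Softened class probabilities from both models.
    p_s = F.softmax(student_logits / temperature, dim=1)
    p_t = F.softmax(teacher_logits / temperature, dim=1)

    # Target-class term: compare the binary split [p(target), p(non-target)].
    bin_s = torch.stack([(p_s * gt_mask).sum(1), (p_s * ~gt_mask).sum(1)], dim=1)
    bin_t = torch.stack([(p_t * gt_mask).sum(1), (p_t * ~gt_mask).sum(1)], dim=1)
    target_term = F.kl_div(bin_s.log(), bin_t,
                           reduction="batchmean") * temperature ** 2

    # Non-target term: distribution over the remaining classes only.
    # Filling the true-class logit with a large negative value removes it from
    # the softmax, keeping only the structure among non-target classes.
    s_rest = student_logits.masked_fill(gt_mask, -1e9)
    t_rest = teacher_logits.masked_fill(gt_mask, -1e9)
    non_target_term = F.kl_div(F.log_softmax(s_rest / temperature, dim=1),
                               F.softmax(t_rest / temperature, dim=1),
                               reduction="batchmean") * temperature ** 2

    return alpha * target_term + beta * non_target_term


# Example with random tensors standing in for the logits of a small, shallow
# classifier: in self-distillation, the "teacher" can be another copy (or an
# earlier snapshot) of the same compact model rather than a large, deep one.
if __name__ == "__main__":
    student = torch.randn(8, 10)           # 8 samples, 10 malware families
    teacher = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(decoupled_kd_loss(student, teacher, labels))
```

Weighting the two terms separately is what allows the non-target ("dark knowledge") signal to be emphasized independently of the target-class confidence, which is the efficiency lever the abstract attributes to the logit separation.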
