secRNNlong: Malicious Code Classification Using RNNs with LSTM and Transformers for Improved Cybersecurity

Abstract

The rising complexity of malware, including obfuscation and polymorphic behaviour, together with the exponential growth of Android applications, has rendered conventional detection techniques increasingly ineffective. This work presents a robust, adaptive framework and a comparative study of two deep learning paradigms for malware detection: Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units and Transformer-based architectures. We constructed and trained both models on the EMBER-2018 dataset, using opcode sequences and bytecode representations extracted from Android executables. The RNN-LSTM model captures temporal dependencies and sequential patterns, while the Transformer model leverages self-attention to learn long-range relationships and global context. To address cases where these baselines generalize poorly or lose precision, we also introduce secRNNlong, a security-hardened LSTM architecture for robust static malware detection. Our experiments show that secRNNlong balances interpretability and performance, outperforming the baseline RNNs on the major detection metrics. Results further indicate that while RNN-LSTM models are lightweight and efficient, Transformer models achieve higher detection performance and generalize better to unseen malware variants.
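To make the comparison concrete, the following is a minimal PyTorch sketch of the two paradigms applied to tokenized opcode sequences: an LSTM classifier that reads the sequence step by step, and a Transformer-encoder classifier that attends globally across positions. The vocabulary size, layer widths, and all other hyperparameters here are illustrative assumptions, not the authors' published configuration or the secRNNlong architecture itself.

```python
# Illustrative sketch only: minimal baselines for opcode-sequence malware
# classification in the spirit of the two paradigms compared in the abstract.
# All hyperparameters below are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

VOCAB_SIZE = 256   # assumed opcode vocabulary (e.g., byte-level tokens)
EMBED_DIM = 128
NUM_CLASSES = 2    # benign vs. malicious

class LSTMClassifier(nn.Module):
    """RNN-LSTM baseline: embeds the opcode sequence and classifies from
    the final hidden state, capturing sequential, temporal dependencies."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, NUM_CLASSES)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return self.head(h_n[-1])              # logits: (batch, NUM_CLASSES)

class TransformerClassifier(nn.Module):
    """Transformer baseline: self-attention lets every position attend to
    every other, modeling long-range relationships and global context."""
    def __init__(self, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.pos = nn.Embedding(max_len, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        x = self.encoder(x).mean(dim=1)        # mean-pool over the sequence
        return self.head(x)

if __name__ == "__main__":
    batch = torch.randint(0, VOCAB_SIZE, (8, 512))  # dummy opcode sequences
    print(LSTMClassifier()(batch).shape)            # torch.Size([8, 2])
    print(TransformerClassifier()(batch).shape)     # torch.Size([8, 2])
```

The design difference the abstract highlights is visible here: the LSTM compresses the whole sequence into its last hidden state, which keeps it lightweight, while the Transformer's attention layers see all positions at once, which is what gives it the edge on long-range patterns at higher compute cost.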
