DeepStego: Privacy-Preserving Natural Language Steganography Using Large Language Models and Advanced Neural Architectures

Abstract

Modern linguistic steganography faces the fundamental challenge of balancing embedding capacity with detection resistance, particularly against advanced AI-based steganalysis. This paper presents DeepStego, a novel steganographic system leveraging GPT-4-omni's language modeling capabilities for secure information hiding in text. Our approach combines dynamic synonym generation with semantic-aware embedding to achieve superior detection resistance while maintaining text naturalness. Through comprehensive experimentation with 8,662 samples, DeepStego demonstrates significantly lower detection rates (0.635-0.655) compared to existing methods (0.838-0.911) across multiple state-of-the-art steganalysis techniques. DeepStego supports embedding capacities up to 4 bits per word while maintaining strong detection resistance and semantic coherence. The system shows superior scalability with a factor of 1.29, compared to 1.66-1.73 for existing methods. Our evaluation demonstrates 100% message recovery accuracy and significant improvements in text quality preservation, with readability scores of 25.46 versus 22.34-24.56 for competing approaches. These results establish DeepStego as a significant advancement in practical steganographic applications, particularly suitable for scenarios requiring secure covert communication with high embedding capacity.
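The core embedding idea in the abstract, hiding message bits in the choice among candidate synonyms, can be illustrated with a minimal sketch. Note the assumptions: DeepStego generates candidates dynamically with GPT-4-omni and embeds up to 4 bits per word, whereas this toy uses a fixed hand-written synonym table and 2 bits per eligible word; the `SYNONYMS` table and function names are illustrative, not the paper's implementation.

```python
# Toy synonym-slot steganography sketch (NOT DeepStego's actual method):
# each eligible cover word has 4 candidate synonyms, so the index of the
# chosen synonym encodes 2 message bits.

SYNONYMS = {
    "big":  ["big", "large", "huge", "vast"],        # index 0..3 -> 2 bits
    "fast": ["fast", "quick", "rapid", "swift"],
    "said": ["said", "stated", "noted", "remarked"],
}

def embed(cover_words, bits):
    """Replace each cover word found in the table with the synonym whose
    index encodes the next 2 message bits; return stego text and any
    bits that did not fit."""
    out, i = [], 0
    for w in cover_words:
        opts = SYNONYMS.get(w)
        if opts and i + 2 <= len(bits):
            out.append(opts[int(bits[i:i + 2], 2)])
            i += 2
        else:
            out.append(w)
    return out, bits[i:]

def extract(stego_words):
    """Recover the bitstring from the index of each synonym that appears
    in the shared table."""
    bits = []
    for w in stego_words:
        for opts in SYNONYMS.values():
            if w in opts:
                bits.append(format(opts.index(w), "02b"))
                break
    return "".join(bits)

stego, leftover = embed(["the", "big", "dog", "said", "fast"], "011011")
# extract(stego) recovers "011011"
```

A real system replaces the static table with context-dependent candidates from a language model, which is what gives the paper's approach its semantic coherence and detection resistance; the decoder only needs the same model and candidate-ranking procedure to recover the indices.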