Reinforcement Learning for Multi-Metric QoS Optimization in 5G/6G Networks with Shannon Capacity Integration
Abstract
This study investigates the use of reinforcement learning (RL) techniques to optimize Quality of Service (QoS) parameters in 5G and 6G wireless networks. A multi-phase experimental framework was employed to evaluate key QoS metrics, including latency, jitter, packet loss, throughput, signal-to-noise ratio (SNR), and bit error rate (BER), across both network generations. In the first phase, an RL agent was trained in a 5G environment, achieving notable reductions in latency and jitter alongside improved throughput. Comparative analysis in the second phase demonstrated that 6G networks consistently outperformed 5G, yielding higher RL rewards, lower packet loss, and reduced jitter and latency. In the final phase, Shannon capacity theory was integrated into the RL model, further enhancing transmission reliability and signal quality in the 6G context. Additional testing with video streaming scenarios confirmed 6G's superior capability to support real-time, high-reliability applications. Overall, the findings indicate that 6G networks, when optimized with RL, provide a more intelligent, robust, and efficient solution for future wireless communication systems.
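To illustrate how a Shannon capacity term can be folded into a multi-metric RL reward over the QoS metrics listed above, the following is a minimal Python sketch. The function names, weights, and 100 MHz bandwidth are assumptions chosen for illustration, not the implementation evaluated in the study; the reward uses the standard capacity formula C = B log2(1 + SNR).

```python
import numpy as np

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10.0 ** (snr_db / 10.0)
    return bandwidth_hz * np.log2(1.0 + snr_linear)

def qos_reward(latency_ms: float, jitter_ms: float, packet_loss: float,
               throughput_mbps: float, snr_db: float, ber: float,
               bandwidth_hz: float = 100e6,
               weights=(1.0, 0.5, 2.0, 0.01, 0.01, 0.5)) -> float:
    """Hypothetical scalar reward combining the six QoS metrics.

    Latency, jitter, packet loss, and BER are penalized; throughput and
    the Shannon capacity (in Mbps) are rewarded. Weights are illustrative.
    """
    w_lat, w_jit, w_loss, w_thr, w_cap, w_ber = weights
    capacity_mbps = shannon_capacity(bandwidth_hz, snr_db) / 1e6
    return (
        -w_lat * latency_ms
        - w_jit * jitter_ms
        - w_loss * packet_loss * 100.0          # packet_loss given as a fraction
        + w_thr * throughput_mbps
        + w_cap * capacity_mbps
        + w_ber * (-np.log10(max(ber, 1e-12)))  # lower BER -> larger reward
    )

# Example: a plausible 6G-like operating point (values are illustrative).
r = qos_reward(latency_ms=1.0, jitter_ms=0.2, packet_loss=0.001,
               throughput_mbps=900.0, snr_db=30.0, ber=1e-7)
print(f"reward = {r:.2f}")
```

In a sketch like this, an RL agent trained to maximize the scalar reward is pushed simultaneously toward lower latency, jitter, packet loss, and BER and toward higher throughput and channel capacity, which mirrors the multi-metric objective described in the abstract.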