Adaptive QoS Management in OneM2M Standard: Machine Learning and Deep Learning for IoT Network Optimization
Abstract
The exponential growth of Internet of Things (IoT) deployments has created unprecedented challenges in maintaining optimal Quality of Service (QoS) while managing dynamic resource allocation in IoT middleware platforms. Although OneM2M, standardized and adopted by ETSI (European Telecommunications Standards Institute), provides a robust framework for IoT interoperability, it does not inherently address overload scenarios or dynamic resource optimization. This paper presents a comprehensive machine learning and deep learning framework for intelligent traffic offloading and QoS optimization within the OneM2M standard architecture. The primary objective is to optimize QoS metrics through a dual approach: first, traffic-oriented metrics including Round-Trip Time (RTT) and Success Rate, followed by resource-oriented metrics encompassing CPU and RAM utilization, enabling proactive system adaptation to prevent performance degradation. We developed an autonomous system based on the MAPE-K (Monitor, Analyze, Plan, Execute over shared Knowledge) framework that continuously monitors and collects critical QoS data from Azure IoT devices streaming real-time sensor data through Event Hubs to OneM2M Common Services Entity (CSE) servers, establishing performance thresholds for normal, acceptable, and critical operational states. The collected QoS data undergoes comprehensive preprocessing that addresses class imbalance with SMOTE oversampling and engineers composite metric features. Eight machine learning algorithms were systematically evaluated, including Random Forest, LightGBM, XGBoost, and Support Vector Machines, followed by three deep learning approaches: Convolutional Neural Networks (CNN) for spatial pattern recognition, Long Short-Term Memory (LSTM) networks for temporal dependencies, and Variational Autoencoders (VAE) for feature compression. The optimized models are deployed through a scalable API infrastructure hosted on Hugging Face that orchestrates the entire decision-making process, serving as the central intelligence hub that receives real-time QoS data, processes it through the trained models, and provides autonomous decision recommendations for traffic management and resource allocation. The optimized Random Forest model achieved 96% accuracy with perfect precision for critical state detection, while ensemble approaches reached 98% accuracy, demonstrating an RTT reduction of 30–50%, CPU/RAM usage optimization of 20–30%, and success rates maintained above 90% across diverse traffic patterns (uniform, real-time, and burst scenarios); these gains are particularly beneficial for critical IoT domains such as e-health, where system reliability and responsiveness are paramount. This research addresses critical limitations in current OneM2M implementations by introducing autonomous overload management capabilities and establishes a robust foundation for next-generation IoT infrastructure, with promising extensions toward federated learning for distributed model training across heterogeneous IoT networks, integration with 5G/6G network slicing for enhanced QoS guarantees, and deployment in industrial IoT environments requiring ultra-low latency and high reliability.
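
As a concrete illustration of the classification stage summarized above, the following Python sketch trains a Random Forest on SMOTE-balanced QoS features (RTT, success rate, CPU and RAM utilization) and labels the three operational states. The synthetic data, feature thresholds, and hyperparameters are illustrative assumptions only, not the authors' dataset or implementation.

# Minimal sketch of the QoS-state classification stage described in the abstract.
# Feature names, thresholds, and synthetic data are illustrative assumptions,
# not the authors' actual dataset or pipeline.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic QoS samples: [RTT_ms, success_rate, cpu_util, ram_util]
X = rng.uniform([10, 0.5, 0.1, 0.1], [500, 1.0, 1.0, 1.0], size=(2000, 4))

# Label the operational state from simple illustrative thresholds:
# 0 = normal, 1 = acceptable, 2 = critical
rtt, success, cpu, ram = X[:, 0], X[:, 1], X[:, 2], X[:, 3]
y = np.where((rtt > 300) | (success < 0.7) | (cpu > 0.9) | (ram > 0.9), 2,
    np.where((rtt > 150) | (success < 0.9) | (cpu > 0.7) | (ram > 0.7), 1, 0))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Rebalance under-represented states with SMOTE before training.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_bal, y_bal)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["normal", "acceptable", "critical"]))

In a deployment resembling the one described above, such a trained classifier would sit behind the decision-making API, receiving monitored QoS samples from the MAPE-K loop and returning the predicted operational state used to trigger offloading or resource-allocation actions.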