A Hybrid DMPC–DQN Framework for Adaptive and Low-Latency Control in Distributed Software-Defined Networks
Abstract
Distributed Software-Defined Networking (SDN) architectures alleviate the scalability limitations of centralized controllers but still suffer from increased response time, coordination overhead, and instability under dynamic traffic conditions. This paper introduces a hybrid control framework that integrates Distributed Model Predictive Control (DMPC) with a Deep Q-Network (DQN) agent to enable predictive, adaptive, and low-latency decision-making in distributed SDN environments. In the proposed framework, DMPC performs short-horizon, constraint-aware optimization based on local and neighboring traffic predictions, proactively mitigating overload and signaling delays. In parallel, the DQN agent continuously learns system behavior and adjusts key DMPC parameters—such as the prediction horizon, cost weights, and coordination strength—to maintain high performance under non-stationary traffic conditions. By combining the anticipatory strengths of DMPC with the long-term adaptability of reinforcement learning, the hybrid DMPC–DQN controller significantly reduces average response time, improves tail latency, and stabilizes inter-controller coordination. Extensive evaluations under diverse traffic scenarios demonstrate reductions of up to 40% in average response time, 38% in 95th-percentile latency, and 30% in control-plane overhead compared with reactive SDN, pure DMPC, pure RL, and centralized MPC baselines. These results highlight the potential of hybrid predictive–learning architectures for next-generation distributed SDN systems.
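The abstract describes a loop in which a learning agent tunes DMPC parameters (prediction horizon, cost weights, coordination strength) based on observed performance. The sketch below illustrates only that interaction pattern under simplifying assumptions not stated in the paper: a stateless epsilon-greedy value learner stands in for the DQN, the "DMPC step" is a toy quadratic-cost rollout with an assumed traffic-decay model, and all parameter values and cost terms are hypothetical.

```python
import random

random.seed(0)  # for reproducibility of this illustrative run

# Hypothetical discrete action set for the learner:
# (prediction horizon, queue-cost weight, coordination strength)
PARAM_ACTIONS = [
    (3, 1.0, 0.1),
    (5, 1.0, 0.5),
    (8, 2.0, 0.9),
]

def dmpc_step(load, horizon, w_queue, w_coord):
    """Toy short-horizon DMPC cost: penalize predicted queue occupancy
    over the horizon plus a coordination-effort term (assumed model)."""
    predicted = [load * (0.9 ** k) for k in range(horizon)]  # assumed decay
    return w_queue * sum(predicted) + w_coord * horizon       # latency proxy

def run_episode(q, eps=0.2, alpha=0.5):
    """One tuning episode: pick a DMPC configuration (epsilon-greedy),
    observe the resulting cost, and update the value estimate."""
    if random.random() < eps:
        a = random.randrange(len(PARAM_ACTIONS))
    else:
        a = max(range(len(PARAM_ACTIONS)), key=lambda i: q[i])
    h, wq, wc = PARAM_ACTIONS[a]
    cost = dmpc_step(load=10.0, horizon=h, w_queue=wq, w_coord=wc)
    reward = -cost                    # lower latency proxy -> higher reward
    q[a] += alpha * (reward - q[a])   # stateless update standing in for DQN
    return q

q_values = [0.0] * len(PARAM_ACTIONS)
for _ in range(200):
    run_episode(q_values)
best = max(range(len(PARAM_ACTIONS)), key=lambda i: q_values[i])
```

In this toy setting the shortest-horizon, lowest-weight configuration incurs the smallest cost, so the learner converges on it; in the actual framework the DQN would instead map network state to parameter choices, so the preferred configuration shifts as traffic conditions change.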