CFMADRL: A deep reinforcement learning-based task offloading algorithm for cell-free architectures with co-optimization of delay- and energy-sensitive tasks
Abstract
With the continuous proliferation of wireless devices, growing pressure on wireless channels has degraded network quality for edge users and increased the energy consumption of mobile devices. To enhance user experience and extend device battery life, this paper proposes a multi-agent deep reinforcement learning-based task offloading algorithm named Cell-Free Multi-Agent Deep Reinforcement Learning (CFMADRL). Under a cell-free architecture, the proposed model considers two representative task types, delay-sensitive and energy-sensitive, and designs a multidimensional task classification mechanism that integrates task complexity, device status, and delay/energy pressure metrics. To handle these heterogeneous tasks effectively, a dual-agent collaborative framework is constructed in which each agent pursues a specific optimization objective, minimizing task completion delay or reducing energy consumption, through global task offloading and resource scheduling. Furthermore, CFMADRL incorporates a user-driven access point (AP) cooperative offloading mechanism and a hierarchical optimization strategy: the system optimization problem is decomposed into two subproblems, computational resource allocation and joint task offloading/power control, which are solved by convex optimization and multi-agent deep reinforcement learning, respectively. Simulation results show that the proposed algorithm significantly outperforms existing benchmark methods in reducing system delay and energy consumption, and demonstrates strong robustness and adaptability in dynamic edge computing environments.
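To give a concrete sense of the convex resource-allocation subproblem mentioned above, the following is a minimal sketch, not the paper's actual formulation. It assumes the common model in which an edge server splits its total CPU frequency F among offloaded tasks, each requiring c_i CPU cycles, to minimize the total computation delay sum_i c_i / f_i. That problem is convex in the f_i, and the KKT conditions yield a closed-form solution with f_i proportional to sqrt(c_i):

```python
import math

def allocate_cpu(cycles, F):
    """Split total edge CPU frequency F (Hz) among offloaded tasks.

    Minimizing sum_i c_i / f_i subject to sum_i f_i = F, f_i > 0 is a
    convex problem; its KKT conditions give f_i = F * sqrt(c_i) / sum_j sqrt(c_j).
    `cycles` holds the CPU cycles c_i each task requires (assumed inputs).
    """
    weights = [math.sqrt(c) for c in cycles]
    total = sum(weights)
    return [F * w / total for w in weights]

# Two hypothetical offloaded tasks on a 3 GHz edge server:
alloc = allocate_cpu([1e9, 4e9], 3e9)
# ratio sqrt(1e9) : sqrt(4e9) = 1 : 2, so alloc = [1e9, 2e9]
```

In CFMADRL this kind of closed-form (or solver-based) allocation would handle the inner resource subproblem, while the two DRL agents decide the outer offloading and power-control actions; the sketch only illustrates the inner layer under the stated delay model.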