Latency-Aware and Energy-Efficient Task Offloading in IoT and Cloud Systems with DQN Learning
Abstract
The exponential growth of the Internet of Things (IoT) has generated significant challenges related to computing capacity and energy consumption. IoT devices produce large volumes of heterogeneous data and computationally demanding tasks, often leading to high latency and excessive energy consumption. Task offloading has emerged as a viable solution; however, existing strategies often neglect the joint optimization of latency and energy consumption. This paper introduces a novel task-offloading approach based on deep Q-network (DQN) learning, designed to intelligently and dynamically balance these two metrics. The proposed framework continuously optimizes real-time task-offloading decisions by exploiting the adaptive learning capabilities of DQN, thereby significantly reducing both latency and energy consumption. To further enhance performance, the framework integrates optical networks into the IoT-fog-cloud architecture, leveraging their high-bandwidth and low-latency capabilities. This integration enables tasks to be distributed and processed more efficiently, especially in data-intensive IoT applications. In addition, we present a comparative analysis of the proposed DQN algorithm and the optimal strategy. Through extensive simulations, we demonstrate the effectiveness of the proposed DQN framework in different IoT scenarios over the BAT and DJA approaches, reducing energy consumption by 35% and 50% and latency by 30% and 40%, respectively. Our results further demonstrate the importance of selecting an appropriate strategy according to the specific requirements of the IoT application, particularly in terms of environmental stability and performance demands.
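As a rough illustration of the kind of decision logic the abstract describes, the sketch below shows a minimal DQN agent that chooses an offloading target (local, fog, or cloud) to minimize a weighted latency-energy cost. The state features, reward weights, network sizes, and hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Illustrative assumptions (not from the paper): the agent observes a
# 4-dimensional state (task size, local CPU load, fog load, link quality)
# and chooses among three offloading targets.
ACTIONS = ["local", "fog", "cloud"]
STATE_DIM, N_ACTIONS = 4, len(ACTIONS)

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for each offloading action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def reward(latency, energy, w_latency=0.5, w_energy=0.5):
    # Hypothetical weighted cost: the paper balances latency and energy;
    # the exact weights and scaling here are placeholders.
    return -(w_latency * latency + w_energy * energy)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer
gamma, epsilon, batch_size = 0.99, 0.1, 64

def select_action(state):
    # Epsilon-greedy exploration over the learned Q-values.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step():
    # One gradient step on a random minibatch of past transitions;
    # offloading is treated as a continuing task (no terminal states).
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = map(np.array, zip(*batch))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        q_target = r + gamma * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full system, each completed task would append a transition (state, action, reward, next_state) to `replay`, with the measured latency and energy supplied by the simulated IoT-fog-cloud environment, and `target_net` would be periodically synchronized with `q_net` for stability.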