Task Offloading in IoT Edge Computing Using Deep Reinforcement Learning

Abstract

As the Internet of Things (IoT) continues to grow, efficient and effective task processing at the network's edge becomes crucial. This thesis investigates deep reinforcement learning (DRL) for real-time task offloading in IoT edge computing, aiming to optimise resource utilisation and minimise latency. A novel approach is presented that combines bidirectional long short-term memory (BLSTM) networks for load prediction with the Advantage Actor-Critic (A2C) algorithm for dynamic offloading decisions. The framework anticipates the load on mobile edge computing (MEC) servers and strategically offloads tasks to balance computational demand across them. Key contributions include the use of BLSTM to capture temporal patterns in task requests for precise load prediction, and the use of A2C to optimise offloading decisions dynamically based on these predictions and the current system state. Comprehensive experiments show that the proposed model surpasses baseline strategies, such as Deep Q-Networks (DQN), in maximising reward and maintaining system stability. These findings underscore the potential of DRL-based methods to significantly enhance IoT edge computing efficiency through balanced, responsive task offloading, supporting the development of more intelligent and resilient IoT systems.
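The predict-then-offload pipeline described above can be illustrated with a minimal toy sketch. This is not the authors' model: the BLSTM predictor is replaced here by simple exponential smoothing over each server's load history, and the A2C agent is reduced to a softmax scoring of servers by predicted load with a greedy pick (a full A2C agent would sample from this policy and update it via advantage estimates). All function names and parameters are illustrative assumptions.

```python
import math

def predict_load(history, alpha=0.6):
    # Toy stand-in for the thesis's BLSTM load predictor:
    # exponential smoothing over past load samples (0..1 utilisation).
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def offload_policy(server_histories, temperature=1.0):
    # Score each MEC server by its predicted load and build a softmax
    # policy that favours lightly loaded servers; return the greedy
    # target plus the full policy distribution.
    preds = [predict_load(h) for h in server_histories]
    scores = [-p / temperature for p in preds]  # lower load -> higher score
    probs = softmax(scores)
    target = max(range(len(probs)), key=lambda i: probs[i])
    return target, probs

# Example: three MEC servers with recent utilisation histories.
histories = [[0.8, 0.9, 0.85], [0.2, 0.3, 0.25], [0.5, 0.5, 0.5]]
target, probs = offload_policy(histories)
# server 1 has the lowest predicted load, so the greedy pick is index 1
```

In the actual framework, the BLSTM's load forecasts and the current system state would form the A2C agent's observation, and the actor network, rather than this fixed scoring rule, would learn the offloading policy from reward feedback.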