Computation offloading and scheduling in Mobile Edge Networks and SDN

Abstract

The emergence of new applications such as augmented reality, autonomous driving, and various cognitive applications has increased the number of computation-intensive, data-intensive, and delay-sensitive tasks. Mobile edge computing over ultra-dense networks is expected to be an efficient solution for meeting the demand for low-latency resources. However, the scattered computing resources in the edge cloud and the energy dynamics of mobile device batteries make task offloading difficult for users. To better support the Internet of Things (IoT), mobile edge computing (MEC) can provide enhanced processing capability close to the devices: IoT devices offload their tasks to a reachable access point (AP), where the tasks are executed. This computing paradigm brings compute resources closer to IoT devices to meet their demanding latency requirements. However, previous MEC research focused on task offloading and resource allocation seldom addresses load balancing. There is therefore an immediate need for MEC-aware task offloading techniques for IoT devices that take load balancing into account. Software-defined networking (SDN) is used in this article to address this problem, because SDN's rule-based forwarding policy can help determine the most appropriate offloading channel and AP for the computation. To make the proposed SDN-assisted MEC design as responsive as possible, we formulate an optimization problem that minimizes response latency. The proposed approach uses deep reinforcement learning (DRL) to approximate the optimal solution in polynomial time. The effectiveness of the proposed method is evaluated via simulation; the results show that our method achieves the lowest response latency among the compared approaches.
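
The abstract does not give the authors' exact DRL formulation, so the following is only a minimal, illustrative sketch of the general idea: a learned value network scores candidate APs for an offloaded task, using negative response latency as the reward. The environment model, network size, and all numeric parameters below are assumptions for illustration, not values from the article.

```python
# Illustrative sketch (NOT the authors' implementation): a small Q-network that
# picks an access point (AP) for task offloading so as to minimize response
# latency. The latency model and hyperparameters are assumed for demonstration.
import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_APS = 4                  # candidate APs / edge servers (assumed)
STATE_DIM = 1 + N_APS      # [task size, load of each AP]

def make_state():
    """Toy state: random task size (MB) and current queued load (MB) per AP."""
    task = np.random.uniform(1.0, 10.0)
    loads = np.random.uniform(0.0, 20.0, size=N_APS)
    return np.concatenate(([task], loads)).astype(np.float32)

def response_latency(state, ap):
    """Toy latency model: transmission + queuing + processing at the chosen AP."""
    task, loads = state[0], state[1:]
    return 0.05 * task + 0.02 * loads[ap] + 0.01 * task  # seconds (illustrative)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_APS))
    def forward(self, x):
        return self.net(x)

q = QNet()
opt = optim.Adam(q.parameters(), lr=1e-3)
eps = 0.2  # epsilon-greedy exploration rate

for step in range(2000):
    s = make_state()
    s_t = torch.from_numpy(s)
    # Epsilon-greedy choice of which AP receives the offloaded task.
    a = random.randrange(N_APS) if random.random() < eps else int(q(s_t).argmax())
    r = -response_latency(s, a)          # reward = negative response latency
    # One-step (bandit-style) update toward the observed reward.
    pred = q(s_t)[a]
    loss = (pred - torch.tensor(r, dtype=torch.float32)) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the greedy policy tends to prefer lightly loaded APs.
s = make_state()
print("AP loads:", s[1:], "-> chosen AP:", int(q(torch.from_numpy(s)).argmax()))
```

A full DRL formulation would also track state transitions (e.g., evolving AP queues and device battery levels) and discounted future reward; this sketch reduces the problem to an immediate-reward decision purely to show how latency feedback can drive the offloading policy.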
