UAV Task Scheduling and Resource Allocation for Data Collection Applications: a Hierarchical Reinforcement Learning Approach

Abstract
In recent years, the utilization of unmanned aerial vehicles (UAVs) has surged across a variety of applications, including weather monitoring, emergency search and rescue operations, and smart agriculture. In these applications, UAVs are instrumental in executing data collection tasks. However, because resources such as battery capacity are scarce, mission duration is limited, necessitating the optimization of UAV trajectories and resource allocation. In this paper, we introduce a hierarchical task scheduling and resource allocation scheme for UAV data collection tasks, which incorporates deep reinforcement learning (DRL) into a two-layer training framework. The continuous actions, including UAV trajectories, UAV communication power, and UAV CPU main frequency, are optimized in the lower layer, while the upper layer concentrates on the discrete action space for target allocation. In particular, the well-trained lower-layer network is integrated into the upper layer, facilitating rapid global reward feedback between the two layers and accelerating the training of the upper-layer network. Experimental results demonstrate that the proposed approach outperforms baseline methods in terms of both the amount of collected data and UAV energy consumption.
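The hierarchical structure described above can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in, not the paper's actual method: the trained lower-layer DRL policy is replaced by a closed-form surrogate that scores a target assignment, and the upper layer's discrete optimization is replaced by random search over that reward feedback.

```python
import random

def lower_layer(assignment):
    """Stand-in for the frozen, pre-trained lower-layer policy.

    Given a discrete target-to-UAV assignment, a real lower layer would
    optimize continuous actions (trajectory, transmit power, CPU frequency)
    and return the resulting global reward. Here we use a toy surrogate:
    reward is higher when targets are spread evenly across UAVs.
    """
    loads = {}
    for uav in assignment:
        loads[uav] = loads.get(uav, 0) + 1
    imbalance = max(loads.values()) - min(loads.values())
    return -imbalance  # global reward fed back to the upper layer

def upper_layer(num_targets, num_uavs, trials=200):
    """Stand-in for the upper-layer discrete policy.

    Searches over discrete target allocations, querying the (frozen)
    lower layer for fast reward feedback; random search stands in for
    the DRL training loop of the actual scheme.
    """
    best, best_r = None, float("-inf")
    for _ in range(trials):
        cand = [random.randrange(num_uavs) for _ in range(num_targets)]
        r = lower_layer(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

if __name__ == "__main__":
    random.seed(0)
    allocation, reward = upper_layer(num_targets=4, num_uavs=2)
    print(allocation, reward)
```

The key design point mirrored here is that the upper layer never re-trains the lower layer: it only queries it for reward, which is what allows the fast feedback loop the abstract describes.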
