Multi-Agent Deep Reinforcement Learning for Cooperative Path Planning of UAV Swarms

Abstract

Cooperative path planning for UAV swarms in dynamic, uncertain environments faces the dual challenges of partial observability and cooperation-mechanism design. The decentralized decision-making nature of multi-agent reinforcement learning (MARL) provides a theoretical framework for the autonomous coordination of heterogeneous UAV swarms under partially observable conditions. This paper proposes a reciprocity-reward-enhanced multi-agent deep reinforcement learning method (PMI-MADDPG) that optimizes cooperative UAV decision-making within a centralized-training, decentralized-execution framework. We construct a partially observable Markov decision process (POMDP) model, design continuous action spaces that respect UAV kinematic constraints, and quantify inter-agent state dependencies using pointwise mutual information (PMI). A novel cooperative-coefficient estimation network dynamically balances individual rewards against swarm-level objectives. Simulation results show that, compared with conventional multi-agent methods, PMI-MADDPG achieves significant advantages in task reward acquisition and network convergence efficiency, while also revealing how the number of UAVs affects system stability. The proposed approach offers an innovative solution for cooperative path planning of UAV swarms in complex environments.
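The abstract's use of pointwise mutual information to quantify inter-agent state dependencies can be illustrated with a minimal sketch. The example below is a generic PMI computation over a table of joint visit counts for discretized agent states; it is an assumption-laden illustration, not the authors' estimator (the paper's PMI network presumably learns these quantities from continuous observations rather than counts).

```python
import numpy as np

def pointwise_mutual_information(joint_counts):
    """PMI matrix from a table of joint occurrence counts.

    joint_counts[i, j] counts how often agent A occupied (discretized)
    state i while agent B occupied state j. PMI(i, j) =
    log p(i, j) / (p(i) p(j)); a positive value means the state pair
    co-occurs more often than independence would predict, indicating
    a dependency between the two agents' states.
    """
    joint = joint_counts / joint_counts.sum()          # joint distribution p(i, j)
    p_a = joint.sum(axis=1, keepdims=True)             # marginal p(i) for agent A
    p_b = joint.sum(axis=0, keepdims=True)             # marginal p(j) for agent B
    with np.errstate(divide="ignore"):                 # log(0) -> -inf for unseen pairs
        return np.log(joint / (p_a * p_b))

# Toy example: two agents with two discretized states each.
# Matched states (diagonal) co-occur far more often than mismatched ones.
counts = np.array([[40.0, 10.0],
                   [10.0, 40.0]])
pmi = pointwise_mutual_information(counts)
# Diagonal entries are positive (states are coupled); off-diagonal negative.
```

A method like PMI-MADDPG would plausibly feed such dependency scores into the reward shaping, so that an agent is credited when its state is informatively coupled to its teammates' states.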
