Dynamic Ego-Centric Graph-Based Reinforcement Learning for Autonomous Quadrotor Navigation

Abstract

Efficient and safe autonomous navigation of quadrotors in cluttered and partially observable environments remains a challenging problem due to the constraints imposed by quadrotor dynamics, high-dimensional sensory inputs, and complex obstacle configurations. This paper proposes a hybrid learning framework that integrates Graph Neural Networks (GNNs) with Deep Reinforcement Learning (DRL) to enable adaptive and collision-free quadrotor navigation. The environment is represented as a dynamic, ego-centric graph, where nodes encode local spatial regions or obstacles and edges capture traversability relationships. A GNN-based encoder extracts structured, context-aware embeddings from this representation, which are fused with the quadrotor's dynamic state and provided as input to a Proximal Policy Optimization (PPO) agent for continuous control. The proposed framework is evaluated in a PyBullet simulation environment under identical conditions against standard PPO baselines that use flat, unstructured vector inputs. Experimental results demonstrate that incorporating graph-based environmental reasoning leads to substantial and consistent improvements in navigation performance, including a 27-percentage-point increase in success rate (from 65% to 92%), higher cumulative rewards, smoother trajectories, and a 65% reduction in reward variance. These quantitative gains, coupled with a 45% improvement in obstacle clearance, highlight the effectiveness of structured relational representations in enhancing the robustness, efficiency, and stability of learning-based aerial navigation policies.
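The pipeline described above (ego-centric graph in, message-passing encoder, fusion with the vehicle state, observation out to the policy) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the node features, adjacency, weight matrices, and pooling choice here are hypothetical stand-ins for the learned components, using plain NumPy in place of a GNN library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ego-centric graph: 4 nodes (local regions / obstacles), 3-dim features each.
node_feats = rng.normal(size=(4, 3))          # per-node spatial features
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)   # traversability edges

# One message-passing layer: mean-aggregate neighbor features, then apply
# a linear transform + ReLU. Weights are random stand-ins for learned ones.
deg = adj.sum(axis=1, keepdims=True)
agg = (adj @ node_feats) / np.maximum(deg, 1.0)   # mean over neighbors
W_self, W_nbr = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
embeddings = np.maximum(node_feats @ W_self + agg @ W_nbr, 0.0)  # (4, 8)

# Graph-level readout: mean-pool node embeddings into one context vector.
graph_embedding = embeddings.mean(axis=0)         # shape (8,)

# Fuse with the quadrotor's dynamic state (e.g. velocity + attitude) to
# form the observation vector handed to the PPO policy network.
quad_state = rng.normal(size=(6,))
policy_input = np.concatenate([graph_embedding, quad_state])
print(policy_input.shape)  # (14,)
```

Because the graph is rebuilt around the quadrotor at every step, the same fixed-size fused vector is produced regardless of how many obstacles are nearby, which is what lets a standard PPO agent consume it as a continuous-control observation.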
