Learning-based Path Planning Techniques for Autonomous Unmanned Aerial Vehicles
Abstract
Unmanned aerial vehicles (UAVs) are of great importance nowadays due to their wide range of uses and capabilities. Since the most distinctive feature of a UAV is its ability to operate remotely on its own, it must generate paths that satisfy its mission. In this paper, several path planning techniques are discussed alongside the emerging field of reinforcement learning (RL). RL is used to train an artificial intelligence (AI) model capable of planning paths in 3-D space in real time while optimizing path length and energy consumption and avoiding obstacles. Path stability was validated using the Monte Carlo method with over 1 million iterations. Results showed that the RL approach was able to find a stable policy requiring 4.5 to 9.6 ms per waypoint, which opens the door for this algorithm to be used in real-time applications. Moreover, the adopted algorithm can be trained on different observation spaces, representing the use of different raw sensor readings, which is an advantage in terms of sensor pre-processing and time consumption. Furthermore, results showed that the success rate reached a maximum of 92 % in simulation, while the artificial potential field (APF) method reached a success rate of 62 % in the same environment. However, there is still room for improvement in obstacle avoidance to ensure safer path planning.