Deep Reinforcement Learning based Path Planning with Dynamic Trust Region Optimization for Automotive Application


Abstract

Multi-robot path planning must adapt to difficult situations, enabling autonomous navigation among both static and dynamic obstacles in complex environments. However, identifying the best planning strategy for a given application remains an open problem. This study examined three methods for learning complex robotic decision-making policies: Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), and Deep Reinforcement Learning (DRL). Furthermore, it proposed a novel technique for obstacle avoidance and autonomous navigation called Dynamic Improvement Trust Region Policy Optimization with Covariance Grid Adaptation (DITRPO-CGA). First, a Dynamic Improvement Proximal Policy Optimization with Covariance Grid Adaptation (DIPPO-CGA) method was created, based on PPO, to ensure collision-free policies. Next, a DRL technique integrating DIPPO-CGA was developed, resulting in the DITRPO-CGA algorithm, which improved the flexibility of multi-robot systems across different situations. During training, DIPPO-CGA is used to optimize the multi-robot multi-task policies, ensuring minimum-distance obstacle avoidance and task completion, so that each robot reaches its target within the minimum distance. The findings showed that, compared with PPO, TRPO, and DIPPO-CGA, the proposed DITRPO-CGA algorithm achieves a higher convergence rate and reaches target positions more quickly.
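The abstract builds on PPO- and TRPO-style policy updates. As background, the standard PPO clipped surrogate objective (the base algorithm, not the paper's DIPPO-CGA variant, whose details are not given here) can be sketched as follows; the function names and example values are illustrative only:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective.

    ratio:     pi_new(a|s) / pi_old(a|s), the probability ratio.
    advantage: estimated advantage A(s, a).
    eps:       clipping range; updates are flattened once the
               ratio leaves [1 - eps, 1 + eps].
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the element-wise minimum gives a pessimistic bound,
    # discouraging destructively large policy steps.
    return np.minimum(unclipped, clipped)

# Three sample ratios with unit positive advantages:
ratios = np.array([0.5, 1.0, 1.5])
advs = np.ones(3)
print(ppo_clip_objective(ratios, advs))
```

With a positive advantage, the objective stops rewarding ratio increases beyond 1 + eps (here, 1.5 is capped at 1.2), which is PPO's cheap surrogate for TRPO's explicit trust-region constraint.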
