A Comprehensive Review of Intelligent Navigation of Mobile Robots Using Reinforcement Learning with a Comparative Analysis of a Modified Q-Learning Method and DQN in a Simulated Gym Environment


Abstract

Purpose: The field of autonomous mobile robots (AMRs) has grown significantly in recent years, propelled by advances in autonomous driving and unmanned aerial vehicles (UAVs). Integrating intelligence into robotic systems raises a range of research challenges, with navigation emerging as a pivotal aspect of mobile robotics. This paper addresses the three fundamental questions of the navigation problem: localization (determining the robot's position), mapping (building a representation of the environment), and path planning (finding the optimal route to the destination). The proposed solution to the mobile robot navigation problem integrates these three foundational components. Methods: A comparative analysis between the modified Q-learning method and a deep Q-network (DQN) on simulated Gym pathfinding tasks demonstrates the efficacy of this approach. The modified Q-learning algorithm consistently outperforms DQN, navigating complex environments and reaching optimal solutions more reliably. The transition from a deterministic environment to a simulated Gym environment serves as a valuable validation of the method's applicability to real-world scenarios; rigorous evaluation in a controlled setting supports its robustness and effectiveness across a broader range of applications. Results: The study establishes the modified Q-learning algorithm as a promising approach to the exploration-exploitation dilemma in reinforcement learning. Its superior performance in simulated Gym environments suggests potential for real-world applications in domains including robotics, autonomous navigation, and game development. Conclusion: The paper furnishes a comprehensive overview of research on autonomous mobile robot navigation.
It begins with a succinct introduction to the diverse facets of navigation, followed by an examination of the roles of machine learning and reinforcement learning in mobile robotics. The paper then surveys various path planning techniques. Finally, it presents a comparative analysis of two path planning methods for mobile robots: Q-learning with an enhanced exploration strategy and a Deep Q-Network (DQN). A comprehensive simulation study in a Gym environment firmly establishes the superior performance of the proposed Q-learning approach.
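The exploration-exploitation trade-off that the comparison targets can be illustrated with tabular Q-learning under a decayed epsilon-greedy policy. The sketch below is a generic minimal example in a toy 5x5 grid world, not the paper's modified method or its Gym environment; the grid size, rewards, and hyperparameters are all assumptions for demonstration.

```python
import random

# Toy 5x5 grid-world pathfinding task (an assumption for illustration,
# standing in for the paper's Gym environment).
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
START, GOAL = (0, 0), (SIZE - 1, SIZE - 1)

def env_step(state, action):
    """Deterministic grid transition; walls clamp the agent in place."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    if nxt == GOAL:
        return nxt, 10.0, True   # terminal reward at the goal
    return nxt, -1.0, False      # per-step cost favours short paths

def greedy_action(q, state):
    """Pick the action with the highest current Q-value estimate."""
    return max(range(len(ACTIONS)), key=lambda a: q.get((state, a), 0.0))

def train(episodes=2000, alpha=0.1, gamma=0.95, eps_start=1.0, eps_end=0.05):
    random.seed(0)
    q = {}  # Q-table: (state, action) -> estimated return
    for ep in range(episodes):
        # Linearly decay epsilon so exploration gives way to exploitation.
        eps = eps_start + (eps_end - eps_start) * ep / (episodes - 1)
        state, done = START, False
        while not done:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))   # explore
            else:
                a = greedy_action(q, state)          # exploit
            nxt, reward, done = env_step(state, a)
            best_next = 0.0 if done else max(
                q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
            old = q.get((state, a), 0.0)
            # Standard Q-learning temporal-difference update.
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def rollout(q, max_steps=50):
    """Follow the greedy policy; return (steps_taken, reached_goal)."""
    state, steps = START, 0
    while state != GOAL and steps < max_steps:
        state, _, _ = env_step(state, greedy_action(q, state))
        steps += 1
    return steps, state == GOAL

if __name__ == "__main__":
    q = train()
    steps, reached = rollout(q)
    print(f"greedy path: {steps} steps, reached goal: {reached}")
```

After training, the greedy policy should trace a short path to the goal; a DQN would replace the dictionary Q-table with a neural network approximator, which is what the paper's comparison contrasts.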
