A Dual-Modal Prompt Framework for LLM-Assisted Robotic Motion Planning and Control Optimization
Abstract
The growing demands of Industry 4.0 call for advanced robotic systems with efficient, precise, and robust motion planning and control. Traditional motion planning and control optimization methods, however, suffer from high design cost, limited generalization caused by manual parameter tuning, and poor interpretability stemming from complex mathematical models. In this paper, we propose a dual-modal prompt engineering framework, referred to as Ours, that leverages recent advances in large language models (LLMs) to reason jointly over natural-language task descriptions and structured numerical parameters of the robotic system, and uses this combined understanding to plan optimal motions and control strategies. A closed-loop optimization process iteratively refines the results based on feedback from a simulated environment. Evaluation on an industrial robotic arm shows that Ours achieves faster optimization and better control precision, stability, and robustness under changing conditions than both traditional design baselines and single-modal variants. These results highlight the new possibilities of using large language models as intelligent agents for robotic motion planning and control optimization.
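To make the dual-modal prompting and closed-loop refinement described above concrete, the sketch below illustrates one plausible structure for such a loop. It is not the authors' implementation: the names RobotState, build_dual_modal_prompt, query_llm, simulate, and closed_loop_optimize, along with all parameter values, are hypothetical stand-ins assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    """Structured numerical parameters of the robot (hypothetical fields)."""
    joint_limits: list[float]   # rad
    payload_kg: float
    max_velocity: float         # rad/s

def build_dual_modal_prompt(task: str, state: RobotState, feedback: str | None) -> str:
    """Combine the natural-language task with structured numerical parameters."""
    prompt = (
        f"Task: {task}\n"
        f"Robot parameters: joint_limits={state.joint_limits}, "
        f"payload={state.payload_kg} kg, max_velocity={state.max_velocity} rad/s\n"
    )
    if feedback:
        prompt += f"Previous simulation feedback: {feedback}\n"
    return prompt + "Propose an improved motion plan and controller gains."

def query_llm(prompt: str) -> dict:
    # Placeholder: call a language model and parse its proposed plan/gains.
    return {"waypoints": [[0.0, 0.0], [0.5, 0.2]], "gains": {"kp": 40.0, "kd": 5.0}}

def simulate(plan: dict) -> tuple[float, str]:
    # Placeholder: run the plan in a simulator; return a tracking-error score
    # and a textual summary fed back into the next prompt.
    return 0.02, "overshoot 3% on joint 2, settling time 0.8 s"

def closed_loop_optimize(task: str, state: RobotState, iterations: int = 5) -> dict:
    """Iteratively refine the plan using simulated feedback."""
    best_plan, best_score, feedback = None, float("inf"), None
    for _ in range(iterations):
        plan = query_llm(build_dual_modal_prompt(task, state, feedback))
        score, feedback = simulate(plan)
        if score < best_score:
            best_plan, best_score = plan, score
    return best_plan

if __name__ == "__main__":
    arm = RobotState(joint_limits=[3.14, 2.36, 2.36], payload_kg=2.0, max_velocity=1.5)
    print(closed_loop_optimize("move the end effector to the bin, avoiding the fixture", arm))
```

In this sketch the simulator's textual summary is appended to the next prompt, which is one simple way the feedback loop described in the abstract could be realized.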