Adaptive Actor–Critic Optimal Tracking Control for a Class of High-Order Nonlinear Systems with Partially Unknown Dynamics


Abstract

Optimal tracking control for high-order nonlinear systems with partially unknown dynamics poses significant challenges, particularly in deriving tractable solutions without requiring persistent excitation (PE) conditions or a precise system model. This study develops an adaptive optimal tracking control law using neural network (NN)-based reinforcement learning (RL) for high-order partially unknown nonlinear systems. By designing a cost function on the sliding mode variable (SMV), the original tracking control problem is equivalently transformed into an optimal control problem governed by the tracking Hamilton–Jacobi–Bellman (HJB) equation. Since the analytical solution of the HJB equation is generally intractable, we employ a policy iteration algorithm derived from it, in which both the partial derivative of the optimal tracking cost function and the optimal control law are approximated by NNs. The proposed RL framework simplifies training: the actor–critic update laws are derived from the condition that a simple auxiliary function equals zero. Finally, both a numerical example and a single-link robotic arm application demonstrate the effectiveness and advantages of the proposed adaptive optimal tracking control method.
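To illustrate the policy iteration idea the abstract relies on, the sketch below applies Kleinman-style policy iteration to a scalar linear surrogate of the SMV error dynamics, s_dot = a*s + u, with quadratic cost ∫(q*s² + r*u²) dt. This is a hedged simplification: the scalar dynamics and the values of a, q, r are illustrative assumptions, not the paper's high-order nonlinear system, and the NN approximators are replaced by exact scalar solves so the iteration can be checked against the analytic Riccati solution.

```python
import math

def policy_iteration(a, q, r, k0, n_iter=25):
    """Policy iteration for s_dot = a*s + u with cost integral(q*s^2 + r*u^2).

    Assumes u = -k*s with an initially stabilizing gain k0 (i.e. a - k0 < 0).
    Each step alternates:
      - policy evaluation: solve the scalar Lyapunov equation
        2*(a - k)*p + q + r*k^2 = 0  for the cost parameter p (V(s) = p*s^2)
      - policy improvement: k = p / r  (minimizer of the Hamiltonian)
    """
    k = k0
    p = None
    for _ in range(n_iter):
        # Policy evaluation (closed form in the scalar case).
        p = (q + r * k * k) / (2.0 * (k - a))
        # Policy improvement from the HJB minimization.
        k = p / r
    return p, k

# Illustrative parameters (assumed, not from the paper).
a, q, r = 1.0, 1.0, 1.0
p_star = r * (a + math.sqrt(a * a + q / r))  # analytic Riccati solution
p, k = policy_iteration(a, q, r, k0=2.0)
print(p, p_star)  # the iterates converge monotonically to p_star
```

In the paper's setting, the policy-evaluation and policy-improvement steps are instead carried out by the critic and actor NNs respectively, since no closed-form solve exists for the nonlinear tracking HJB equation.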
