Learning-Assisted Multi-IMU Proprioceptive State Estimation for Quadruped Robots
Abstract
This paper presents a learning-assisted approach for state estimation of quadruped robots using observations from proprioceptive sensors, including multiple inertial measurement units (IMUs). Specifically, one body IMU and four additional IMUs, one attached to each calf link of the robot, are used together with joint encoders to sense the dynamics of the body and legs. An extended Kalman filter (EKF) is employed to fuse the sensor data and estimate the robot’s states in the world frame; measurements from a vision system can further enhance the convergence of the EKF. To circumvent the requirement for measurements from a motion capture (mocap) system or other vision systems, the right-invariant EKF (RI-EKF) is extended to incorporate the foot IMU measurements for enhanced state estimation, and a learning-based approach is presented to estimate the vision-system measurements for the EKF: one-dimensional convolutional neural networks (CNNs) are leveraged to estimate the required measurements using only the available proprioceptive data. Experiments on real data from a quadruped robot demonstrate that proprioception can be sufficient for state estimation. The proposed learning-assisted approach, which does not rely on data from vision systems, achieves accuracy competitive with an EKF using mocap measurements and lower estimation errors than an RI-EKF using multi-IMU measurements.
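To illustrate the fusion step the abstract describes, the sketch below shows a generic EKF predict/update cycle. It is not the paper's formulation: the toy 1-D position/velocity state, the models, and the noise values are hypothetical, and the position measurement stands in for either a mocap reading or its learned (CNN-estimated) surrogate.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f
    (Jacobian F) with process noise Q -- the IMU-driven step."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z, observation model h
    (Jacobian H), and measurement noise R."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical toy example: constant-velocity model corrected by a
# position-only measurement (mocap-like, or produced by a learned model).
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state-transition Jacobian
H = np.array([[1.0, 0.0]])              # observe position only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, lambda s: F @ s, F, Q)
x, P = ekf_update(x, P, np.array([0.05]), lambda s: H @ s, H, R)
```

The position variance `P[0, 0]` shrinks sharply after the update, which is the convergence benefit that the learned measurements aim to provide when no vision system is available.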