Unsupervised Segmentation and Alignment of Multi‑Demonstration Trajectories via Multi‑Feature Saliency and Duration‑Explicit HSMMs

Abstract

Learning from demonstration with multiple executions must contend with time warping, sensor noise, and alternating quasi-stationary and transition phases. We propose a label-free pipeline that couples unsupervised segmentation, duration-explicit alignment, and probabilistic encoding. A dimensionless multi-feature saliency (velocity, acceleration, curvature, direction-change rate) yields scale-robust keyframes via persistent peak–valley pairs and non-maximum suppression. A hidden semi-Markov model (HSMM) with explicit duration distributions is jointly trained across demonstrations to align trajectories on a shared semantic time base. Segment-level probabilistic motion models (GMM/GMR or ProMP, optionally combined with DMP) produce mean trajectories with calibrated covariances that interface directly with constrained planners. Feature weights are tuned without labels by minimizing cross-demonstration structural dispersion on the simplex via CMA-ES. Across UAV flight, autonomous driving, and robotic manipulation, the method reduces phase-boundary dispersion by 31% on UAV Sim and by 30–36% under monotone time warps, noise, and missing data (vs. HMM); improves the sparsity–fidelity trade-off (higher time compression at comparable reconstruction error) with lower jerk; and attains nominal 2σ coverage (94–96%), indicating well-calibrated uncertainty. Ablations attribute the gains to persistence plus NMS, weight self-calibration, and duration-explicit alignment. The framework is scale-aware and computationally practical, and its uncertainty outputs feed directly into MPC/OMPL for risk-aware execution.
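
To make the saliency stage concrete, here is a minimal Python sketch, not the authors' implementation. It assumes finite-difference features, a median-based normalization to make each channel dimensionless, and uses peak prominence as a stand-in for peak–valley persistence, with scipy's `distance` parameter serving as non-maximum suppression; all function and parameter names are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def saliency_keyframes(traj, dt, w=(0.25, 0.25, 0.25, 0.25),
                       min_prominence=0.1, min_separation=10):
    """Hypothetical sketch of dimensionless multi-feature saliency.
    traj: (T, D) positions sampled at interval dt; w: simplex weights for
    (speed, acceleration, curvature, direction-change rate)."""
    eps = 1e-8
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)
    # Curvature kappa = |v x a| / |v|^3, guarded against zero speed.
    if traj.shape[1] == 3:
        curv = np.linalg.norm(np.cross(vel, acc), axis=1) / (speed**3 + eps)
    else:  # planar case
        curv = np.abs(vel[:, 0] * acc[:, 1]
                      - vel[:, 1] * acc[:, 0]) / (speed**3 + eps)
    # Direction-change rate: angle between successive unit velocity vectors.
    unit = vel / (speed[:, None] + eps)
    dots = np.clip(np.sum(unit[1:] * unit[:-1], axis=1), -1.0, 1.0)
    dcr = np.concatenate([[0.0], np.arccos(dots) / dt])
    # Make each channel dimensionless via a robust scale (median magnitude)
    # so the weighted sum is invariant to units and trajectory scale.
    feats = [speed, acc_mag, curv, dcr]
    norm = [f / (np.median(np.abs(f)) + eps) for f in feats]
    sal = sum(wi * f for wi, f in zip(w, norm))
    # Prominence approximates peak-valley persistence; `distance` acts as
    # non-maximum suppression between nearby keyframe candidates.
    peaks, _ = find_peaks(sal, prominence=min_prominence * sal.max(),
                          distance=min_separation)
    return peaks, sal
```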
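
The segment-level encoding can likewise be illustrated with Gaussian Mixture Regression: fit a GMM on joint (time, state) samples, then condition on time to obtain a mean trajectory with covariances. The sketch below assumes time is the first coordinate and a full-covariance scikit-learn GMM; the paper's exact parameterization, and its ProMP/DMP variants, may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def _gauss1d(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmr(gmm, t_query):
    """Condition a GMM over (t, x) on time t to get E[x|t] and Cov[x|t]."""
    means, covs, pis = gmm.means_, gmm.covariances_, gmm.weights_
    K, D = means.shape
    dx = D - 1  # first dimension is time, the rest is state
    out_mu = np.zeros((len(t_query), dx))
    out_cov = np.zeros((len(t_query), dx, dx))
    for i, t in enumerate(t_query):
        # Responsibilities of each component for this query time.
        h = np.array([pis[k] * _gauss1d(t, means[k, 0], covs[k, 0, 0])
                      for k in range(K)])
        h /= h.sum()
        mus, sigs = [], []
        for k in range(K):
            s_tt, s_xt = covs[k, 0, 0], covs[k, 1:, 0]
            mus.append(means[k, 1:] + s_xt * (t - means[k, 0]) / s_tt)
            sigs.append(covs[k, 1:, 1:] - np.outer(s_xt, s_xt) / s_tt)
        mu_bar = sum(h[k] * mus[k] for k in range(K))
        cov_bar = sum(h[k] * (sigs[k] + np.outer(mus[k], mus[k]))
                      for k in range(K)) - np.outer(mu_bar, mu_bar)
        out_mu[i], out_cov[i] = mu_bar, cov_bar
    return out_mu, out_cov

if __name__ == "__main__":
    # Toy demo on a noisy 1-D arc (synthetic data, for illustration only).
    t = np.linspace(0, 1, 200)
    x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(200)
    gmm = GaussianMixture(n_components=5, covariance_type="full",
                          random_state=0).fit(np.column_stack([t, x]))
    mean, cov = gmr(gmm, t)
    # 2-sigma band of the kind evaluated for coverage in the abstract.
    band = 2 * np.sqrt(cov[:, 0, 0])
```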
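
Finally, the label-free weight calibration can be sketched with the `cma` package (Hansen's pycma): CMA-ES searches unconstrained logits and a softmax keeps the weights on the simplex. The dispersion objective below, the standard deviation of truncated, phase-normalized boundaries across demonstrations, is an assumption standing in for the paper's structural-dispersion measure, and `segment_fn` is a hypothetical hook (e.g., a wrapper around the keyframe detector above that returns boundaries normalized by trajectory length).

```python
import numpy as np
import cma  # pip install cma

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dispersion(logits, demos, segment_fn):
    """Assumed objective: spread of keyframe boundaries across demos.
    segment_fn(demo, w) returns boundary phases in [0, 1]."""
    w = softmax(logits)
    boundaries = [segment_fn(d, w) for d in demos]
    # Truncate to the smallest shared keyframe count; a simplification of
    # whatever boundary matching the paper actually uses.
    k = min(len(b) for b in boundaries)
    B = np.stack([np.asarray(b)[:k] for b in boundaries])
    return float(np.mean(np.std(B, axis=0)))

def calibrate_weights(demos, segment_fn, n_features=4, sigma0=0.5):
    """CMA-ES over logits; softmax maps the best point onto the simplex."""
    es = cma.CMAEvolutionStrategy(np.zeros(n_features), sigma0,
                                  {'verbose': -9})
    while not es.stop():
        xs = es.ask()
        es.tell(xs, [dispersion(np.asarray(x), demos, segment_fn)
                     for x in xs])
    return softmax(es.result.xbest)
```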
