Deep learning-based stride segmentation with wearable sensors: effects of data quantity, sensor location, and task
Abstract
Accurate stride segmentation from wearable sensors is foundational for digital gait assessment tools, yet systematic evaluations of deep learning approaches across varied real-world mobility tasks remain limited. We developed and assessed Temporal Convolutional Network (TCN) models for stride segmentation using data from 121 older adults with and without Parkinson's disease, specifically evaluating how performance varies with model development data quantity, sensor location, and movement complexity. Using a fixed-size test set of 40 participants, we found that lower-limb sensors achieved F1 scores above 95% during walking with just 5–10 training participants, but that performance declined substantially during more complex movements such as 180° turns. Foot-mounted sensors maintained robust performance across tasks (F1: 99.3% walking, 96.7% turning, 88.4% stationary and transitional movements), while wrist sensors showed marked degradation (F1: 88.4% walking, 72.3% turning, 50.0% stationary and transitional movements). Our findings demonstrate that performance testing for digital gait assessment tools should be tailored to both sensor location and intended use case, as accuracy achieved during controlled walking may not generalize to complex movements in daily life.
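The abstract does not specify the authors' exact architecture or training setup. As a minimal sketch of the general approach, the following PyTorch code implements a TCN-style network that labels every sample of a 6-axis IMU stream as stride or non-stride; all names (StrideSegmentationTCN, TCNBlock) and hyperparameters (channel widths, kernel size, dilation schedule) are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn


class TCNBlock(nn.Module):
    """One dilated 1-D convolution block with a residual connection."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, dilation: int):
        super().__init__()
        # 'same' padding for an odd kernel, so the time axis length is preserved
        pad = (kernel_size - 1) // 2 * dilation
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad, dilation=dilation)
        self.relu = nn.ReLU()
        # 1x1 conv matches channel counts for the residual when they differ
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + self.downsample(x))


class StrideSegmentationTCN(nn.Module):
    """Per-sample binary segmentation of IMU streams (stride vs. non-stride).

    Dilations double per block, growing the receptive field exponentially so
    each output sample sees enough context to cover a full stride.
    """

    def __init__(self, n_channels: int = 6, hidden: int = 32,
                 n_blocks: int = 4, kernel_size: int = 5):
        super().__init__()
        blocks, in_ch = [], n_channels
        for i in range(n_blocks):
            blocks.append(TCNBlock(in_ch, hidden, kernel_size, dilation=2 ** i))
            in_ch = hidden
        self.tcn = nn.Sequential(*blocks)
        self.head = nn.Conv1d(hidden, 1, 1)  # one logit per timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, time) logits
        return self.head(self.tcn(x)).squeeze(1)


# Example: a 10 s window of 6-axis IMU data sampled at an assumed 100 Hz
x = torch.randn(1, 6, 1000)
model = StrideSegmentationTCN()
probs = torch.sigmoid(model(x))  # per-sample probability of "inside a stride"
print(probs.shape)  # torch.Size([1, 1000])
```

Thresholding these per-sample probabilities yields contiguous stride segments, which can then be matched against ground-truth strides to compute the segment-level F1 scores the abstract reports.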