Machine Learning Workflows for Motion Capture-driven Biomechanical Modelling
Abstract
Biomechanical models driven by motion capture (MoCap) offer unprecedented insight into musculoskeletal (MSK) function and aid clinical decision-making. However, traditional MSK models are computationally expensive, laborious to implement, and require meticulously curated inputs. Such models are increasingly complemented by machine learning (ML) methods that deliver user-friendly, real-time predictions, but these methods have often lacked rigorous implementation and assessment, and choosing a fit-for-purpose ML technique is fraught with trade-offs. Here, we present a comparative implementation of nine ML techniques for the relatively understudied problem of human upper-extremity MSK modelling from optical MoCap input data (the non-invasive gold standard). We investigated model selection and accuracy, generalisability, robustness (to instrumentation errors, soft-tissue artefacts, and anatomical landmark misplacement, all inherent to optical MoCap systems), model complexity, transferability (from intact-limbed participants to ‘mimicked’ transradial prosthesis usage), and interpretability. We also undertook the first assessments of data sufficiency, using learning curves, and of the carbon footprint of training and inference. We found convLSTM, which efficiently learns the spatial and temporal structure of MoCap data, to be the optimal ML technique, while random forest offers a computationally efficient alternative with minimal loss of accuracy. This novel, holistic characterisation helps lay the methodological foundation for better deployment (via increased interpretability and robustness) of ML pipelines in biomechanical studies. Finally, we provide best practices and a reporting guideline (LearnABLE) for the systematic implementation and transparent reporting of ML techniques, aiding their development to better complement and improve traditional MoCap-driven biomechanical modelling.
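
For readers unfamiliar with the two leading techniques, the sketch below illustrates in Python how a convLSTM and a random forest could, in principle, be applied to windowed MoCap marker trajectories for joint-angle prediction. It is a minimal, hypothetical example: the window length, marker count, output dimensions, and layer sizes are assumed placeholders, not the architectures or hyperparameters used in this study.

    # Minimal, illustrative sketch (not the authors' exact pipeline) of the two
    # best-performing approaches: a convLSTM that convolves across markers and
    # recurs across time, and a random forest trained on flattened windows.
    import numpy as np
    import tensorflow as tf
    from sklearn.ensemble import RandomForestRegressor

    T, M, C, J = 100, 16, 3, 7        # time steps, markers, xyz channels, joint angles (assumed)
    X = np.random.rand(256, T, M, C)  # synthetic stand-in for MoCap marker trajectories
    y = np.random.rand(256, J)        # synthetic stand-in for joint-angle targets

    # convLSTM: 1D convolution over the marker dimension, recurrence over time
    conv_lstm = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(T, M, C)),
        tf.keras.layers.ConvLSTM1D(filters=32, kernel_size=3, padding="same",
                                   return_sequences=False),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(J),
    ])
    conv_lstm.compile(optimizer="adam", loss="mse")
    conv_lstm.fit(X, y, epochs=2, batch_size=32, verbose=0)

    # Random forest on flattened windows: cheaper to train, at some cost in
    # temporal structure; multi-output regression handles all joint angles at once
    rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    rf.fit(X.reshape(len(X), -1), y)

The design contrast this sketch is meant to convey is the one highlighted above: the convLSTM exploits the spatial (marker) and temporal structure of the data jointly, whereas the random forest discards that structure but trains far more cheaply.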