Deep Learning Algorithms for Human Activity Recognition in Manual Material Handling Tasks
Abstract
Human Activity Recognition (HAR) is widely used in healthcare, but few works focus on Manual Material Handling (MMH) activities, despite their diffusion and their impact on workers' health. We propose four Deep Learning algorithms for HAR in MMH: Bidirectional Long Short-Term Memory (BiLSTM), Sparse Denoising Autoencoder (Sp-DAE), Recurrent Sp-DAE, and Recurrent Convolutional Neural Network (RCNN). We explored different hyperparameter combinations to maximize classification performance (F1-score) using wearable-sensor data gathered from 14 subjects. For each network we identified the three best parameter combinations on the full dataset and selected the two best-performing networks, which were then compared on 14 datasets of increasing subject count, using both a 70%-30% split and Leave-One-Subject-Out (LOSO) validation, to evaluate whether they would perform better with a larger dataset. The benchmarking network DeepConvLSTM was tested on the full dataset. BiLSTM performed best in both classification and complexity (F1-score of 95.7% with the 70%-30% split, 90.3% with LOSO). RCNN performed similarly (95.9%, 89.2%), with a positive trend as the number of subjects grew. DeepConvLSTM achieved similar classification performance (95.2%, 90.3%) while requiring more than 100 times as many Multiply-and-Accumulate (MAC) and Multiplication-and-Addition (MAdd) operations, which measure network complexity. BiLSTM and RCNN thus perform close to DeepConvLSTM while being far lighter computationally, fostering their use in embedded systems. Such lightweight algorithms can be readily used in automatic ergonomic and biomechanical risk assessment systems, enabling personalized risk assessment and easing the adoption of safety measures in industrial practices involving MMH.
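To illustrate the kind of pipeline the abstract describes, below is a minimal PyTorch sketch of a BiLSTM window classifier evaluated with Leave-One-Subject-Out validation. This is not the paper's implementation: the channel count, window length, hidden size, number of classes, and the synthetic data are illustrative assumptions, and the training loop is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import LeaveOneGroupOut

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over fixed-length windows of wearable-sensor data."""
    def __init__(self, n_channels=6, hidden_size=64, n_classes=8):  # assumed sizes
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):                # x: (batch, time, channels)
        out, _ = self.lstm(x)            # (batch, time, 2 * hidden_size)
        return self.head(out[:, -1, :])  # class logits from the last time step

# Synthetic stand-in for windowed sensor data from 14 subjects.
X = np.random.randn(1400, 128, 6).astype(np.float32)  # windows x time x channels
y = np.random.randint(0, 8, size=1400)                # activity label per window
subjects = np.repeat(np.arange(14), 100)              # subject id per window

# LOSO: each fold holds out every window belonging to one subject.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    model = BiLSTMClassifier()
    logits = model(torch.from_numpy(X[test_idx]))  # fit on train_idx first
```

Because every window from the held-out subject is excluded from training, LOSO estimates how the classifier generalizes to unseen workers, which is why its F1-scores in the abstract are lower than those of the 70%-30% split.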