Automatic pain identification and classification in older patients with hip fracture based on multi-modal information fusion

Abstract

Objective: Considering the disadvantages of uni-modal pain recognition, this study aimed to develop a pain recognition and classification system for older patients with hip fractures using multi-modal information fusion. Methods: Building on a ResNet-50 (Residual Network 50) automatic recognition and classification system for pain facial expressions, this study used the VGGish network and a bi-directional long short-term memory (BiLSTM) network to establish a pain speech recognition and classification system, which was optimized with a channel attention mechanism. A weighted-sum mechanism was then used to integrate the two uni-modal systems into a multi-modal pain recognition and classification system. A self-built multi-modal pain database was used for model training and validation, with the data split into training and validation sets in an 8:2 ratio. The final model was tested on the BioVid heat pain dataset. Results: The VGGish model, optimized with the BiLSTM network and the channel attention mechanism, was trained on the hip fracture pain dataset, and its accuracy stabilized at 80% after 500 iterations. The model was then tested on pain grades 2 to 4 of the BioVid heat pain database, and the confusion-matrix analysis showed an accuracy of 85% for pain grade 4. Conclusion: This is the first study to establish an automatic multi-modal pain recognition and classification system based on facial expression and audio information, and to clinically verify the feasibility of this system.
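The abstract describes a late-fusion architecture: a ResNet-50 branch for facial expressions, a VGGish-plus-BiLSTM branch with channel attention for speech, and a weighted sum combining the two uni-modal outputs. The sketch below illustrates one possible reading of that design in PyTorch; it is not the authors' implementation. The module names (ChannelAttention, AudioBranch, FaceBranch, WeightedSumFusion), the 128-dimensional VGGish-style frame embeddings, the fusion weight, and the four-class output are all illustrative assumptions, since the abstract does not specify these details.

```python
# Illustrative sketch only; all dimensions, class counts, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention over BiLSTM feature channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, time, channels)
        weights = self.fc(x.mean(dim=1))       # pool over time -> (batch, channels)
        return x * weights.unsqueeze(1)        # re-weight each feature channel


class AudioBranch(nn.Module):
    """Speech branch: VGGish-style frame embeddings -> BiLSTM -> channel attention -> logits.
    The VGGish front end is stood in for by precomputed 128-d embeddings per audio frame."""
    def __init__(self, embed_dim: int = 128, hidden: int = 64, num_classes: int = 4):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = ChannelAttention(2 * hidden)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, embeddings):             # embeddings: (batch, frames, 128)
        seq, _ = self.bilstm(embeddings)
        seq = self.attn(seq)
        return self.head(seq.mean(dim=1))      # temporal average pooling before the classifier


class FaceBranch(nn.Module):
    """Facial-expression branch: ResNet-50 backbone re-headed for pain classification."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        from torchvision.models import resnet50
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, images):                 # images: (batch, 3, 224, 224)
        return self.backbone(images)


class WeightedSumFusion(nn.Module):
    """Late fusion: weighted sum of the two uni-modal class probabilities."""
    def __init__(self, face_weight: float = 0.5):
        super().__init__()
        self.w = face_weight

    def forward(self, face_logits, audio_logits):
        p_face = F.softmax(face_logits, dim=-1)
        p_audio = F.softmax(audio_logits, dim=-1)
        return self.w * p_face + (1.0 - self.w) * p_audio


if __name__ == "__main__":
    face, audio, fusion = FaceBranch(), AudioBranch(), WeightedSumFusion(face_weight=0.6)
    images = torch.randn(2, 3, 224, 224)       # dummy face frames
    vggish_emb = torch.randn(2, 10, 128)       # dummy 10-frame VGGish-style embeddings
    probs = fusion(face(images), audio(vggish_emb))
    print(probs.shape)                         # torch.Size([2, 4]) fused class probabilities
```

In this reading, each branch is trained as its own classifier and only their output probabilities are combined, which matches the paper's description of integrating two uni-modal systems with a weighted-sum mechanism rather than fusing intermediate features.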
