Taekwondo training motion capture technology based on improved Transformer-GCN


Abstract

To address insufficient motion-capture accuracy, inconsistent scoring, and delayed feedback in Taekwondo training, a Taekwondo motion capture model combining an improved Transformer and a graph convolutional network (GCN) is developed. The model introduces a multi-modal skeleton feature fusion mechanism and combines a sliding-window scoring structure with a temporal modeling module to achieve precise capture of, and rapid scoring feedback on, continuous Taekwondo movements. Validated on two standard open datasets in the field, the model achieves a scoring consistency of up to 91.9%, an F1 score of 0.94, and a feedback delay as low as 38.6 ms when the residual-link weight coefficient is set to 0.5 and the multi-scale attention fusion coefficient to 0.6. Further simulation experiments show that the model maintains an average pose reconstruction error as low as 2.91 across four typical training environments. In the recognition and scoring of four classic Taekwondo movements, the median inter-score variance is as low as 0.042, outperforming three existing mainstream models. Tests on real teaching videos further verify the model's superior key-joint recognition under multi-person interference, occlusion, and complex movements. In summary, the proposed model combining an improved Transformer and a GCN demonstrates superior scoring accuracy and structural recognition capability in complex training environments, offering a feasible path and technical support for building intelligent Taekwondo teaching and motion evaluation systems.
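The abstract's pipeline — a spatial graph convolution over skeleton joints, temporal pooling, and sliding-window scoring for low-latency feedback — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the 5-joint chain skeleton, random weights, and mean-based score head are all hypothetical stand-ins, and a simple softmax-weighted pooling stands in for the Transformer's attention-based temporal modeling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy skeleton: 5 joints in a chain (the real model uses a
# full-body joint graph with multi-modal features).
num_joints, feat_dim, num_frames = 5, 3, 32
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

# Adjacency with self-loops and symmetric normalization, as in a
# standard GCN layer: D^{-1/2} (A + I) D^{-1/2}.
A = np.eye(num_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

def gcn_layer(X, W):
    """Spatial graph convolution over joints: ReLU((A_hat @ X) @ W)."""
    return np.maximum(A_hat @ X @ W, 0.0)

def attention_pool(H):
    """Toy temporal pooling: softmax-weighted mean over frames,
    standing in for the Transformer's self-attention."""
    scores = H @ H.mean(axis=0)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H

# Random stand-ins for learned weights and 3D joint trajectories.
W1 = rng.normal(size=(feat_dim, 8))
seq = rng.normal(size=(num_frames, num_joints, feat_dim))

# Sliding-window scoring: each window of frames is encoded and scored
# independently, so feedback can be emitted before a movement ends.
win, stride = 8, 4
window_scores = []
for start in range(0, num_frames - win + 1, stride):
    frames = seq[start:start + win]                         # (win, joints, feat)
    spatial = np.stack([gcn_layer(f, W1) for f in frames])  # per-frame GCN
    per_frame = spatial.mean(axis=1)                        # pool joints -> (win, 8)
    pooled = attention_pool(per_frame)                      # temporal pooling
    window_scores.append(float(pooled.mean()))              # stand-in score head

print(len(window_scores))  # one score per sliding window
```

With 32 frames, a window of 8, and a stride of 4, the loop emits seven overlapping window scores; shrinking the stride trades compute for finer-grained feedback, which is the mechanism behind the low reported feedback delay.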
