A 3D Convolutional Neural Network Model for Figure Skating Action Capture and Pose Recognition Based on Spatiotemporal Geometric Theory

Abstract

This research addresses the challenging problem of automatically recognizing complex action sequences in figure skating by proposing a specialized 3D convolutional neural network model. First, building on Riemannian geometric theory, we establish a spatiotemporal manifold representation of figure skating actions, derive action-invariance theorems, and construct an adaptive spatiotemporal convolutional architecture with an attention mechanism tailored to high-speed rotational movements. Second, we design precise capture algorithms for complex rotational, jumping, and artistic performance actions, achieving high-precision motion-trajectory extraction. Third, we establish a hierarchical recognition mechanism for technical actions and error-detection algorithms based on group-theoretic symmetry, covering recognition at multiple levels, from basic postures to complete routines. Finally, we build a comprehensive performance evaluation system through accuracy validation, generalization testing, and computational-efficiency optimization. Experimental results show that the model achieves 94.7% accuracy on standard figure skating action recognition and 91.2% precision on complex jump recognition, improvements of 12.3% and 15.8%, respectively, over traditional methods. Theoretical analysis establishes the algorithm's convergence properties and generalization error bounds, providing theoretical support and a technical foundation for intelligent sports training and automated competition judging.
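
To make the described architecture more concrete, the following is a minimal PyTorch-style sketch of the kind of 3D convolutional block combined with a temporal attention gate that the abstract outlines. It is not the authors' implementation: all module names, layer sizes, the attention formulation, and the number of classes are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's released code): a 3D convolutional
# backbone with a simple temporal attention gate over a short video clip.
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    """3D convolution over (frames, height, width) with batch norm and ReLU."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=(3, 3, 3), padding=1)
        self.bn = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return self.relu(self.bn(self.conv(x)))


class TemporalAttention(nn.Module):
    """Reweights frames by a learned score, e.g. to emphasize rotation phases."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Score each frame, pool spatially, normalize over time, then reweight.
        weights = torch.softmax(
            self.score(x).mean(dim=(3, 4), keepdim=True), dim=2)
        return x * weights


class SkatingActionNet(nn.Module):
    """Toy end-to-end classifier over short clips of skating video."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            SpatioTemporalBlock(3, 32),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # spatial downsampling only
            SpatioTemporalBlock(32, 64),
            TemporalAttention(64),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        features = self.backbone(clip).flatten(1)
        return self.classifier(features)


if __name__ == "__main__":
    # One random batch: 2 clips, RGB, 16 frames, 112x112 pixels.
    clip = torch.randn(2, 3, 16, 112, 112)
    logits = SkatingActionNet(num_classes=10)(clip)
    print(logits.shape)  # torch.Size([2, 10])
```

The temporal attention gate here stands in for the paper's attention mechanism for high-speed rotational movements; the manifold-based representation and group-theoretic error detection described in the abstract are not reflected in this sketch.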
