Research on a Lightweight Dynamic Gesture Recognition Model Driven by Meta-Learning under Small-Sample Conditions

Abstract

This study addresses the challenges of model efficiency and generalization in dynamic gesture recognition under small-sample conditions. It proposes an efficient gesture recognition framework that integrates meta-learning strategies with a lightweight network architecture. By combining optimization-based meta-learning with lightweight techniques such as Neural Architecture Search (NAS) and Knowledge Distillation (KD), the framework achieves rapid adaptation to, and accurate recognition of, dynamic gestures from only a small number of samples. To evaluate the method, systematic experiments are carried out on several standard datasets, including DHG-14, SHREC2017, and FPHA, and few-shot tasks are constructed that cover cross-user differences, viewpoint changes, background interference, and other challenges. The results are compared both with traditional models such as 3D-CNN and ST-GCN and with mainstream meta-learning baselines such as MAML and ProtoNet. Simulation results show that the proposed lightweight meta-learning model significantly reduces model complexity and computational overhead while maintaining high recognition accuracy.
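The core mechanism the abstract refers to, optimization-based meta-learning, can be illustrated with a minimal first-order MAML-style sketch. This is not the paper's gesture model: the 1-D linear regression task, the learning rates, and the single inner gradient step are all illustrative assumptions chosen only to show the inner-loop (fast adaptation on a small support set) and outer-loop (meta-update on a query set) structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Sample a toy task: y = w*x with a task-specific slope w.

    Each task provides a small support set (used for adaptation)
    and a query set (used for the meta-update), mirroring the
    few-shot episode structure described in the abstract."""
    w = rng.uniform(-2.0, 2.0)
    x_s = rng.uniform(-1, 1, size=5)   # support set (few-shot)
    x_q = rng.uniform(-1, 1, size=5)   # query set
    return (x_s, w * x_s), (x_q, w * x_q)

def loss_grad(theta, x, y):
    """Squared-error loss and its gradient for the model y_hat = theta*x."""
    err = theta * x - y
    return float(np.mean(err ** 2)), float(np.mean(2 * err * x))

theta = 0.0                    # meta-learned initialization
inner_lr, outer_lr = 0.1, 0.05

for step in range(500):
    (x_s, y_s), (x_q, y_q) = make_task()
    # Inner loop: one gradient step on the support set (fast adaptation).
    _, g_s = loss_grad(theta, x_s, y_s)
    theta_adapted = theta - inner_lr * g_s
    # Outer loop: update the initialization using the query-set gradient
    # at the adapted parameters (first-order MAML approximation).
    _, g_q = loss_grad(theta_adapted, x_q, y_q)
    theta -= outer_lr * g_q

# At meta-test time, the learned initialization adapts from the
# support set alone, then is evaluated on the held-out query set.
(x_s, y_s), (x_q, y_q) = make_task()
loss_before, g_s = loss_grad(theta, x_q, y_q), loss_grad(theta, x_s, y_s)[1]
adapted = theta - inner_lr * g_s
loss_after, _ = loss_grad(adapted, x_q, y_q)
```

In the paper's setting, the scalar model would be replaced by the lightweight gesture network, and the inner loop would adapt it to a new user, viewpoint, or background from a handful of labeled gesture clips.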