Metrological Reformation for Continuous Hand Gesture Recognition: Framework, Benchmarks, and Validation


Abstract

Continuous hand gesture recognition (CHGR) suffers from a critical gap between benchmark performance and real-world deployment, driven by flawed evaluation metrics and poor non-gesture modeling. This work addresses these issues with four technical contributions and two new benchmarks for CHGR: (1) a formal analysis of frame-based metric bias in CHGR, showing rank inversion and a 47 percentage point (pp) duration-dependent variation in the Jaccard Index (JI); (2) GeNGA, a diffusion-based generative non-gesture augmentation framework that reduces false positive rates (FPR) by 47.3% on naturalistic data; (3) the Dictionary Difficulty Index (DDI), a quantitative metric explaining 73% of cross-benchmark CHGR performance variance; and (4) the Continuous Gesture Metrology Framework (CGMF), a standardized evaluation protocol for CHGR. We also release two CGMF-compliant benchmarks: MIX-HAND200 (207 subjects, 100 h of non-gesture data) and AutoGest-Drive (60 drivers, 45 h of automotive gesticulation). Empirical re-evaluation of 3 state-of-the-art (SOTA) CHGR methods on 4 benchmarks shows that reported JI overstates the true event detection rate (EDR) by 9–20.5 pp, with SOTA EDR at 60–73% under validated thresholds. Together, GeNGA and CGMF reduce real-world FPR by 60.8%, demonstrating transformative improvements for CHGR deployment.
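The duration-dependent bias of frame-based metrics can be illustrated with a minimal sketch (not code from the paper; function names and the 50% overlap threshold for counting a detected event are illustrative assumptions): a detector that finds one long gesture but misses a short one can still score a high frame-level JI, while its event detection rate reveals that half the gestures were missed.

```python
def frame_ji(gt_frames, pred_frames):
    """Frame-level Jaccard Index: |gt ∩ pred| / |gt ∪ pred| over frame sets."""
    union = gt_frames | pred_frames
    return len(gt_frames & pred_frames) / len(union) if union else 1.0

def event_detection_rate(gt_events, pred_frames, min_overlap=0.5):
    """Fraction of ground-truth events with at least min_overlap of their
    frames predicted (illustrative matching criterion)."""
    hits = sum(
        1 for start, end in gt_events
        if len(set(range(start, end)) & pred_frames) / (end - start) >= min_overlap
    )
    return hits / len(gt_events)

# Two ground-truth gestures: a long one (frames 0-99) and a short one (200-209).
gt_events = [(0, 100), (200, 210)]
gt_frames = set(range(0, 100)) | set(range(200, 210))

# A detector that recovers only the long gesture.
pred_frames = set(range(0, 100))

print(frame_ji(gt_frames, pred_frames))              # ~0.91: looks strong
print(event_detection_rate(gt_events, pred_frames))  # 0.5: half the events missed
```

Because the short gesture contributes few frames to the union, its complete omission barely dents the JI, which is the kind of gap between frame-level scores and event-level detection that the abstract quantifies.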
