A Real-Time Motion Deblurring Network for Sports-Related Dynamic Scenes Based on Lightweight GAN
Abstract
Motion blur severely degrades visual quality in sports-related dynamic scenes, where fast human movements, complex motion patterns, and unconstrained imaging conditions are commonly encountered. Such degradation not only affects visual perception but also limits the effectiveness of subsequent human motion analysis tasks. Existing motion deblurring methods often struggle to achieve a satisfactory balance between restoration quality and computational efficiency, particularly under real-time constraints. To address these challenges, this paper proposes DFPDeblurGAN, a real-time motion deblurring framework designed for sports-related dynamic scenes. The proposed method is built upon a lightweight generative adversarial network architecture that emphasizes both efficiency and restoration fidelity. Specifically, a MobileNetV2-based generator integrated with dynamic convolution is introduced to reduce model complexity while enhancing adaptability to spatially varying motion blur. In addition, a dual-path feature pyramid network (Dual-FPN) is designed to effectively fuse low-level spatial details and high-level semantic information, enabling accurate reconstruction of motion boundaries and fine-grained structures. Extensive experiments are conducted on public benchmark datasets and sports-related dynamic scenes. Quantitative and qualitative results demonstrate that the proposed method achieves superior performance in terms of restoration quality and inference speed compared with state-of-the-art approaches. The proposed framework provides an effective solution for real-time motion deblurring in sports analytics and other dynamic scene understanding applications.
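To make the dynamic-convolution idea mentioned above concrete, the sketch below shows the standard attention-over-kernels formulation in NumPy: a global-average-pooled descriptor of the input is passed through a small routing layer, a softmax produces per-kernel attention weights, and the K candidate kernels are aggregated into one input-conditioned kernel before a single convolution is applied. This is a minimal illustration of the general mechanism, not the authors' implementation; the routing layer (`routing_w`, `routing_b`) and all shapes are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv2d(x, kernels, routing_w, routing_b):
    """Sketch of dynamic convolution: attention weights over K candidate
    kernels are computed from the input itself, the kernels are aggregated,
    and one ordinary convolution is applied with the aggregated kernel.

    x        : (C, H, W)              input feature map
    kernels  : (K, C_out, C, kh, kw)  candidate kernels
    routing_w: (K, C), routing_b: (K,)  hypothetical tiny linear router
    """
    K, C_out, C, kh, kw = kernels.shape
    # 1) squeeze: global average pooling over spatial dimensions -> (C,)
    ctx = x.mean(axis=(1, 2))
    # 2) route: softmax attention over the K candidate kernels -> (K,)
    attn = softmax(routing_w @ ctx + routing_b)
    # 3) aggregate one input-conditioned kernel -> (C_out, C, kh, kw)
    w = np.tensordot(attn, kernels, axes=(0, 0))
    # 4) naive 'valid' convolution (loops kept for clarity, not speed)
    H, W = x.shape[1], x.shape[2]
    Ho, Wo = H - kh + 1, W - kw + 1
    y = np.zeros((C_out, Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            patch = x[:, i:i + kh, j:j + kw]          # (C, kh, kw)
            y[:, i, j] = (w * patch).sum(axis=(1, 2, 3))
    return y, attn

# Toy usage: 4 input channels, 3 candidate 3x3 kernels, 6 output channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
kernels = rng.standard_normal((3, 6, 4, 3, 3))
rw = rng.standard_normal((3, 4))
rb = np.zeros(3)
y, attn = dynamic_conv2d(x, kernels, rw, rb)
print(y.shape)  # (6, 6, 6)
```

Because the attention depends on the input, differently blurred regions or frames select different kernel mixtures at negligible extra cost, which is why this construction suits spatially varying motion blur in a lightweight generator.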