A Hybrid Feature Hashing Approach with Temporal Validation for Efficient Detection of Frame Duplication in Video Forgery
Abstract
This study presents a hybrid framework for detecting video duplication forgery by leveraging deep learning, hashing techniques, and motion analysis. Two novel approaches were explored to evaluate the trade-off between processing speed and detection accuracy. The first approach employed a window-based strategy, using ResNet-50 for feature extraction and MD5 hashing to identify potential duplicates, yielding fast execution times. The second approach improved upon this by implementing a two-stage validation process: hash-based candidate selection, followed by temporal and structural validation using a gap threshold and optical flow analysis, thereby significantly enhancing detection accuracy. Experimental evaluations on the SULFA, TDTVD, and Fadl datasets using the proposed group-based approach demonstrate strong quantitative outcomes: on average, MD5 hashing achieved an accuracy of 100%, a precision of 99.91%, and an F1-score of 100%, surpassing QPCET hashing's 98.2% accuracy, 94.6% precision, and 90.8% F1-score. The group-based approach also reduced execution time by 65.85% relative to frame-wise processing while maintaining detection accuracy. This paper's contributions include a robust, scalable video-processing pipeline that achieves state-of-the-art performance on benchmark datasets, a recommended 25-frame window that balances accuracy and speed, and validation of MD5's effectiveness relative to QPCET for forgery detection. These findings advance automated video authentication, with implications for digital forensics, surveillance, and the verification of social media content.
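The core idea of the second approach can be sketched in miniature. The snippet below is an illustrative simplification, not the authors' implementation: frames are represented here as raw byte strings rather than ResNet-50 feature vectors, and the optical-flow validation stage is omitted. It shows only the hash-based candidate selection followed by the temporal gap threshold, which flags a pair of identical frames as a suspected duplication forgery only when they are far enough apart in time (adjacent identical frames can occur naturally in static scenes). The function name and the `min_gap` parameter are assumptions for illustration.

```python
import hashlib
from itertools import combinations

def find_duplicate_candidates(frames, min_gap=10):
    """Flag frame pairs with identical MD5 digests separated by >= min_gap frames.

    frames: list of bytes-like objects (stand-ins for extracted frame features).
    Returns a list of (earlier_index, later_index) candidate pairs.
    """
    # Stage 1: hash-based candidate selection -- bucket frame indices by MD5 digest.
    buckets = {}
    for i, frame in enumerate(frames):
        digest = hashlib.md5(frame).hexdigest()
        buckets.setdefault(digest, []).append(i)

    # Stage 2 (partial): temporal validation -- keep only pairs whose
    # index gap meets the threshold; nearby matches are treated as benign.
    pairs = []
    for indices in buckets.values():
        for a, b in combinations(indices, 2):
            if b - a >= min_gap:
                pairs.append((a, b))
    return pairs

# Toy example: 30 distinct "frames", with frame 25 copied from frame 3
# (a forgery-like distant duplicate) and frame 5 copied from frame 4
# (a benign adjacent repeat that the gap threshold filters out).
frames = [bytes([i]) for i in range(30)]
frames[25] = frames[3]
frames[5] = frames[4]
print(find_duplicate_candidates(frames))  # → [(3, 25)]
```

In the full pipeline described above, this candidate list would then be passed to an optical-flow check to confirm that the duplicated segment is structurally inconsistent with its surroundings.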