Evaluating the Efficacy of Deep Learning Models for Identifying Manipulated Medical Fundus Images
Abstract
(1) Background: The misuse of image manipulation technology on medical images is a critical problem that can endanger patients’ lives, and detecting such manipulation with a deep learning model is essential for addressing manipulated medical images that may arise in the healthcare field. (2) Methods: The dataset was divided into a real fundus image dataset and a manipulated image dataset. The fundus image manipulation detection model is a deep learning model based on a Convolutional Neural Network (CNN) architecture that applies concatenation operations to achieve fast computation and reduce the loss of input image features. (3) Results: On real data, the model achieved an average sensitivity of 0.98, precision of 1.00, F1-score of 0.99, and AUC of 0.988. On manipulated data, the model recorded a sensitivity of 1.00, precision of 0.84, F1-score of 0.92, and AUC of 0.988. By comparison, five ophthalmologists achieved lower average scores on manipulated data: sensitivity of 0.71, precision of 0.61, F1-score of 0.65, and AUC of 0.822. (4) Conclusions: This study demonstrates the feasibility of addressing and preventing problems caused by manipulated medical images in the healthcare field. The proposed deep learning approach for detecting manipulated fundus images outperforms ophthalmologists, making it an effective detection method.
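To make the Methods description concrete, the sketch below shows one way a concatenation-based CNN (DenseNet-style feature reuse feeding a binary real-vs-manipulated head) could be structured in PyTorch. This is a minimal illustration under assumed settings: the layer sizes, the `growth` parameter, the class names, and the 224x224 RGB input are all assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a concatenation-based CNN for fundus manipulation detection.
# Assumptions: layer widths, block depth, and input size are illustrative only.
import torch
import torch.nn as nn


class ConcatBlock(nn.Module):
    """Two conv layers whose outputs are concatenated with earlier features."""

    def __init__(self, in_channels: int, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
        )
        self.conv2 = nn.Sequential(
            nn.BatchNorm2d(in_channels + growth),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels + growth, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        f1 = self.conv1(x)
        x = torch.cat([x, f1], dim=1)      # concatenate to preserve earlier features
        f2 = self.conv2(x)
        return torch.cat([x, f2], dim=1)   # output channels: in_channels + 2 * growth


class FundusManipulationDetector(nn.Module):
    """Stem -> concatenation block -> global pooling -> real/manipulated logit."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.block = ConcatBlock(64, growth=32)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64 + 2 * 32, 1),     # single logit: manipulated vs. real
        )

    def forward(self, x):
        return self.head(self.block(self.stem(x)))


# Example: a batch of four 224x224 RGB fundus images.
logits = FundusManipulationDetector()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 1])
```

The concatenation keeps earlier feature maps available to later layers instead of overwriting them, which is the usual rationale for the "reduced loss of input image features" claim; a sigmoid over the single logit would yield the manipulated-image probability used for the sensitivity, precision, F1, and AUC metrics reported above.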