Deep Learning-Based Oral Cancer Screening via Smartphone Imagery and Real-Time Web Interface

Abstract

Oral cancer is a significant public-health problem, and existing detection methods are neither simple nor fast enough to reach a wide population, particularly in underserved communities. We propose a solution that uses a Convolutional Neural Network (CNN) to classify smartphone images as normal or malignant in real time. The model was trained on 1,071 smartphone camera photographs, which were preprocessed by converting them to HSV color space, normalizing, and resampling. After training, the CNN achieved an accuracy of 94.29%, a precision of 95.45%, a recall/sensitivity of 93.33%, and an F1-score of 94.38%. Overall predictive performance yielded an area under the receiver operating characteristic curve (AUC) of 0.99, with an average inference time of under 5 seconds, so clinicians or patients can submit images and receive results quickly. Compared with other available methods, the EfficientNetB0 model is faster and computationally less demanding, making it better suited to mobile platforms. The main obstacles early in the project were variance in image quality and the absence of annotated data; expanding the dataset to a larger, more diverse one and applying advanced preprocessing improved model performance. Future work will focus on large-scale clinical validation and further model refinement. In summary, the system offers a scalable, low-cost, and fast AI-based front-end screening method with the potential to significantly improve oral-cancer outcomes through early detection.
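The reported metrics are internally consistent: the F1-score is the harmonic mean of precision and recall, so it can be recomputed directly from the other two figures. The sketch below checks this using only the values stated in the abstract (the function name is illustrative, not part of the authors' code).

```python
# Sanity-check the reported F1-score against the reported
# precision and recall (F1 is their harmonic mean).
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 0.9545  # reported precision (95.45%)
recall = 0.9333     # reported recall/sensitivity (93.33%)

f1 = f1_score(precision, recall)
print(round(f1 * 100, 2))  # prints 94.38, matching the reported F1-score
```

This kind of cross-check is a quick way to catch transcription errors when summarizing classification results.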
