Deep CNN Architectures for Autism Spectrum Disorder Detection: Comparative Evaluation of LeNet, AlexNet, InceptionV3, and MobileNet Using Facial Images

Abstract

Early recognition of autism spectrum disorder (ASD) is essential for prompt intervention and support. In recent years, deep learning (DL)-based facial image analysis has emerged as a viable non-invasive method for predicting ASD. This study compares four well-known convolutional neural network architectures, LeNet, AlexNet, InceptionV3, and MobileNet, for detecting ASD from datasets of pediatric facial images. Each model's performance was evaluated using key assessment metrics, including accuracy, precision, recall, F1-score, and training loss. InceptionV3 outperformed the other models on the test set, with 95% accuracy, 94% precision, 95% recall, a 95% F1-score, and the lowest loss value of 0.22. AlexNet and MobileNet both attained 93% accuracy, while LeNet reached similar metrics but exhibited a slightly higher loss. The findings indicate that deeper, more complex architectures such as InceptionV3 are better suited to capturing the discriminative facial features associated with ASD, whereas lightweight models such as MobileNet offer a trade-off between accuracy and computational efficiency that makes them well suited to real-time or embedded deployment.
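The evaluation protocol summarized above can be approximated with a short transfer-learning script. The sketch below is not the authors' code: the dataset paths (data/train, data/test), image size, epoch count, and optimizer are illustrative assumptions. It trains classification heads on ImageNet-pretrained InceptionV3 and MobileNet backbones for a binary ASD / non-ASD facial image dataset and reports accuracy, precision, recall, and F1-score with scikit-learn.

# Minimal sketch of the comparison described in the abstract (assumed paths and
# hyperparameters; InceptionV3 and MobileNet shown as representative backbones).
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

IMG_SIZE = (224, 224)  # assumed input resolution
BATCH = 32

# Assumed directory layout: data/train/<class>/... and data/test/<class>/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=IMG_SIZE, batch_size=BATCH,
    label_mode="binary", shuffle=False)

# Pretrained backbone constructors paired with their preprocessing functions.
BACKBONES = {
    "InceptionV3": (tf.keras.applications.InceptionV3,
                    tf.keras.applications.inception_v3.preprocess_input),
    "MobileNet": (tf.keras.applications.MobileNet,
                  tf.keras.applications.mobilenet.preprocess_input),
}

def build_model(name):
    """Attach a binary (ASD / non-ASD) sigmoid head to a frozen pretrained backbone."""
    constructor, preprocess = BACKBONES[name]
    base = constructor(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,), pooling="avg")
    base.trainable = False  # feature extraction; full fine-tuning is a further step
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.layers.Lambda(preprocess)(inputs)
    x = base(x, training=False)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

for name in BACKBONES:
    model = build_model(name)
    model.fit(train_ds, epochs=10, verbose=0)  # assumed epoch count
    y_true = np.concatenate([y.numpy() for _, y in test_ds]).ravel().astype(int)
    y_pred = (model.predict(test_ds, verbose=0).ravel() >= 0.5).astype(int)
    print(f"{name}: acc={accuracy_score(y_true, y_pred):.2f} "
          f"prec={precision_score(y_true, y_pred):.2f} "
          f"rec={recall_score(y_true, y_pred):.2f} "
          f"f1={f1_score(y_true, y_pred):.2f}")

The same loop could be extended with LeNet- and AlexNet-style networks defined from scratch, since Keras does not ship pretrained weights for those architectures.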
