Attention-Based Deep Learning Models: A Comparative Study of VGG19 and MobileNet for Chest X-Ray Image Classification

Abstract

This study investigates a deep learning system for classifying chest X-ray (CXR) images into four categories: Normal, Pneumonia, COVID-19, and Other Lung Diseases. The authors augmented VGG19 and MobileNet architectures with Multi-Head Attention mechanisms to improve feature extraction and focus the models on disease-relevant regions. They worked with a dataset of 15,000 labeled images, preprocessed with standardization and augmentation to improve generalization. In experiments, MobileNet outperformed VGG19, achieving 98.1% accuracy, 0.97 precision, and 0.96 recall, and the attention mechanisms improved diagnostic precision on difficult cases such as COVID-19. MobileNet also converged faster and required less computing power, making it better suited to point-of-care deployment. This work suggests that attention-enhanced lightweight models could streamline diagnostic workflows, reduce radiologists' workload, and extend quality care to resource-limited settings. Future work includes further fine-tuning of the model, expanding the dataset, and real-world deployment for automated diagnostic support.
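The abstract does not give implementation details for how Multi-Head Attention is attached to the CNN backbones, but the usual pattern is to flatten the backbone's spatial feature map into a token sequence and apply self-attention over it. The following is a minimal NumPy sketch of that mechanism under stated assumptions: the 7x7x64 feature-map dimensions, head count, and random (untrained) projection matrices are all illustrative, not taken from the paper.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention over a token sequence
    x of shape (tokens, dim). Random projections stand in for learned weights."""
    t, d = x.shape
    dh = d // num_heads                               # per-head dimension
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))

    def split(m):                                     # (t, d) -> (heads, t, dh)
        return m.reshape(t, num_heads, dh).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # (heads, t, t) similarities
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                     # softmax over key positions
    ctx = (w @ v).transpose(1, 0, 2).reshape(t, d)    # merge heads back to (t, d)
    return ctx @ Wo

# A hypothetical 7x7 CNN feature map with 64 channels, flattened to 49 tokens.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((49, 64))
out = multi_head_self_attention(tokens, num_heads=4, rng=rng)
print(out.shape)  # (49, 64): same shape, but each token now mixes global context
```

In a full model, the attended sequence would typically be pooled and passed to a softmax classifier over the four diagnostic classes; the classification head and training procedure are beyond what the abstract specifies.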
