CovidMulti-Net: A Parallel-Dilated Multi Scale Feature Fusion Architecture for the Identification of COVID-19 Cases from Chest X-ray Images


Abstract

COVID-19 is an emerging respiratory infectious disease that has had a significant impact on the health and lives of many people around the world. Early identification of COVID-19 patients is therefore the fastest way to restrain the spread of the pandemic. However, as the number of cases grows at an alarming pace, most developing countries are facing a shortage of medical resources and testing kits. Moreover, using testing kits to detect COVID-19 cases is a time-consuming, expensive, and cumbersome procedure. Faced with these obstacles, many physicians, researchers, and engineers have advocated the development of computer-aided deep learning models to help healthcare professionals recognize COVID-19 cases from chest X-ray (CXR) images quickly and inexpensively. With this motivation, this paper proposes a “CovidMulti-Net” architecture based on transfer learning to classify COVID-19 cases from normal and other pneumonia cases, using three publicly available datasets that include 1341, 1341, and 446 CXR images from healthy subjects and 902, 1564, and 1193 CXR images from patients with viral pneumonia, bacterial pneumonia, and COVID-19, respectively. In the proposed framework, features are extracted from CXR images using three well-known pre-trained models: DenseNet-169, ResNet-50, and VGG-19. The extracted features are then fed into a concatenation layer, forming a robust hybrid model. The proposed framework achieved classification accuracies of 99.4%, 95.2%, and 94.8% on the 2-class, 3-class, and 4-class datasets, respectively, exceeding other state-of-the-art models. These results demonstrate the “CovidMulti-Net” framework’s ability to discriminate individuals with COVID-19 infection from healthy ones and suggest its potential as a diagnostic model in clinics and hospitals. We have also made all materials publicly accessible to the research community at: https://github.com/saikat15010/CovidMulti-Net-Architecture.git .
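
The multi-backbone fusion idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code (see the repository linked above): it assumes a Keras/TensorFlow setup, frozen ImageNet-pretrained backbones, global average pooling before concatenation, and an illustrative dense head whose width and dropout rate are assumptions rather than values reported in the paper.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import DenseNet169, ResNet50, VGG19


def build_covidmultinet_sketch(num_classes: int = 4,
                               input_shape=(224, 224, 3)) -> Model:
    """Three frozen ImageNet backbones feeding one concatenated feature vector."""
    inputs = layers.Input(shape=input_shape)

    backbones = [
        DenseNet169(include_top=False, weights="imagenet", input_shape=input_shape),
        ResNet50(include_top=False, weights="imagenet", input_shape=input_shape),
        VGG19(include_top=False, weights="imagenet", input_shape=input_shape),
    ]

    features = []
    for backbone in backbones:
        backbone.trainable = False                      # use as fixed feature extractors
        x = backbone(inputs, training=False)
        features.append(layers.GlobalAveragePooling2D()(x))

    fused = layers.Concatenate()(features)              # hybrid feature vector
    x = layers.Dense(256, activation="relu")(fused)     # head width is an assumption
    x = layers.Dropout(0.5)(x)                          # dropout rate is an assumption
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs, name="covidmultinet_sketch")


model = build_covidmultinet_sketch(num_classes=4)       # 4-class setting as an example
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Global average pooling keeps the concatenated representation compact (roughly 4,200 dimensions for these three backbones) before the classification head.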

Article activity feed

  1. SciScore for 10.1101/2021.05.19.21257430:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Experimental Models: Cell Lines
    Sentence: In this work, we use three well-known up-to-date CNN pre-trained architectures, including DenseNet-169 [30], ResNet-50 [31], and VGGNet-19 [32], for the feature extraction task.
    Resource: VGGNet-19 (suggested: None)

    Experimental Models: Organisms/Strains
    Sentence: In this work, we use three well-known up-to-date CNN pre-trained architectures, including DenseNet-169 [30], ResNet-50 [31], and VGGNet-19 [32], for the feature extraction task.
    Resource: DenseNet-169 (suggested: None)

    Software and Algorithms
    Sentence: Data-Preprocessing Stage: Before feeding the images into the proposed “CovidMulti-Net” framework, we preprocess all the images, including image resizing (224*224*3 pixels), format conversion (.png), and NumPy array conversion.
    Resource: NumPy (suggested: NumPy, RRID:SCR_008633; see the preprocessing sketch after this table)
    Sentence: ResNet-50: He et al. popularized the concept of using deeper layers within a network by implementing the ResNet architecture that secured the first position at ILSVRC and COCO 2015 competition with a minimal error rate of 3.5% [31].
    Resource: COCO (suggested: CoCo, RRID:SCR_010947)
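
    The data-preprocessing sentence quoted under Software and Algorithms (resizing to 224*224*3 pixels, conversion to .png, and conversion to a NumPy array) could be realized roughly as in the sketch below. The folder layout, function name, and 0-1 scaling are illustrative assumptions, not the authors' pipeline.

```python
from pathlib import Path

import numpy as np
from PIL import Image


def preprocess_cxr_folder(src_dir: str, dst_dir: str, size=(224, 224)) -> np.ndarray:
    """Resize CXR images to 224x224x3, save .png copies, and stack them into a NumPy array."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)

    arrays = []
    for path in sorted(p for p in Path(src_dir).iterdir() if p.is_file()):
        img = Image.open(path).convert("RGB").resize(size)   # 224x224, 3 channels
        img.save(dst / (path.stem + ".png"))                 # format conversion to .png
        arrays.append(np.asarray(img, dtype=np.float32) / 255.0)

    return np.stack(arrays)                                   # shape: (N, 224, 224, 3)
```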

    Results from OddPub: Thank you for sharing your code.


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We found bar graphs of continuous data. We recommend replacing bar graphs with more informative graphics, as many different datasets can lead to the same bar graph. The actual data may suggest different conclusions from the summary statistics. For more information, please see Weissgerber et al. (2015).


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.