ReCoNet: Multi-level Preprocessing of Chest X-rays for COVID-19 Detection Using Convolutional Neural Networks


Abstract

Detection of life-threatening COVID-19 from radiomic features has become a pressing need for infection control and socio-economic crisis management around the world. In this paper, a novel convolutional neural network (CNN) architecture, ReCoNet (residual image-based COVID-19 detection network), is proposed for COVID-19 detection from chest X-ray (CXR) images, with particular emphasis on a preprocessing stage that enhances the COVID-19 fingerprints in the images. The proposed modular architecture consists of a CNN-based multi-level preprocessing filter block in cascade with a multi-layer CNN-based feature extractor and a classification block. A multi-task learning loss function is adopted to optimize the preprocessing block, which is trained end-to-end with the rest of the network. Additionally, a data augmentation technique is applied to boost network performance. When pre-trained end-to-end on the open-source CheXpert dataset and then trained and tested on the COVIDx dataset of 15,134 original CXR images, the network yielded an overall accuracy, sensitivity, and specificity of 97.48%, 96.39%, and 97.53%, respectively. ReCoNet may be exploited in clinics for rapid and safe detection of COVID-19 globally, in particular in low- and middle-income countries where RT-PCR laboratories and/or kits are in critically short supply.
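The modular design summarized above (a learned preprocessing filter block feeding a CNN feature extractor and a classification head, optimized jointly with a multi-task loss) can be sketched in PyTorch, the framework the authors report using. The layer widths, the residual formulation of the preprocessing block, the three-class output, and the auxiliary loss term and its weight below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a preprocessing-block + feature-extractor + classifier
# pipeline trained end-to-end with a multi-task loss. All architectural
# details here are assumptions for illustration, not ReCoNet's actual layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreprocessingBlock(nn.Module):
    """Learned multi-level filtering; modeled here as predicting a residual
    image that is added back to the input (an assumption)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.filters = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        residual = self.filters(x)
        return x + residual, residual  # enhanced image, residual map


class ReCoNetSketch(nn.Module):
    """Preprocessing block in cascade with a CNN feature extractor and a
    classifier head (e.g., normal / pneumonia / COVID-19)."""
    def __init__(self, channels: int = 1, num_classes: int = 3):
        super().__init__()
        self.preprocess = PreprocessingBlock(channels)
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        enhanced, residual = self.preprocess(x)
        feats = self.features(enhanced).flatten(1)
        return self.classifier(feats), residual


def multitask_loss(logits, residual, labels, aux_weight: float = 0.1):
    """Classification loss plus a small regularizer on the residual map; the
    auxiliary term and its weight are placeholders for the paper's loss."""
    cls_loss = F.cross_entropy(logits, labels)
    aux_loss = residual.abs().mean()
    return cls_loss + aux_weight * aux_loss


if __name__ == "__main__":
    model = ReCoNetSketch()
    x = torch.randn(2, 1, 224, 224)  # two dummy grayscale CXR-sized inputs
    y = torch.tensor([0, 2])         # dummy class labels
    logits, residual = model(x)
    loss = multitask_loss(logits, residual, y)
    loss.backward()                  # end-to-end gradients reach both blocks
    print(logits.shape, float(loss))
```

Because the two blocks share one computation graph, the preprocessing filters receive gradients from the classification loss as well, which is the point of learning the preprocessing end-to-end rather than hand-designing it.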

Article activity feed

  1. SciScore for 10.1101/2020.07.11.20149112:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Software and Algorithms
    Sentences: "All the codes are written in Python and the Pytorch library is used to implement the neural networks."
    Resources:
    • Python, suggested: (IPython, RRID:SCR_001658)
    • Pytorch, suggested: (PyTorch, RRID:SCR_018536)

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.