Online COVID-19 diagnosis with chest CT images: Lesion-attention deep neural networks

This article has been reviewed by the following groups


Abstract

Chest computed tomography (CT) scanning is one of the most important technologies for COVID-19 diagnosis and disease monitoring, particularly for early detection of the coronavirus. Recent advances in computer vision motivate more concerted efforts to develop AI-driven diagnostic tools that can help meet the enormous global demand for COVID-19 diagnostic tests. To help relieve the burden on medical systems, we develop a lesion-attention deep neural network (LA-DNN) that predicts whether a case is COVID-19 positive or negative from a richly annotated chest CT image dataset. From the textual radiological report accompanying each CT image, we extract two types of annotation: an indicator of whether the case is COVID-19 positive or negative, and a description of five lesions visible on the CT images of positive cases. The proposed data-efficient LA-DNN model focuses on the primary task of binary classification for COVID-19 diagnosis, while an auxiliary multi-label learning task is trained simultaneously to draw the model's attention to the five lesions associated with COVID-19. This joint-task learning makes the network highly sample-efficient, allowing it to learn COVID-19 radiology features effectively from limited but high-quality, information-rich samples. The experimental results show that the area under the curve (AUC), sensitivity (recall), precision, and accuracy for COVID-19 diagnosis are 94.0%, 88.8%, 87.9%, and 88.6%, respectively, which reach clinical standards for practical use. A free online system for fast diagnosis from CT images is live at https://www.covidct.cn/ , and all code and datasets are freely accessible at our GitHub address.
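The joint-task learning described above can be sketched as a combined loss: a binary cross-entropy term for the primary COVID-19 positive/negative prediction, plus an averaged multi-label cross-entropy term over the five lesion descriptions. The sketch below is an illustration only, assuming sigmoid-probability outputs; the function names and the auxiliary weighting factor `alpha` are assumptions, not the authors' exact formulation.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and label y (0/1)."""
    eps = 1e-12  # numerical guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def joint_loss(covid_prob, covid_label, lesion_probs, lesion_labels, alpha=0.5):
    """Primary binary-classification loss plus an auxiliary multi-label loss
    averaged over the five lesion types; alpha weights the auxiliary task."""
    primary = bce(covid_prob, covid_label)
    auxiliary = sum(bce(p, y) for p, y in zip(lesion_probs, lesion_labels)) / len(lesion_probs)
    return primary + alpha * auxiliary
```

With confident, correct predictions on both tasks the loss approaches zero, while confident errors on either task dominate it, which is how the auxiliary term steers the shared features toward the lesion patterns.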

Article activity feed

  1. SciScore for 10.1101/2020.05.11.20097907:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Experimental Models: Cell Lines
    Sentence: "Seven well-known deep neural networks are explored one at a time to be used as the backbone networks in the experiments, including VGG-16 [10], ResNet-18 [5], ResNet-50 [5], DenseNet-121 [7], DenseNet-169 [7], EfficientNet-b0 [12], and EfficientNet-b1 [12]."
    Resources detected:
    • VGG-16 (suggested: None)

    Experimental Models: Organisms/Strains
    Sentence: same as above.
    Resources detected:
    • ResNet-18 (suggested: None)
    • DenseNet-169 (suggested: None)
    • EfficientNet-b0 (suggested: None)

    Software and Algorithms
    Sentence: "This original dataset contains 345 samples of COVID-19 positive and 401 COVID-19 negative CT scans, which are collected from 760 research preprints related to COVID-19 from medRxiv and bioRxiv, posted from January 19th to March 25th 2020."
    Resources detected:
    • bioRxiv (suggested: (bioRxiv, RRID:SCR_003933))

    Results from OddPub: Thank you for sharing your code and data.


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: Please consider improving the rainbow (“jet”) colormap(s) used on page 9. At least one figure is not accessible to readers with colorblindness and/or is not true to the data, i.e. not perceptually uniform.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.