COVID-19 classification of X-ray images using deep neural networks

This article has been reviewed by the following groups


Abstract

No abstract available

Article activity feed

  1. SciScore for 10.1101/2020.10.01.20204073:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement: not detected.
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: not detected.

    Table 2: Resources

    No key resources detected.


    Results from OddPub: Thank you for sharing your code.


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    In this work we sought to address the limitations of previous studies in several ways. Importantly, we took care to include in our dataset CXRs from the same machines both for patients positive and negative to COVID-19. We used raw images without compression that may result in loss of features and introduction of source-dependent artifacts. Moreover, our dataset contained diverse data from four medical centers and was balanced between COVID-19 and non-COVID-19 images. A recent effort has shown more reliable results based on a larger, more uniformly sourced, dataset and comes closer to the goal of developing tools that can be used in clinical settings (11). They achieved a sensitivity of 88% with a specificity of 79%. Our approach improves on these notably solid results in terms of performance (sensitivity of 90.5% and specificity of 90.0%). As we show, this performance increase may have resulted from the image pre-processing, particularly the inclusion of augmentations and the addition of a segmentation channel. This leads to a performance increase of 8.4 percentage points in sensitivity and 1.1 percentage points in specificity (Table 2 – ResNet50 vs. ResNet50 no preprocessing), and also in balancing of the sensitivity and specificity results. Another novelty of our work is that we introduced a content-based image retrieval tool that identifies similar CXRs based on a metric defined by using the image embeddings given by the second to last layer of ResNet50. As ResNet50 was t...
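
    The retrieval idea described in the excerpt above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released code: the ImageNet-pretrained weights, the 224x224 resizing, and the use of cosine similarity over L2-normalised penultimate-layer (2048-dimensional) ResNet50 embeddings are all assumptions, and the paper's augmentations and segmentation channel are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    # Keep every layer of ResNet50 except the final classifier, so a forward
    # pass returns the pooled 2048-dimensional "second to last layer" features.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    embedder = nn.Sequential(*list(backbone.children())[:-1]).eval()

    # Basic preprocessing only; the augmentations and segmentation channel
    # described in the manuscript are training-time steps and are omitted here.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.Grayscale(num_output_channels=3),  # CXRs are single-channel
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path: str) -> torch.Tensor:
        """Return an L2-normalised embedding for one chest X-ray image."""
        x = preprocess(Image.open(path)).unsqueeze(0)   # (1, 3, 224, 224)
        z = embedder(x).flatten(1)                      # (1, 2048)
        return F.normalize(z, dim=1).squeeze(0)

    def retrieve(query_path: str, gallery_paths: list, k: int = 5):
        """Rank gallery CXRs by cosine similarity to the query image."""
        query = embed(query_path)
        gallery = torch.stack([embed(p) for p in gallery_paths])
        scores = gallery @ query                        # cosine similarities
        top = torch.topk(scores, k=min(k, len(gallery_paths)))
        return [(gallery_paths[i], scores[i].item()) for i in top.indices]

    # Hypothetical usage with placeholder file names:
    # matches = retrieve("query_cxr.png", ["cxr_001.png", "cxr_002.png"], k=2)

    Cosine similarity over normalised embeddings is one reasonable choice of metric; the excerpt only states that a metric is defined on the embeddings, so Euclidean distance would serve equally well in this sketch.
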

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex as a biological variable and investigator blinding. Details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, are available from SciScore.
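
    As a rough illustration of what an RRID check involves (this is not SciScore's actual pipeline), a pattern match over manuscript text can flag candidate identifiers; the regex, the prefix list, and the example sentence below are assumptions based on the published RRID format.

    import re

    # Illustrative only: a naive regex for RRID-style identifiers such as
    # "RRID:AB_123456" (antibodies), "RRID:SCR_123456" (software tools) or
    # "RRID:CVCL_1234" (cell lines). The prefix list is assumed and partial,
    # and real tooling would also verify each identifier against a registry.
    RRID_PATTERN = re.compile(r"RRID:\s*(AB|SCR|CVCL)_([A-Za-z0-9]+)")

    def find_rrids(text: str):
        """Return normalised candidate RRIDs found in a block of text."""
        return [f"RRID:{prefix}_{code}" for prefix, code in RRID_PATTERN.findall(text)]

    # Hypothetical sentence with a made-up identifier:
    print(find_rrids("Cells were stained with an anti-GFP antibody (RRID:AB_123456)."))
    # -> ['RRID:AB_123456']
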