Distinguishing L and H phenotypes of COVID-19 using a single x-ray image
This article has been reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
Recent observations have shown that there are two types of COVID-19 response: an H phenotype with high lung elastance and weight, and an L phenotype with low measures [1]. H-type patients have pneumonia-like thickening of the lungs and require ventilation to survive; L-type patients have clearer lungs that may be injured by mechanical assistance [2,3]. As treatment protocols differ between the two types, and the number of ventilators is limited, it is vital to classify patients appropriately. To date, the only way to confirm phenotypes is through high-resolution computed tomography [2]. Here, we identify L- and H-type patients from their frontal chest x-rays using feature-embedded machine learning. We then apply the categorization to multiple images from the same patient, extending it to detect and monitor disease progression and recovery. The results give an immediate criterion for coronavirus triage and provide a methodology for respiratory diseases beyond COVID-19.
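The abstract describes a feature-embedded classifier, and the resources table below names the stack: a DenseNet-121 pretrained on ImageNet, built in PyTorch. The following is a minimal sketch of what such a pipeline could look like, not the authors' implementation: the preprocessing choices, the binary L/H head, and the `track_progression` helper are assumptions for illustration, and the head would need to be trained on labeled x-rays.

```python
# Hedged sketch of a feature-embedded L/H classifier.
# Real pieces: torchvision's ImageNet-pretrained DenseNet-121.
# Hypothetical pieces: the untrained binary head and the scoring helpers.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing; x-rays are grayscale, so we replicate
# the single channel to three via convert("RGB").
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.eval()

head = nn.Linear(1024, 1)  # hypothetical L-vs-H head (1024 = DenseNet-121 feature width)

@torch.no_grad()
def h_phenotype_score(image_path: str) -> float:
    """Score one frontal x-ray; higher values would suggest the H phenotype."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    feats = backbone.features(x)                              # (1, 1024, 7, 7) conv features
    emb = torch.flatten(F.adaptive_avg_pool2d(feats, 1), 1)   # (1, 1024) embedding
    return torch.sigmoid(head(emb)).item()

def track_progression(image_paths: list[str]) -> list[float]:
    """Score a patient's chronologically ordered x-rays to monitor L-to-H shifts."""
    return [h_phenotype_score(p) for p in image_paths]
```

Applied to a patient's image series, a rising score sequence would flag progression toward the H phenotype and a falling one recovery, which is the triage and monitoring use the abstract describes.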
Article activity feed
SciScore for 10.1101/2020.04.27.20081984:
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor

NIH rigor criteria are not applicable to paper type.

Table 2: Resources (Software and Algorithms)

| Sentences | Resources |
| --- | --- |
| Neural Network: We use a DenseNet-121 architecture [34] trained on the ImageNet database [35], performed using the PyTorch deep learning library [36]. | ImageNet suggested: (VGG-16, RRID:SCR_016494) |

Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: Please consider improving the rainbow (“jet”) colormap(s) used on page 8. At least one figure is not accessible to readers with colorblindness and/or is not true to the data, i.e. not perceptually uniform.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.