Performance and Robustness of Machine Learning-based Radiomic COVID-19 Severity Prediction

This article has been reviewed by the following groups


Abstract

Objectives

This study investigated the performance and robustness of radiomics in predicting COVID-19 severity in a large public cohort.

Methods

A public dataset of 1110 COVID-19 patients (one CT per patient) was used. Using the CTs and clinical data, each patient was classified as mild, moderate, or severe by two observers: (1) the dataset provider and (2) a board-certified radiologist. For each CT, 107 radiomic features were extracted. The dataset was randomly divided into a training (60%) set and a holdout validation (40%) set. During training, features were selected and combined into a logistic regression model for predicting severe cases from mild and moderate cases. The models were trained and validated on the classifications of both observers. The area under the ROC curve (AUC) quantified the predictive power of the models. To determine model robustness, each trained model was cross-validated on the other observer's classifications.
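The pipeline described above can be sketched with scikit-learn. Everything below is illustrative, not the study's actual configuration: synthetic data stands in for the 107 radiomic features, RFE wraps a logistic regression for feature selection, and the number of selected features (5) is an assumed value.

```python
# Hedged sketch of the training pipeline: RFE feature selection feeding a
# logistic regression, evaluated by AUC on a 40% holdout set. Synthetic
# data stands in for the real radiomic feature matrix (patients x features).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 300 "patients", 107 features, a few informative ones
X, y = make_classification(n_samples=300, n_features=107, n_informative=5,
                           random_state=0)

# 60/40 train/holdout split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_tr, y_tr)

auc_holdout = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC = {auc_holdout:.2f}")
```

Wrapping the selector and classifier in one `Pipeline` keeps feature selection inside the training fold, avoiding leakage from the holdout set.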

Results

A single radiomic feature alone was sufficient to distinguish severe from mild COVID-19 (p << 0.01). The most predictive features were the distribution of small size zones (GLSZM-SmallAreaEmphasis) for the provider's classification and the linear dependency of neighboring voxels (GLCM-Correlation) for the radiologist's classification. Cross-validating each model on the other observer's classifications moderately reduced its predictive power. In distinguishing severe from moderate COVID-19, first-order-Median alone had sufficient predictive power for the provider's classification. For the radiologist's classification, the predictive power of the model increased as the number of features grew from 1 to 5. Cross-validation again showed moderate sensitivity to the inter-observer classifications.
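The AUC reported throughout has a simple rank-based reading: the probability that a randomly chosen severe case receives a higher model score than a randomly chosen mild or moderate case, with ties counted as half. A minimal pure-Python sketch (the scores below are illustrative, not the study's):

```python
# Rank-based AUC (equivalent to the normalized Mann-Whitney U statistic):
# the probability that a positive (severe) case scores above a negative
# (mild/moderate) case, counting ties as half a win.
def auc(pos_scores, neg_scores):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative model scores (not from the study)
severe = [0.9, 0.8, 0.7, 0.4]
mild_moderate = [0.6, 0.3, 0.2, 0.1]
print(auc(severe, mild_moderate))  # 0.9375
```

An AUC of 0.5 corresponds to chance-level ranking; 1.0 means every severe case outscores every non-severe case.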

Conclusions

Radiomics significantly predicted different levels of COVID-19 severity. The predictions were moderately sensitive to inter-observer classifications and thus need to be used with caution.

Key points

  • Interpretable radiomic features can predict different levels of COVID-19 severity

  • Machine learning-based radiomic models were moderately sensitive to inter-observer classifications and thus need to be used with caution

Article activity feed

  1. SciScore for 10.1101/2020.09.07.20189977:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Software and Algorithms

    Sentence: During training, maximum relevance minimum redundancy (MRMR) algorithm and recursive feature elimination (RFE) method were used for radiomic feature selection implemented in MATLAB fscmrmr function and Python’s Scikit-learn, respectively [32].

    Resources:
    • MATLAB (suggested: MATLAB, RRID:SCR_001622)
    • Python’s (suggested: PyMVPA, RRID:SCR_006099)
    • Scikit-learn (suggested: scikit-learn, RRID:SCR_002577)
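The MRMR selection step named above can be sketched in pure Python. The greedy relevance-minus-redundancy scheme below uses absolute Pearson correlation as a stand-in for the mutual-information criterion that MATLAB's fscmrmr actually uses, and the toy data are illustrative.

```python
# Greedy mRMR sketch: repeatedly pick the feature with the best score of
# relevance (|corr| with the labels) minus mean redundancy (mean |corr|
# with the features already selected). Pearson correlation stands in for
# the mutual-information criterion of MATLAB's fscmrmr.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mrmr(features, labels, k):
    idx = range(len(features))
    relevance = {i: abs(pearson(features[i], labels)) for i in idx}
    selected = [max(idx, key=relevance.get)]  # most relevant feature first
    while len(selected) < k:
        remaining = [i for i in idx if i not in selected]

        def score(i):
            redundancy = sum(abs(pearson(features[i], features[j]))
                             for j in selected) / len(selected)
            return relevance[i] - redundancy

        selected.append(max(remaining, key=score))
    return selected

# Toy data: feature 1 is an exact copy of feature 0, so mRMR skips it
# in favor of the equally relevant but less redundant feature 2.
labels = [0, 0, 1, 1, 0, 1, 1, 0]
features = [
    [0, 0, 1, 1, 0, 1, 0, 0],  # relevant
    [0, 0, 1, 1, 0, 1, 0, 0],  # exact duplicate of feature 0
    [0, 0, 0, 1, 0, 1, 1, 0],  # equally relevant, less redundant
]
print(mrmr(features, labels, k=2))  # [0, 2]
```

The redundancy penalty is what distinguishes mRMR from plain relevance ranking: a duplicate of an already-chosen feature scores poorly even though its individual relevance is high.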

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: Please consider improving the rainbow (“jet”) colormap(s) used on page 15. At least one figure is not accessible to readers with colorblindness and/or is not true to the data, i.e. not perceptually uniform.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy to digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.