Prediction models for severe manifestations and mortality due to COVID‐19: A systematic review


Abstract

Background

Throughout 2020, coronavirus disease 2019 (COVID‐19) has been a threat to public health on a national and global level. There has been an immediate need for research into the clinical signs and symptoms of COVID‐19 that can help predict deterioration, including mechanical ventilation, organ support, and death. Studies thus far have addressed the epidemiology of the disease, common presentations, and susceptibility to acquisition and transmission of the virus; however, an accurate prognostic model for severe manifestations of COVID‐19 is still needed because of the limited healthcare resources available.

Objective

This systematic review aims to evaluate published reports of prediction models for severe illness caused by COVID‐19.

Methods

Searches were developed by the primary author and a medical librarian using an iterative process of gathering and evaluating terms. Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE. Data on confirmed COVID‐19 patients from randomized controlled trials, cohort studies, and case–control studies published between January 2020 and May 2021 were retrieved. Studies were independently assessed for risk of bias and applicability using the Prediction Model Risk Of Bias Assessment Tool (PROBAST). We collected study type, setting, sample size, type of validation, and outcome, including intubation, ventilation, any other type of organ support, or death. The prediction models, scoring systems, performance of the predictive models, and geographic locations were summarized.

Results

An initial review found 445 articles relevant based on title and abstract. After further review, 366 were excluded based on the defined inclusion and exclusion criteria, leaving 79 articles for the qualitative analysis. Inter‐observer agreement on inclusion was 0.84 (95% CI 0.78–0.89). When the PROBAST tool was applied, 70 of the 79 articles were identified as having a high or unclear risk of bias, or high or unclear concern for applicability. Nine studies reported prediction models that were rated as having a low risk of bias and low concern for applicability.
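The inter‐observer agreement reported above is conventionally quantified with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. As an illustrative sketch only (the review does not publish its per‐article screening decisions, so the decision lists below are hypothetical), kappa for two reviewers' include/exclude calls can be computed as:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions on the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence, from each rater's label marginals.
    p_e = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for six candidate articles.
reviewer_1 = ["include", "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # → 0.667
```

In practice, the 95% confidence interval quoted in the review would be obtained from the standard error of kappa or by bootstrapping over the screened articles.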

Conclusion

Several prognostic models for COVID‐19 were identified, with varying clinical score performance. Nine studies had a low risk of bias and low concern for applicability, including one from a general public population and hospital setting. The most promising and well‐validated scores include those of Clift et al.15 and Knight et al.,18 which appear to be accurate prediction models that clinicians can use in public health and emergency department settings.

Article activity feed

  1. SciScore for 10.1101/2021.01.28.21250718:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement: not detected.
    Randomization: We included studies of randomized controlled trials, cohort studies, and case-control studies that discuss the possible short-term and long-term consequences of contracting COVID-19 in any clinical setting.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: not detected.

    Table 2: Resources

    Software and Algorithms

    Sentence: "Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE."
    Resource: EMBASE — suggested: (EMBASE, RRID:SCR_001650)

    Sentence: "The completed PubMed strategy is shown in Supplemental file 1."
    Resource: PubMed — suggested: (PubMed, RRID:SCR_004846)

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Sample size was variable across studies, and issues in the analysis domain, such as overfitting, were often cited as limitations. These are commonly identified problems when building prediction models and likely contribute to overfitting.16 Although many of the studies were reported out of urgency, several of them did not appear to follow TRIPOD guidelines,17 which likely contributed to the overall risk of bias for the included studies. After review of the current literature, our impression is that many of the reported prediction models have a high or unclear risk of bias; thus, no particular tool should be recommended until further external validation is completed, unless the risk of bias is low. This review identified several implications for potential application of these prediction models in the appropriate clinical settings. As stated earlier, 41 studies reported original models developed during the pandemic, and four studies24,32,41,45 reported external validations of severity prediction models that existed before COVID-19. Clift et al. and Knight et al. reported models developed from nationwide data, which enabled robust re-sampling methods to minimize overfitting.13,15 We conclude that two of these13,15 can be useful tools (Table 3). Clift et al.15 reported a population-based risk algorithm in the UK, showing high levels of discrimination for deaths and hospital admissions due to covid-19. The absolute risks presented, howev...

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy to digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.