Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal
This article has been reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
Objective
To review and appraise the validity and usefulness of published and preprint reports of prediction models for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital or dying with the disease.
Design
Living systematic review and critical appraisal by the covid-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.
Data sources
PubMed and Embase through Ovid, up to 17 February 2021, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.
Study selection
Studies that developed or validated a multivariable covid-19 related prediction model.
Data extraction
At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).
Results
126 978 titles were screened, and 412 studies describing 731 new prediction models or validations were included. Of these 731, 125 were diagnostic models (including 75 based on medical imaging) and the remaining 606 were prognostic models for either identifying those at risk of covid-19 in the general population (13 models) or predicting diverse outcomes in individuals with confirmed covid-19 (593 models). Owing to the widespread availability of diagnostic testing capacity after the summer of 2020, this living review now focuses on the prognostic models. Of these 606 prognostic models, 29 had low risk of bias, 32 had unclear risk of bias, and 545 had high risk of bias. The most common causes of high risk of bias were inadequate sample sizes (n=408, 67%) and inappropriate or incomplete evaluation of model performance (n=338, 56%). 381 models were newly developed, and 225 were external validations of existing models. The reported C indexes varied between 0.77 and 0.93 in development studies with low risk of bias, and between 0.56 and 0.78 in external validations with low risk of bias. The Qcovid models, the PRIEST score, Carr’s model, the ISARIC4C Deterioration model, and the Xie model showed adequate predictive performance in studies at low risk of bias. Details on all reviewed models are publicly available at https://www.covprecise.org/.
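As a point of reference for the C index values reported above (this is not part of the review itself), the sketch below shows one common way discrimination might be estimated when externally validating a binary-outcome prognostic model. All patient data, variable names, and numbers are hypothetical; for a binary outcome the C index equals the area under the ROC curve.

```python
# Hypothetical illustration only: estimating the C index (discrimination) of an
# existing prognostic model on an external validation cohort with a binary outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up validation cohort: observed outcome (1 = event, e.g. deterioration or death)
# and the risk predicted for each patient by the previously developed model.
y_observed = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
predicted_risk = np.array([0.10, 0.22, 0.81, 0.05, 0.43, 0.90, 0.35, 0.15, 0.66, 0.30])

# For a binary outcome the C index is the probability that a randomly chosen patient
# who had the event received a higher predicted risk than one who did not,
# which is the area under the ROC curve.
c_index = roc_auc_score(y_observed, predicted_risk)
print(f"C index in the external validation cohort: {c_index:.2f}")
```

Discrimination alone is not a complete evaluation; as the review notes, incomplete assessment of model performance was a common source of bias, so calibration (for example a calibration plot or calibration slope) would normally be examined alongside the C index.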
Conclusion
Prediction models for covid-19 entered the academic literature to support medical decision making at unprecedented speed and in large numbers. Most published prediction model studies were poorly reported and at high risk of bias such that their reported predictive performances are probably optimistic. Models with low risk of bias should be validated before clinical implementation, preferably through collaborative efforts to also allow an investigation of the heterogeneity in their performance across various populations and settings. Methodological guidance, as provided in this paper, should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction modellers should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.
Systematic review registration
Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
Readers’ note
This article is the final version of a living systematic review that has been updated over the past two years to reflect emerging evidence. This version is update 4 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
Article activity feed
- SciScore for 10.1101/2020.03.24.20041020
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
- Institutional Review Board Statement: not detected.
- Randomization: not detected.
- Blinding: not detected.
- Power Analysis: not detected.
- Sex as a biological variable: not detected.

Table 2: Resources
Software and Algorithms (sentences and suggested resources):
- Sentence: "We searched PubMed, EMBASE via Ovid, bioRxiv, medRxiv, and arXiv for research on COVID-19 published after 3rd January 2020." Suggested resource: EMBASE (RRID:SCR_001650)
- Sentence: "We used the publicly available publication list of the COVID-19 Living Systematic Review.6 This list contains studies on COVID-19 published on PubMed, EMBASE via Ovid, bioRxiv, and medRxiv, and is continuously updated." Suggested resource: bioRxiv (RRID:SCR_003933)
- Sentence: "We supplemented the Living Systematic Review list 6 with hits from PubMed searching for 'covid-19', as this was at the moment of our search not included in the Living Systematic Review 6 search terms for PubMed." Suggested resource: PubMed (RRID:SCR_004846)
- Sentence: "We further supplemented the Living Systematic Review 6 list with studies on COVID-19 retrieved from arXiv." Suggested resource: arXiv (RRID:SCR_006500)

Results from OddPub: Thank you for sharing your data.
Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:

Limitations of this study: With new publications on COVID-19 related prediction models that are currently quickly entering the medical literature, this systematic review cannot be viewed as an up-to-date list of all currently available COVID-19 related prediction models. Also, 24 of the studies we reviewed were only available as a preprint, and they might improve after peer review, when entering the official medical literature. We have also found other prediction models which are currently implemented in clinical practice without scientific publications 62 and web risk calculators launched for use while the scientific manuscript was still under review (and unavailable upon request).63 These unpublished models naturally fall outside the scope of this review of the literature.

Implications for practice: All 31 reviewed prediction models were found to have a high risk of bias and evidence from independent external validation of these models is currently lacking. However, the urgency of diagnostic and prognostic models to assist in quick and efficient triage of patients in the COVID-19 pandemic may encourage clinicians to implement prediction models without sufficient documentation and validation. Although we cannot let perfect be the enemy of good, earlier studies have shown that models were of limited use in the context of a pandemic,64 and they may even cause more harm than good.65 Hence, we cannot recommend any model for use in practice at this point. We anticipate that more ...
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.