The trustworthiness and impact of trial preprints for COVID-19 decision-making: A methodological study


Purpose: To assess the trustworthiness and impact of preprint trial reports during the COVID-19 pandemic.

Data sources: WHO COVID-19 database and the L-OVE COVID-19 platform by the Epistemonikos Foundation (up to August 3rd, 2021).

Design: We compared the characteristics of COVID-19 trials with and without preprints, estimated time to publication of COVID-19 preprint reports, described discrepancies in key methods and results between preprint and published trial reports, reported the number of retracted preprints and publications, and assessed whether including versus excluding preprint reports affects meta-analytic estimates and the certainty of evidence. For the effects of eight therapies on mortality and mechanical ventilation, we performed meta-analyses including and excluding preprints at 1, 3, and 6 months after the first trial addressing the therapy became available as either a preprint or a publication (120 meta-analyses in total).

Results: We included 356 trials: 101 were available only as preprints, 181 only as journal publications, and 74 as preprints that were subsequently published in journals. Half of all preprints remained unpublished at six months, and a third at one year. There were few important differences in key methods and results between trial preprints and their subsequently published reports. We identified four retracted trials, three of which had been published in peer-reviewed journals. With two exceptions (2/60; 3.3%), point estimates were consistent between meta-analyses including versus excluding preprints as to whether they indicated benefit, no appreciable effect, or harm. For nine comparisons (9/60; 15%), the rating of the certainty of evidence differed when preprints were included versus excluded: for four of these comparisons the certainty was higher with preprints included, and for five it was lower.

Limitations: The generalizability of our results is limited to COVID-19. Preprints that are subsequently published in journals may be the most rigorous and may not represent all trial preprints.

Conclusion: We found no compelling evidence that preprints provide less trustworthy results than published papers. We show that preprints remain the only source of findings of many trials for several months, a delay that is unacceptable in a health emergency. We show that including preprints may affect the results of meta-analyses and the certainty of evidence. We encourage evidence users to consider data from preprints in contexts in which decisions are being made rapidly and evidence is produced faster than it can be peer reviewed.
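The abstract does not specify the pooling model used for the 120 meta-analyses. As an illustration of the including-versus-excluding-preprints comparison, the following is a minimal sketch of an inverse-variance random-effects pooling (DerSimonian-Laird) of log risk ratios, run once on all trials and once on published trials only. All trial data here are hypothetical, and the specific model is an assumption, not the authors' stated method.

```python
import math

def pool_log_rr(log_rrs, variances):
    """Inverse-variance pooled log risk ratio with a
    DerSimonian-Laird random-effects adjustment."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    # Cochran's Q and the DerSimonian-Laird estimate of tau^2
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Re-weight each trial by 1 / (within-trial + between-trial variance)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Hypothetical trials: (log risk ratio for mortality, variance, is_preprint)
trials = [
    (math.log(0.80), 0.04, True),   # preprint only
    (math.log(0.90), 0.02, False),  # journal publication
    (math.log(0.70), 0.05, True),   # preprint only
    (math.log(0.95), 0.01, False),  # journal publication
]

incl, se_incl = pool_log_rr([t[0] for t in trials], [t[1] for t in trials])
published = [t for t in trials if not t[2]]
excl, se_excl = pool_log_rr([t[0] for t in published],
                            [t[1] for t in published])

print(f"RR including preprints: {math.exp(incl):.2f}")  # 0.89
print(f"RR excluding preprints: {math.exp(excl):.2f}")  # 0.93
```

With these made-up numbers both analyses point toward benefit, but the estimate including preprints is more precise (smaller standard error), which mirrors how adding preprint trials can shift meta-analytic estimates and certainty-of-evidence ratings.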

Article activity feed

  1. SciScore for 10.1101/2022.04.04.22273372:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: not detected.
    Sex as a biological variable: not detected.
    Randomization: A validated machine learning model facilitates efficient identification of randomized trials (21).
    Blinding: Key methods included description of the randomization process and allocation concealment, blinding of patients and healthcare providers, extent of and handling of missing outcome data, blinding of outcome assessors and adjudicators, and prespecification of outcomes and analyses.
    Power Analysis: We did not perform a sample size calculation since we included all eligible trial reports identified through our living SRNMAs up to August 3rd, 2021.

    Table 2: Resources

    Eligible preprint and peer reviewed articles report trials that randomize patients with suspected, probable, or confirmed COVID-19 to drug treatments, antiviral antibodies and cellular therapies, placebo, or standard care or trials that randomize healthy participants exposed or unexposed to COVID-19 to prophylactic drugs, standard care, or placebo.
    suggested: (Antibodies-Online Cat# ABIN753133, RRID:AB_11206991)
    Software and Algorithms
    To assess risk of bias, reviewers, following training and calibration exercises, use a revision of the Cochrane tool for assessing risk of bias in randomized trials (RoB 2.0) (22) (Supplement 3).
    Cochrane tool
    suggested: None

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).

    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Strengths and limitations: The strengths of this study include the comprehensive search for, and inclusion of preprint and published COVID-19 trial reports and rigorous data collection. The generalizability of our results is, however, limited to COVID-19. Journals have expedited the publication of COVID-19 research and have been publishing more prolifically on COVID-19 than in other areas, which may reduce opportunity for revisions between preprints and their subsequent publications and may mean time to and predictors of publication may be different than in other research areas. Although the WHO COVID-19 database is a comprehensive source of published and preprint literature, it does not include all preprint servers—though preprint servers not covered by our search address other subjects and are unlikely to include COVID-19 trials. It is likely that preprint reports of trials that are subsequently published in journals represent the most rigorous or transparently reported preprints and that they are not representative of all trial preprints. To assess preprint trustworthiness, we compared reporting of key aspects of the methods and results between preprint and published trial reports. We acknowledge, however, that published trial reports may still contain errors and that posting trial reports as preprints may allow more errors to be identified prior to final publication. We report on the number of publications and preprints that were retracted. Preprints, however, may be less...

    Results from TrialIdentifier: No clinical trial numbers were referenced.

    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.

    Results from JetFighter: We did not find any issues relating to colormaps.

    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.