Consistency of covid-19 trial preprints with published reports and impact for decision making: retrospective review

Abstract

Objective

To assess the trustworthiness (ie, complete and consistent reporting of key methods and results between preprint and published trial reports) and impact (ie, effects of preprints on meta-analytic estimates and the certainty of evidence) of preprint trial reports during the covid-19 pandemic.

Design

Retrospective review.

Data sources

World Health Organization covid-19 database and the Living Overview of the Evidence (L-OVE) covid-19 platform by the Epistemonikos Foundation (up to 3 August 2021).

Main outcome measures

Comparison of characteristics of covid-19 trials with and without preprints, estimates of time to publication of covid-19 preprints, and description of differences in reporting of key methods and results between preprints and their later publications. For the effects of eight treatments on mortality and mechanical ventilation, meta-analyses were performed including and excluding preprints at one, three, and six months after the first trial addressing the treatment became available as either a preprint or a publication (120 meta-analyses in total: 60 including preprints and 60 excluding preprints), and the certainty of evidence was assessed using the GRADE framework.
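As a rough illustration of what the "including preprints" versus "excluding preprints" comparisons involve, the sketch below pools a handful of hypothetical trial effect estimates once with and once without the preprint-only trials. It is not the authors' analysis code: the data are invented, a simple inverse variance fixed effect model stands in for the study's actual meta-analytic and GRADE workflow, and the `pooled_or` helper is our own illustrative function.

```python
import math

# Hypothetical trials: (log odds ratio, standard error, is_preprint).
# All values are invented for illustration only.
trials = [
    (-0.35, 0.20, False),  # published trial
    (-0.10, 0.15, False),  # published trial
    (-0.45, 0.30, True),   # preprint-only trial
    (-0.25, 0.25, True),   # preprint-only trial
]

def pooled_or(studies):
    """Inverse variance fixed effect pooled odds ratio with a 95% CI."""
    weights = [1.0 / se ** 2 for _, se, _ in studies]
    log_or = sum(w * y for w, (y, _, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (log_or - 1.96 * se, log_or + 1.96 * se)
    return tuple(round(math.exp(x), 2) for x in (log_or, *ci))

print("OR (95% CI) including preprints:", pooled_or(trials))
print("OR (95% CI) excluding preprints:", pooled_or([t for t in trials if not t[2]]))
```

Comparing the two pooled estimates and their confidence intervals in this way is, loosely, how a comparison might be classified as indicating benefit, no appreciable effect, or harm with and without preprint evidence.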

Results

Of 356 trials included in the study, 101 were available only as preprints, 181 only as journal publications, and 74 as preprints first and subsequently as journal publications. The median time from preprint posting to journal publication was about six months. Key methods and results showed few important differences between trial preprints and their subsequent published reports. Apart from two (3.3%) of 60 comparisons, point estimates from meta-analyses including preprints and those excluding preprints agreed on whether they indicated benefit, no appreciable effect, or harm. For nine (15%) of 60 comparisons, the rating of the certainty of evidence differed depending on whether preprints were included: the certainty of evidence was higher with preprints in four comparisons and lower in five.

Conclusion

No compelling evidence indicates that preprints provide results that are inconsistent with published papers. Preprints remain the only source of findings for many trials for several months, a delay that is too long in a health emergency, when patients need to be treated on the basis of timely evidence. The inclusion of preprints could affect the results of meta-analyses and the certainty of evidence. Evidence users should be encouraged to consider data from preprints.

Article activity feed

  1. SciScore for 10.1101/2022.04.04.22273372:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: not detected.
    Sex as a biological variable: not detected.
    Randomization: "A validated machine learning model facilitates efficient identification of randomized trials (21)."
    Blinding: "Key methods included description of the randomization process and allocation concealment, blinding of patients and healthcare providers, extent of and handling of missing outcome data, blinding of outcome assessors and adjudicators, and prespecification of outcomes and analyses."
    Power Analysis: "We did not perform a sample size calculation since we included all eligible trial reports identified through our living SRNMAs up to August 3rd, 2021."

    Table 2: Resources

    Antibodies

    Sentence: "Eligible preprint and peer reviewed articles report trials that randomize patients with suspected, probable, or confirmed COVID-19 to drug treatments, antiviral antibodies and cellular therapies, placebo, or standard care or trials that randomize healthy participants exposed or unexposed to COVID-19 to prophylactic drugs, standard care, or placebo."
    Detected term: antiviral
    Suggested resource: (Antibodies-Online Cat# ABIN753133, RRID:AB_11206991)

    Software and Algorithms

    Sentence: "To assess risk of bias, reviewers, following training and calibration exercises, use a revision of the Cochrane tool for assessing risk of bias in randomized trials (RoB 2.0) (22) (Supplement 3)."
    Detected term: Cochrane tool
    Suggested resource: None

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Strengths and limitations: The strengths of this study include the comprehensive search for, and inclusion of preprint and published COVID-19 trial reports and rigorous data collection. The generalizability of our results is, however, limited to COVID-19. Journals have expedited the publication of COVID-19 research and have been publishing more prolifically on COVID-19 than in other areas, which may reduce opportunity for revisions between preprints and their subsequent publications and may mean time to and predictors of publication may be different than in other research areas. Although the WHO COVID-19 database is a comprehensive source of published and preprint literature, it does not include all preprint servers—though preprint servers not covered by our search address other subjects and are unlikely to include COVID-19 trials. It is likely that preprint reports of trials that are subsequently published in journals represent the most rigorous or transparently reported preprints and that they are not representative of all trial preprints. To assess preprint trustworthiness, we compared reporting of key aspects of the methods and results between preprint and published trial reports. We acknowledge, however, that published trial reports may still contain errors and that posting trial reports as preprints may allow more errors to be identified prior to final publication. We report on the number of publications and preprints that were retracted. Preprints, however, may be less...

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, see the SciScore documentation.
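
    As a loose illustration of the kind of automated check described above, the snippet below scans a sentence for RRID-formatted identifiers. The regular expression is an assumption made for demonstration and is not SciScore's actual matching logic; verifying that a detected RRID resolves to the intended resource would require querying an external resolver, which is omitted here.

    ```python
    import re

    # Toy RRID detector: matches identifiers such as RRID:AB_11206991 or RRID:SCR_016233.
    # This pattern is an illustrative assumption, not SciScore's real implementation.
    RRID_PATTERN = re.compile(r"RRID:\s*([A-Z]{2,4}_[A-Za-z0-9-]+)")

    def find_rrids(text):
        """Return the RRID accessions mentioned in a block of manuscript text."""
        return RRID_PATTERN.findall(text)

    sentence = "(Antibodies-Online Cat# ABIN753133, RRID:AB_11206991)"
    print(find_rrids(sentence))  # ['AB_11206991']
    ```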