Cross-sectional study of preprints and final journal publications from COVID-19 studies: discrepancies in results reporting and spin in interpretation

Abstract

Objective

To compare results reporting and the presence of spin in COVID-19 study preprints with their finalised journal publications.

Design

Cross-sectional study.

Setting

International medical literature.

Participants

Preprints and final journal publications of 67 interventional and observational studies of COVID-19 treatment or prevention from the Cochrane COVID-19 Study Register published between 1 March 2020 and 30 October 2020.

Main outcome measures

Study characteristics and discrepancies in (1) results reporting (number of outcomes, outcome descriptor, measure, metric, assessment time point, data reported, reported statistical significance of result, type of statistical analysis, subgroup analyses (if any), whether outcome was identified as primary or secondary) and (2) spin (reporting practices that distort the interpretation of results so they are viewed more favourably).

Results

Of 67 included studies, 23 (34%) had no discrepancies in results reporting between preprints and journal publications. Fifteen (22%) studies had at least one outcome that was included in the journal publication but not the preprint; eight (12%) had at least one outcome that was reported in the preprint only. For outcomes that were reported in both preprints and journals, common discrepancies included differences in numerical values and statistical significance, additional statistical tests and subgroup analyses, and longer follow-up times for outcome assessment in journal publications.

At least one instance of spin occurred in both the preprint and journal publication in 23/67 (34%) studies, in the preprint only in 5 (7%), and in the journal publication only in 2 (3%). Spin was removed between the preprint and journal publication in 5/67 (7%) studies, but added in 1/67 (1%) study.

Conclusions

The COVID-19 preprints and their subsequent journal publications were largely similar in reporting of study characteristics, outcomes and spin. All COVID-19 studies published as preprints and journal publications should be critically evaluated for discrepancies and spin.

Article activity feed

  1. SciScore for 10.1101/2021.04.12.21255329:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    - Ethics: Field Sample Permit: The protocol for this study was registered in the Open Science Framework.[19] Data Source and Search Strategy: We sampled studies from the Cochrane COVID-19 Study Register (https://covid-19.cochrane.org/), a freely-available, continually-updated, annotated reference collection of human primary studies on COVID-19, including interventional, observational,
    - Sex as a biological variable: not detected.
    - Randomization: We extracted the text relevant to each discrepancy: Spin: Studies have used a variety of methods to measure spin in randomized controlled trials and observational studies.
    - Blinding: not detected.
    - Power Analysis: not detected.

    Table 2: Resources

    Software and Algorithms

    | Sentences | Resources |
    | --- | --- |
    | The Cochrane register prioritizes medRxiv as a preprint source as an internal sensitivity analysis in May 2020 showed that 90% (166/185) of the preprints that were eligible for systematic reviews came from this source. | Cochrane register (suggested: None) |
    | The register also includes preprint records sourced from PubMed. | PubMed (suggested: PubMed, RRID:SCR_004846) |
    | All extracted data from the included studies was stored in REDCap, a secure web-based application for the collection and management of data.[20] We extracted data from the medRxiv page and PDF for preprints and the online publication or PDF for journal articles. | REDCap (suggested: REDCap, RRID:SCR_003445) |

    Results from OddPub: Thank you for sharing your data.


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Journal publications also included more tables and figures, and more extensive discussion of limitations. Some of these differences may be due to more comprehensive reporting requirements of journals. Other changes, such as more information on the study population or greater discussion of limitations, may be due to requests for additional information during peer review.

    Since preprints are posted without peer review and most journal publications in our sample were likely to be peer reviewed because they were identified from PubMed, our study indirectly investigates the impact of peer review on research articles. Articles may not have been peer-reviewed in similar ways. Authors may have made changes in their papers that were independent of peer review. We observed instances where peer review appeared to improve clarity (e.g., more detail on measurements)[32,33] or interpretation (e.g., a requirement to present risk differences rather than just n (%) per treatment group).[34,35]

    Empirical evidence on the impact of peer review on manuscript quality is scarce. A study comparing submitted and published manuscripts found that the number of changes was relatively small and, similar to our study, primarily involved adding or clarifying information.[13] Some of the changes requested by peer reviewers were classified as having a negative impact on reporting, such as the addition of post-hoc subgroup analyses, statistical analyses that were not prespecified, or optimistic conclusions that d...

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • No funding statement was detected.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex as a biological variable and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.