The collective wisdom in the COVID-19 research: Comparison and synthesis of epidemiological parameter estimates in preprints and peer-reviewed articles

This article has been reviewed by the following groups


Abstract

No abstract available

Article activity feed

1. SciScore for 10.1101/2020.07.22.20160291:

Please note that not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Software and Algorithms
Sentences and suggested resources:

• We searched PubMed, Google Scholar and four popular preprint servers (i.e. medRxiv, bioRxiv, arXiv and SSRN) for papers published from 23 January to 20 March, 2020 using the following terms: “2019-nCoV”, “coronavirus” or “COVID-19”. (A sketch of this search follows the table.)
  PubMed
    suggested: (PubMed, RRID:SCR_004846)
  Google Scholar
    suggested: (Google Scholar, RRID:SCR_008878)
  bioRxiv
    suggested: (bioRxiv, RRID:SCR_003933)

• To compare the parameter estimates and timeliness between the preprints and peer-reviewed papers, the distributions of the four parameter estimates and TD of the two groups were separately plotted using the “seaborn” toolbox in Python 3.7.3. (A sketch of such a plot follows the table.)
  Python
    suggested: (IPython, RRID:SCR_001658)

• The bootstrap method was conducted with the built-in function “bootci” of Matlab R2017a. (A Python analogue follows the table.)
  Matlab
    suggested: (MATLAB, RRID:SCR_001622)
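The database search quoted in the first row above can be approximated programmatically. Below is a minimal sketch of the PubMed portion using Biopython's Entrez wrapper; the query string, date handling, and contact address are assumptions for illustration, since the excerpt does not report the authors' exact query syntax or tooling.

```python
# Minimal sketch of the PubMed portion of the quoted search strategy, using
# Biopython's Entrez wrapper. The terms and date range come from the quoted
# sentence; the exact query syntax is an assumption, not the authors' code.
from Bio import Entrez

Entrez.email = "you@example.org"  # hypothetical; NCBI requires a contact address

query = '"2019-nCoV" OR "coronavirus" OR "COVID-19"'
handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="2020/01/23",
    maxdate="2020/03/20",
    retmax=10000,
)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} PubMed records matched")
```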
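The second row describes plotting the distributions of the four parameter estimates and TD for the two groups with seaborn. Here is a minimal sketch of one such comparison on synthetic data; the column names, the choice of kdeplot, and the focus on R0 are assumptions, as the excerpt says only that seaborn was used.

```python
# Sketch: overlay the distribution of a parameter estimate (e.g. R0) for
# preprints vs. peer-reviewed papers. Data are synthetic; the DataFrame
# columns and plot type are illustrative assumptions.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "R0": np.concatenate([rng.normal(3.0, 0.6, 40),   # synthetic preprint estimates
                          rng.normal(2.9, 0.5, 30)]), # synthetic peer-reviewed estimates
    "source": ["preprint"] * 40 + ["peer-reviewed"] * 30,
})

sns.kdeplot(data=df, x="R0", hue="source", fill=True, common_norm=False)
plt.xlabel("Estimated basic reproduction number (R0)")
plt.tight_layout()
plt.show()
```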
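The third row refers to Matlab's built-in bootci. A rough Python analogue is sketched below using a percentile bootstrap over synthetic estimates; note that bootci actually defaults to the bias-corrected and accelerated (BCa) method, so this simpler percentile version only illustrates the general idea, not the authors' exact settings.

```python
# Python analogue of Matlab's bootci: a percentile bootstrap confidence
# interval for the mean of a set of parameter estimates. Data are synthetic.
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for `stat` over a 1-D sample."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # Resample with replacement and evaluate the statistic on each replicate.
    boot_stats = np.array([
        stat(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

estimates = [2.2, 2.7, 3.1, 2.5, 3.6, 2.9, 2.4]  # synthetic R0 estimates
print(bootstrap_ci(estimates))                    # 95% CI for the mean
```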

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
This study has some limitations. First, the data officially reported by China did not fully represent all infections and deaths: in the early period many patients died without a diagnosis, and under the enormous burden on the medical system in Hubei Province it was impossible to detect and report every case without omission. We can therefore validate our parameter estimates only to a certain extent, but this does not negate the reference value of the collective wisdom in the literature. Second, the validity of the preprints was compared and evaluated only on the overall distribution, demonstrating their academic value for the task of estimating the four epidemiological parameters of COVID-19. This does not mean that the result of any single preprint is accurate, nor can the conclusion of our study be arbitrarily extended to other fields. Scientists should treat preprints with caution and responsibility, the publication process for preprints should be further standardized, and the media should be guided to report on preprints scientifically. In conclusion, our quantitative analysis shows that the overall validity of the preprints in parameter estimation is not less than that of the peer-reviewed papers, and the latest information on the epidemic can be obtained more quickly through preprints. Furthermore, the simulation of COVID-19 in China showed that synthesizing the whole parameter space is an effective way to reduce uncertainty and to grasp the pattern...

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.