Coronavirus disease 2019 (COVID-19): an evidence map of medical literature


Abstract

Background

Since the beginning of the COVID-19 outbreak in December 2019, a substantial body of COVID-19 medical literature has been generated. As of June 2020, however, gaps and longitudinal trends in this literature had not been systematically identified, even though such an analysis could benefit research prioritisation and policy setting both in the COVID-19 pandemic and in future large-scale public health crises.

Methods

In this paper, we searched PubMed and Embase for medical literature on COVID-19 between 1 January and 24 March 2020. We characterised the growth of the early COVID-19 medical literature using evidence maps and bibliometric analyses to elicit cross-sectional and longitudinal trends and systematically identify gaps.

Results

The early COVID-19 medical literature originated primarily from Asia and focused mainly on clinical features and diagnosis of the disease. Many areas of potential research remain underexplored, such as mental health, the use of novel technologies and artificial intelligence, the pathophysiology of COVID-19 within different body systems, and the indirect effects of COVID-19 on the care of non-COVID-19 patients. Only 24.7% of articles involved international research collaboration. The median submission-to-publication duration was 8 days (interquartile range: 4–16 days).

Conclusions

Although still in its early phase, COVID-19 research has generated a large volume of publications; nevertheless, knowledge gaps remain to be filled, and there are areas for improvement for the global research community. Our analysis of early COVID-19 research may help inform research prioritisation and policy planning both in the current COVID-19 pandemic and in similar global health crises.

Article activity feed

  1. SciScore for 10.1101/2020.05.07.20093674:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement: not detected.
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: not detected.

    Table 2: Resources

    Software and Algorithms

    Sentence: Search strategy and selection criteria: We searched PubMed and Embase databases from 1 January to 24 March 2020 for the keywords “COVID” or “coronavirus” in the title or abstract.
    Resources: PubMed (suggested: PubMed, RRID:SCR_004846); Embase (suggested: EMBASE, RRID:SCR_001650)

    Sentence: Literature selection and data extraction: All extracted literature entries were exported into Microsoft Excel for screening and selection.
    Resource: Microsoft Excel (suggested: Microsoft Excel, RRID:SCR_016137)

    Sentence: Citation counts were retrieved from Google Scholar on 7 April 2020.
    Resource: Google Scholar (suggested: Google Scholar, RRID:SCR_008878)

    Sentence: All bibliometric analyses were conducted using Python version 3.8.0 (Python Software Foundation, Delaware, USA).
    Resource: Python (suggested: IPython, RRID:SCR_001658)
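
    The resource table above notes that the bibliometric analyses were run in Python 3.8. As a rough, hypothetical illustration of the kind of summary reported in the abstract (the median submission-to-publication interval of 8 days, IQR 4–16), the sketch below shows how such a statistic could be computed with pandas. The file name, column names, and the PubMed-style query string are assumptions for illustration only, not the authors' actual code or search syntax.

        # Minimal sketch (assumed, not the authors' code): summarising the
        # submission-to-publication interval for a screened set of records.
        import pandas as pd

        # Illustrative approximation of the stated search strategy
        # ("COVID" or "coronavirus" in the title or abstract); documented
        # here only, not executed against PubMed in this sketch.
        QUERY = '("COVID"[Title/Abstract] OR "coronavirus"[Title/Abstract])'

        # Hypothetical export of the screened literature entries.
        records = pd.read_csv(
            "covid19_articles.csv",
            parse_dates=["submission_date", "publication_date"],
        )

        # Submission-to-publication duration in days for each article.
        lag_days = (records["publication_date"] - records["submission_date"]).dt.days

        # Median and interquartile range, as reported in the abstract.
        print(f"median: {lag_days.median():.0f} days")
        print(f"IQR: {lag_days.quantile(0.25):.0f} to {lag_days.quantile(0.75):.0f} days")

    The same frame could be grouped by country of origin or by research topic to reproduce the cross-sectional breakdowns described in the abstract; those groupings are likewise assumptions about structure, not the authors' published pipeline.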

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Limitations: There are several limitations to our study. Firstly, our search period did not fully cover the period during which the majority of COVID-19 cases shifted from China to Europe and the USA, so the results do not fully describe the most recent research landscape because newly published articles were excluded. Furthermore, we did not search other databases such as ClinicalTrials.gov, which means most clinical trial protocols were excluded. In addition, only English-language articles were analysed, which resulted in the exclusion of articles published in Chinese, a substantial source of early COVID-19 literature. Lastly, excluding non-peer-reviewed research (such as preprints archived on medRxiv and bioRxiv) may have omitted some new evidence but ensured that only results that had undergone peer review were included.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex as a biological variable and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.