Public Engagement with COVID-19 Preprints: Bridging the Gap Between Scientists and Society

The surge in preprint server use, especially during the COVID-19 pandemic, necessitates a reexamination of their significance in the realm of science communication. This study rigorously investigates discussions surrounding preprints, framing them within the contexts of systems theory and boundary objects in scholarly communication. An analysis of a curated selection of COVID-19-related preprints from bioRxiv and medRxiv was conducted, emphasizing those that transitioned to journal publications, alongside the associated commentary and Twitter activity. The dataset was bifurcated into comments by biomedical experts versus those by non-experts, encompassing both academic and general public perspectives. Findings revealed that while peers dominated nearly half the preprint discussions, their presence in Twitter dialogues was markedly diminished. Intriguingly, the themes explored by these two groups diverged considerably. Preprints emerged as potent boundary objects, reinforcing, rather than obscuring, the delineation between scientific and non-scientific discourse. They serve as crucial conduits for knowledge dissemination and foster interdisciplinary engagement. Nonetheless, the interplay between scientists and the wider public remains nuanced, necessitating strategies to incorporate these diverse discussions into the peer review continuum without compromising academic integrity, and to cultivate sustained engagement from both experts and the broader community.

Article activity feed

  1. This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at

    As a signatory of Publish Your Reviews, I have committed to publish my peer reviews alongside the preprint version of an article. For more information, see

    "Preprints as a medium for public debate on the COVID-19 pandemic: Observations on the blurring of internal and external scientific communication" is an analysis of comments and tweets on a subset of COVID-19 preprints posted to bioRxiv and medRxiv. The author seeks to understand how preprints affect the boundaries between two groups: those "intra" and "extra" to the research community. The public perception and usage of preprints is not only important for the dissemination of knowledge, but also for opening up the traditional boundaries of the scientific process.

    As a reviewer, I am limited in that I lack experience with techniques such as Latent Dirichlet Allocation and thus cannot evaluate that part of the paper. With a background in biomedical sciences, I am also unfamiliar with conventions in social science papers. Nevertheless, I offer here some suggestions that I believe may help make future versions of this paper easier to understand.

    Major comments

    1. The paper hinges on a robust way to distinguish comments from within and outside ("intra" and "extra") the scientific community or field. However, more evidence could be provided to assure readers that this method is reliable. First, additional methodological details on the qualitative evaluation to classify comments as "intra" and "extra" described in the following paragraphs would be very helpful. Did the rater(s) have a background in the fields relevant to all the preprints in the sample? Second, determining or reducing the error rate in the classification by using additional raters would strengthen the claims.

    Furthermore, I would like to look at the complete data set of classified comments, but I have not been able to find it (perhaps I overlooked it), and the preprint states that no public data is available. Making these data available would make it possible to better evaluate the manuscript and to verify the strength of the claims. The examples presented in Table 1 leave me with questions. For example, the third "intra" comment in Table 1 contains an anecdote at the end of the quote, which to me is a strong signal that the comment is not following the norms of the bioresearch community. I personally would likely have put that one in the "extra" group.

    Some of the evidence presented, for example Figure 10 showing overlap of "intra" and "extra" comments on the Twitter network, suggests the method may benefit from further validation. Section 6.1 states, "The social network analysis also showed that very different Twitter communities participated with sometimes very different perspectives on the pandemic, containment measures, and vaccines. Both academic and non-academic comments can be found in all communities." If the second sentence is true, then does this mean there are not really separate academic and non-academic communities at all (and if so, what does this mean for the key questions posed in the paper about how preprints interface between the research community and general public)? An alternative explanation for the observation of overlap of "intra" and "extra" shown in Figure 10 might be that the method does not appropriately distinguish between academic and non-academic comments. Further discussion or analysis would be helpful.
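The additional-rater check suggested above could be quantified with a standard inter-rater agreement statistic such as Cohen's kappa. A minimal sketch (the label lists are hypothetical, not the paper's data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    # Cohen's kappa: agreement between two raters, corrected for the
    # agreement expected by chance given each rater's label distribution.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical "intra"/"extra" labels from two independent raters
rater_1 = ["intra", "extra", "intra", "intra", "extra", "extra"]
rater_2 = ["intra", "extra", "extra", "intra", "extra", "intra"]
print(round(cohen_kappa(rater_1, rater_2), 3))  # → 0.333
```

Reporting a kappa value (conventionally, roughly 0.6–0.8 and above is read as substantial agreement) alongside the classification would directly address the reliability concern.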

    2. It is not clear to me how the discussion of systems theory and boundary objects directly results in the hypotheses presented. It might be helpful to the reader to explain more explicitly the results that would be expected if one or the other is true.

    3. I think that some of the hypotheses cannot be appropriately interrogated with the data presented: H2 pertains to the peer review process, but peer review reports are not analyzed here. H4 discusses interdisciplinary collaboration, but the analysis relies on only two groups, "intra" and "extra"; in order to differentiate interdisciplinary groups of experts, multiple dictionaries would probably be necessary. (For example, many different disciplines use words such as "model," so there are likely to be some interdisciplinary researchers in the "intra" group.)
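The multi-dictionary approach suggested above could look like the following sketch; the lexicons are illustrative placeholders, not the dictionary actually used in the paper:

```python
# Illustrative discipline lexicons (hypothetical, for the sketch only)
LEXICONS = {
    "epidemiology": {"incidence", "cohort", "transmission", "outbreak"},
    "virology": {"spike", "antibody", "mutation", "antigen"},
    "modelling": {"model", "simulation", "parameter", "estimate"},
}

def classify(comment):
    # Assign the discipline whose lexicon matches the most tokens;
    # fall back to "extra" when no lexicon matches at all.
    tokens = set(comment.lower().split())
    scores = {field: len(tokens & words) for field, words in LEXICONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "extra"

print(classify("The cohort incidence suggests community transmission"))  # → epidemiology
```

In practice the lexicons would overlap (as noted above, many disciplines use "model"), so weighting or tie-breaking rules would also be needed.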

    Minor comments

    1. Line 77: Section 2.1 reads as though the author has a hierarchical organization of these various concepts in mind, but I think readers may find this distracting, and it seems unnecessary for the purpose of the discussion.

    2. Figure 1: The number of comments differs between the figure (2,095) and the text (1,992); please clarify.

    3. In Table A1, I would dispute that many researchers consider "worthy of publication" to relate to significance exclusively. Furthermore, "innovative" is to me more likely to be associated with novelty than significance. "Well-supported," "sound," "comprehensive," and "rigorous" sound more like soundness than relevance.

    4. Line 336: I was curious why the authors of the 10k tweets were not used for the analysis; why was a second group created instead?

    5. Figure 2: Are any of the differences in word frequencies observed in Figure 2 statistically significant? More importantly, some words show two values for intra or two values for extra, rather than one of each. I assume this is a rendering problem, since for each word that does show one value per group, the extra value is on top. The same problem reappears in Figure 6.

    6. Line 412 and throughout: Recommend labeling topic assignments with text labels rather than just their numbers in the text and Figure 4.

    7. Line 449: Did you try normalizing for the shorter length of Twitter posts, given the word-use frequencies that appear in the preprint server comments?

    8. Figures 3 and 7: I find it difficult to compare across the two LDA topic figures. Perhaps presenting them side by side or as tables would be helpful.

    9. Figures 9 and 10: I'm not clear on the value of the colors in Figure 9; instead, the labels could be printed on Figure 10 (with lighter colors used to represent the points).
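Two of the minor points above (significance of word-frequency differences, and normalizing for the very different lengths of tweets and preprint comments) can be addressed together with Dunning's log-likelihood ratio, which compares counts relative to each corpus's total token count. A minimal sketch with illustrative numbers:

```python
import math

def log_likelihood_g2(count_a, total_a, count_b, total_b):
    # Dunning's log-likelihood ratio (G^2) for one word's frequency in
    # corpus A vs. corpus B. Using corpus totals normalizes for length,
    # so short tweets and long preprint comments compare directly.
    expected_a = total_a * (count_a + count_b) / (total_a + total_b)
    expected_b = total_b * (count_a + count_b) / (total_a + total_b)
    g2 = 0.0
    if count_a:
        g2 += count_a * math.log(count_a / expected_a)
    if count_b:
        g2 += count_b * math.log(count_b / expected_b)
    return 2 * g2

# Illustrative counts: a word seen 50 times in 10,000 comment tokens
# vs. 120 times in 80,000 tweet tokens. G^2 > 3.84 ~ p < 0.05 at 1 df.
print(log_likelihood_g2(50, 10_000, 120, 80_000) > 3.84)  # → True
```

A chi-square test on the same 2x2 counts would serve equally well; the log-likelihood form is the one most common in corpus linguistics.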

    Thank you for the opportunity to provide comments on this paper, and please let me know if further discussion would be helpful!

    -- Jessica Polka

    Competing interests

    The author declares that they have no competing interests.

  2. Published on OSF Preprints
