Preprint review services: Disrupting the scholarly communication landscape?

Abstract

Preprinting has gained considerable momentum, and in some fields it has turned into a well-established way to share new scientific findings. The possibility of organising quality control and peer review for preprints is also increasingly highlighted, leading to the development of preprint review services. We report a descriptive study of preprint review services with the aim of developing a systematic understanding of the main characteristics of these services, evaluating how they manage preprint review, and positioning them in the broader scholarly communication landscape. Our study shows that preprint review services have the potential to turn peer review into a more transparent and rewarding experience and to improve publishing and peer review workflows. We are witnessing the growth of a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. In the longer term, however, preprint review services may disrupt the scholarly communication landscape in a more radical way.

Article activity feed

  1. This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/10210714.

    This review reflects comments and contributions from Dibyendu Roy Chowdhury, Gary McDowell, Stephen Gabrielson and Ashley Farley. Review synthesized by Stephen Gabrielson.

    This study explores the emerging field of preprint review services, which aim to evaluate preprints prior to journal publication, and discusses how these peer review services might add value to scholarly communication.

    Minor comments:

    • I think that this is a very useful and well-thought-through paper. Its applicability is wide-ranging, and as funders begin to think about implementing preprint policies it's helpful to consider the peer review and quality component. This gives funders more opportunity to support and implement use of these tools/organizations. I added a few comments where I think the broader preprint landscape or discussion could be considered.

    • I think there is an opportunity in the introduction to reference several of the recent studies and surveys conducted to investigate attitudes towards preprints in specific fields. It would also be helpful to have a longer, clearer definition of what the authors mean by preprint review services - particularly because I can see that eLife may not have been included because it may be considered a journal that reviews preprints for journal publication, rather than a service reviewing preprints separate from curation.

    • In Figure 1, I particularly appreciated highlighting users beyond the traditional scholarly/academic community. I would like to suggest incorporating some related concepts across the benefits for the other groups - for example, for authors, there is the clear current advantage of cost, and those who are independent researchers, or have less funding available for publication, can use this to disseminate work in a way that is recognized by other scholars as "legitimate".

    • Also in regard to Figure 1, I don't necessarily think that these are "new forms" of peer review, since peer review still looks like peer review, but maybe "new sources" or "new opportunities". This might be too radical to include, but I can't help but think that this can be a way to review and disseminate information outside of the traditional system. A benefit for authors can be "not having to participate in the traditional publishing enterprise".

    • The logic of not using the term "peer" for platforms that review preprints makes sense. Did the authors consider removing "peer" altogether and comparing "preprint review services" with "journal-based review services"? Looking at this particularly through the lens of the Equity and Inclusion School, the definition of "peer" can be critiqued in the journal system much like the very valid rationale given here. My concern is that it "others" preprint reviewers as "not peers". This is just a minor comment/semantics discussion.

    • The term 'preprint review services' is well-defined and differentiated from 'journal-based peer review'. Additional clarification on the specific criteria used to select the services would be great.

    • In the third paragraph of section 3 "Overview of preprint review services", the authors describe how with some preprint review services, the "selection of reviewers does not depend on the editor's decision only". Could this be articulated very explicitly - does this mean anyone who is interested can show up, review, and post their review of the preprint? Or are there nuances? I find this confusing with the next sentence. For example, on PREreview I can review preprints with no-one's "permission", but for some of these it sounds like there is some "gatekeeping" of whose review gets posted. Also, the last sentence of this paragraph on self-nomination of reviewers could be expanded. In light of my comments about how self-selection works, perhaps a clear articulation of how this differs from journal processes (e.g. just emailing an editor/the journal to ask to review) would help.

    • In section 3 "Overview of preprint review services", preLights is called out for investing in reviewer training. PREreview does a lot of this as well, and I would call it out too.

    • I think that there is space in this paper to include a few of the studies conducted on assessing the differences between preprints and the journal version of record. While the community is quite concerned with quality control, the data is showing that this concern may be a bit unfounded. Of course, there are many caveats, but I think it's important to highlight.

    • When referencing Twitter, I'm not sure how important it is to say "X previously known as Twitter"?

    • Section 4.1 mentions the peer review crisis. It might be important to state what the current peer review crisis is.

    • Section 4.4 talks about reviewer incentives – I would also be interested in a discussion on the incentives that institutions may be creating to entice faculty to do preprint reviews. Is there anything from the National Academies or HELIOS that mentions how institutions can encourage preprint review? The "Statement on peer reviewed publications" from cOAlition S might also be worth calling out: https://www.coalition-s.org/statement-on-peer-reviewed-publications/

    • Should there be a reference to the recent name change of Rapid Reviews: COVID-19 to Rapid Reviews: Infectious Diseases?

    • In paragraph 6 of section 4.4 ORCID is mentioned. The authors might consider expanding on the ORCID discussion here, to push ORCID for better recognition of some of these other review services so that their reviews can be included on researchers' records. How many of these services are interoperable with ORCID? How can we improve the ability to write preprint reviews to ORCID records?

    • Section 6 is about how preprint review services fit into the publishing landscape. If eLife doesn't fit into the authors' definition of a preprint review service, would it still be worthwhile to mention eLife here as a journal publisher that has taken a new publishing approach to preprints and preprint review?

    • Section 6 also includes a discussion on how preprint review services might be seen to add more complexity to the peer review system. I think that with more time they will have the opportunity to show that the journal system is no longer fit for this purpose.

    Comments on reporting:

    • Very much appreciate the availability of the data and the detailed methods. I feel like it would be easy to reproduce or complement this study as more services become available.

    Suggestions for future studies:

    • This has a particular use in education about publishing and preprints - combined with the previous four schools of thought paper, it gives a useful framing for teaching about scholarly publication, and so may be useful to look at in the context of training and transparency broadly in science education/professional development.

    • I would love to see a follow-up with a focus on the cost of running these services. Twenty-three options is great in an era of experimentation, but I have to think (with my funder hat on) that these may not be sustainable financially for the long term. There also might be greater opportunities for combining efforts.

    Competing interests

    The author declares that they have no competing interests.