Preprint review services: Disrupting the scholarly communication landscape?
Curation statements for this article:
Curated by MetaROR
Editorial Assessment
The authors present a descriptive analysis of preprint review services. The analysis focuses on the services’ relative characteristics and differences in preprint review management. The authors conclude that such services have the potential to improve the traditional peer review process. Two metaresearchers reviewed the article. They note that the background section and literature review are current and appropriate, the methods used to search for preprint review services are generally sound and sufficiently detailed to allow for reproduction, and the discussion related to anonymizing articles and reviews during the review process is useful. The reviewers also offered suggestions for improvement. They point to terminology that could be clarified. They suggest adding URLs for each of the 23 services included in the study. Other suggestions include explaining why overlay journals were excluded, clarifying the limitation related to including only English-language platforms, archiving rawer input data to improve reproducibility, adding details related to the qualitative text analysis, discussing any existing empirical evidence about misconduct as it relates to different models of peer review, and improving field inclusiveness by avoiding conflation of “research” and “scientific research.”
The reviewers and I agree that the article is a valuable contribution to the metaresearch literature related to peer review processes.
Handling Editor: Kathryn Zeiler
Competing interest: I am co-Editor-in-Chief of MetaROR, working with Ludo Waltman, a co-author of the article and co-Editor-in-Chief of MetaROR.
This article has been reviewed by the following groups:
Listed in
- Evaluated articles (PREreview)
- Preprint review and curation (mark2d2)
- ASAPbio Meta-Research Crowd PREreviews (prereview)
- Evaluated articles (MetaROR)
Abstract
Preprinting has gained considerable momentum, and in some fields it has turned into a well-established way to share new scientific findings. The possibility of organising quality control and peer review for preprints is also increasingly highlighted, leading to the development of preprint review services. We report a descriptive study of preprint review services with the aim of developing a systematic understanding of the main characteristics of these services, evaluating how they manage preprint review, and positioning them in the broader scholarly communication landscape. Our study shows that preprint review services have the potential to turn peer review into a more transparent and rewarding experience and to improve publishing and peer review workflows. We are witnessing the growth of a mixed system in which preprint servers, preprint review services and journals operate mostly in complementary ways. In the longer term, however, preprint review services may disrupt the scholarly communication landscape in a more radical way.
Article activity feed
-
This manuscript examines preprint review services and their role in the scholarly communications ecosystem. It seems quite thorough to me. In Table 1 they list many peer-review services that I was unaware of e.g. SciRate and Sinai Immunology Review Project.
To help elicit critical & confirmatory responses for this peer review report I am trialling Elsevier’s suggested “structured peer review” core questions, and treating this manuscript as a research article.
Introduction
Is the background and literature section up to date and appropriate for the topic?
Yes.
Are the primary (and secondary) objectives clearly stated at the end of the introduction?
No. Instead the authors have chosen to put the two research questions on page 6 in the methods section. I wonder if they ought to be moved into the introduction – the research questions are not methods in themselves. Might it be better to state the research questions first and then detail the methods one uses to address those questions afterwards? (As Elsevier’s structured template seems implicitly to prefer.)
Methods
Are the study methods (including theory/applicability/modelling) reported in sufficient detail to allow for their replicability or reproducibility?
I note with approval that the version number of the software they used (ATLAS.ti) was given.
I note with approval that the underlying data is publicly archived under CC BY at figshare.
The Atlas.ti report data spreadsheet could do with some small improvement – the column headers are a little cryptic, e.g. “Nº ST” and “ST”, which I eventually deduced stand for Number of Schools of Thought and Schools of Thought (?)
Is there a rawer form of the data that could be deposited with which to evidence the work done? The Atlas.ti report spreadsheet seemed like it was downstream output data from Atlas.ti. What was the rawer input data entered into Atlas.ti? Can this be archived somewhere in case researchers want to reanalyse it using other tools and methods?
I note with disapproval that Atlas.ti is proprietary software which may hinder the reproducibility of this work. Nonetheless I acknowledge that Atlas.ti usage is somewhat ‘accepted’ in social sciences despite this issue.
I think the qualitative text analysis is a little vague and/or under-described: “Using ATLAS.ti Windows (version 23.0.8.0), we carried out a qualitative analysis of text from the relevant sites, assigning codes covering what they do and why they have chosen to do it that way.” That’s not enough detail. Perhaps an example or two could be given? Was inter-rater reliability assessed when ‘assigning codes’? How do we know the ‘codes’ were assigned accurately?
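(To illustrate what I mean by inter-rater reliability: agreement between two coders is commonly summarised with Cohen’s kappa. Below is a minimal sketch of the calculation; the code labels and coder assignments are hypothetical, not taken from the authors’ ATLAS.ti data.)

```python
# Minimal sketch of Cohen's kappa for two coders who each assign one code
# per passage. All codes and assignments below are hypothetical examples.
from collections import Counter

coder_a = ["incentives", "transparency", "transparency", "efficiency", "incentives"]
coder_b = ["incentives", "transparency", "efficiency", "efficiency", "incentives"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement, from each coder's marginal code frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f}, expected={expected:.2f}, kappa={kappa:.2f}")
```

Reporting a statistic like this, or at least describing how coding disagreements were resolved, would make the coding process much easier to evaluate.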
Are statistical analyses, controls, sampling mechanism, and statistical reporting (e.g., P-values, CIs, effect sizes) appropriate and well described?
This is a descriptive study (and that’s fine) so there aren’t really any statistics on show here other than simple ‘counts’ (of Schools of Thought) in this manuscript. There are probably some statistical processes going on within the proprietary qualitative analysis of text done in ATLAS.ti but it is under-described and so hard for me to evaluate.
Results
Is the results presentation, including the number of tables and figures, appropriate to best present the study findings?
Yes. However, I think a canonical URL to each service should be given. A URL is very useful for disambiguation, to confirm e.g. that the authors mean this Hypothesis (www.hypothes.is) and NOT this Hypothesis (www.hyp.io). I know exactly which Hypothesis is the one the authors are referring to but we cannot assume all readers are experts 😊
Optional suggestion: I wonder if the authors couldn’t present the table data in a slightly more visual and/or compact way? It’s not very visually appealing in its current state. Purely as an optional suggestion, to make the table more compact one could recode the answers given in one or more of the columns 2, 3 and 4 in the table e.g. "all disciplines = ⬤ , biomedical and life sciences = ▲, social sciences = ‡ , engineering and technology = † ". I note this would give more space in the table to print the URLs for each service that both reviewers have requested.
| Service name | Developed by | Scientific disciplines | Types of outputs |
| --- | --- | --- | --- |
| Episciences | Other | ⬤ | blah blah blah. |
| Faculty Opinions | Individual researcher | ▲ | blah blah blah. |
| Red Team Market | Individual researcher | ‡ | blah blah blah. |
The "Types of outputs" column might even lend themselves to mini-colour-pictograms (?) which could be more concise and more visually appealing? A table just of text, might be scientifically 'correct' but it is incredibly dull for readers, in my opinion.
Are additional sub-analyses or statistical measures needed (e.g., reporting of CIs, effect sizes, sensitivity analyses)?
No / Not applicable.
Discussion
Is the interpretation of results and study conclusions supported by the data and the study design?
Yes.
Have the authors clearly emphasized the limitations of their study/theory/methods/argument?
No. Perhaps a discussion of the linguistic/comprehension bias of the authors might be appropriate for this manuscript. What if there are ‘local’ or regional Chinese, Japanese, Indonesian or Arabic language preprint review services out there? Would this authorship team really be able to find them?
Additional points:
Perhaps the points made in this manuscript about financial sustainability (p24) are a little too pessimistic. I get it, there is merit to this argument, but there is also some significant investment going on if you know where to look. Perhaps it might be worth citing some recent investments e.g. the Gates Foundation -> PREreview (2024) https://content.prereview.org/prereview-welcomes-funding/ and Arcadia’s $4 million USD to COAR for the Notify Project, which supports a range of preprint review communities including Peer Community In, Episciences, PREreview and Harvard Library. (source: https://coar-repositories.org/news-updates/coar-welcomes-significant-funding-for-the-notify-project/ )
Although I note they are mentioned, I think more needs to be written about the similarity and overlap between ‘overlay journals’ and preprint review services. Are these arguably not just two different terms for roughly the same thing? If you have Peer Community In, which has its overlay component in the form of the Peer Community Journal, why not mention other overlay journals like Discrete Analysis and The Open Journal of Astrophysics? I think Peer Community In (& its PCJ) is the go-to example of the thinness of the line that separates (or doesn’t!) overlay journals and preprint review services. Some more exposition on this would be useful.
-
Thank you very much for the opportunity to review the preprint titled “Preprint review services: Disrupting the scholarly communication landscape?” (https://doi.org/10.31235/osf.io/8c6xm) The authors review services that facilitate peer review of preprints, primarily in the STEM (science, technology, engineering, and math) disciplines. They examine how these services operate and their role within the scholarly publishing ecosystem. Additionally, the authors discuss the potential benefits of these preprint peer review services, placing them in the context of tensions in the broader peer review reform movement. The discussions are organized according to four “schools of thought” in peer review reform, as outlined by Waltman et al. (2023), which provides a useful framework for analyzing the services. In terms of methodology, I believe the authors were thorough in their search for preprint review services, especially given that a systematic search might be impractical.
As I see it, the adoption of preprints and reforming peer review are key components of the move towards improving scholarly communication and open research. This article is a useful step along that journey, taking stock of current progress, with a discussion that illuminates possible paths forward. It is also well-structured and easy for me to follow. I believe it is a valuable contribution to the metaresearch literature.
On a high level, I believe the authors have made a reasonable case that preprint review services might make peer review more transparent and rewarding for all involved. Looking forward, I would like to see metaresearch which gathers further evidence that these benefits are truly being realised.
In this review, I will present some general points which merit further discussion or clarification to aid an uninitiated reader. Additionally, I raise one issue regarding how the authors framed the article and categorised preprint review services and the disciplines they serve. In my view, this problem does not fundamentally undermine the robust search, analyses, and discussion in this paper, but it risks putting off some researchers and constrains how broadly one should derive conclusions.
General comments
Some metaresearchers may be aware of preprints, but not all readers will be familiar with them. I suggest briefly defining what they are, how they work, and which types of research have benefited from preprints, similar to how “preprint review service” is clearly defined in the introduction.
Regarding Waltman et al.’s (2023) “Equity & Inclusion” school of thought, does it specifically aim for “balanced” representation by different groups as stated in this article? There is an important difference between “balanced” versus “equitable” representation, and I would like to see it addressed in this text.
Another analysis I would like to see is whether any of the 23 services reviewed present any evidence that their approach has improved research quality. For instance, the discussion on peer review efficiency and incentives states that there is currently “no hard evidence” that journals want to utilise reviews by Rapid Reviews: COVID-19, and that “not all journals are receptive” to partnerships. Are journals skeptical of whether preprint review services could improve research quality? Or might another dynamic be at work?
The authors cite Nguyen et al. (2015) and Okuzaki et al. (2019), stating that peer review is often “overloaded”. I would like to see a clearer explanation of what “overloaded” means in this context so that a reader does not have to read the two cited papers.
To the best of my understanding, one of the major sticking points in peer review reform is whether to anonymise reviewers and/or authors. Consequently, I appreciate the comprehensive discussion about this issue by the authors.
However, I am only partially convinced by the statement that double anonymity is “essentially incompatible” with preprint review. For example, there may be as-yet-unexplored ways to publish anonymous preprints with (a) a notice that the preprint has been submitted to, or is undergoing, peer review; and (b) a statement that the authors will be revealed once peer review has been performed (e.g. at least one review has been published). This would avoid the issue of publishing only after review is concluded, as is the case for Hypothesis and Peer Community In.
Additionally, the authors describe 13 services which aim to “balance transparency and protect reviewers’ interests”. This is a laudable goal, but I am concerned that framing this as a “balance” implies a binary choice, and that to have more of one, we must lose an equal amount of the other. Thinking only in terms of “balance” prevents creative, win-win solutions. Could a case be made for non-anonymity to be complemented by a reputation system for authors and reviewers? For example, major misconduct (e.g. retribution against a critical review) would be recorded in that system and dissuade bad actors. Something similar can already be seen in the reviewer evaluation system of CrowdPeer, which could plausibly be extended or modified to highlight misconduct.
I also note that misconduct and abusive behaviour already occur even in fully or partially anonymised peer review, and they are not limited to the review of preprints. While I am not aware of existing literature on this topic, academics’ fears seem reasonable. For example, there are at least anecdotal testimonies that a reviewer would deliberately reject a paper to retard the progress of a rival research group, while taking the ideas of that paper and beating their competitors to winning a grant. Or, a junior researcher might refrain from giving a negative review out of fear that the senior researcher whose work they are reviewing might retaliate. These fears, real or not, seem to play a part in the debates about if and how peer review should (or should not) be anonymised. I would like to see an exploration of whether de-anonymisation will improve or worsen this behaviour and in what contexts. And if such studies exist, it would be good to discuss them in this paper.
I found it interesting that almost all preprint review services claim to be complementary to, and not compete with, traditional journal-based peer review. The methodology described in this article cannot definitively explain what is going on, but I suspect there may be a connection between this aversion to competing with traditional journals, and (a) the skepticism of journals towards partnering with preprint review services and (b) the dearth of publisher-run options. I hypothesise that there is a power dynamic at play, where traditional publishers have a vested interest in maintaining the power they hold over scholarly communication, and that preprint review services stress their complementarity (instead of competitiveness) as a survival mechanism. This may be an avenue for further metaresearch.
To understand which fields of research are actually present on the services categorised under “all disciplines,” I used the Random Integer Set Generator by the Random.org true random number service (https://www.random.org/integer-sets/) to select five services for closer examination: Hypothesis, Peeriodicals, PubPeer, Qeios, and Researchers One. Of those, I observed that Hypothesis is an open source web annotation service that allows commenting on and discussion of any web page on the Internet, regardless of whether it is research or preprints. Hypothesis has a sub-project named TRiP (Transparent Review in Preprints), which is their preprint review service in collaboration with Cold Spring Harbor Laboratory. It is unclear to me why the authors listed Hypothesis as the service name in Table 1 (and elsewhere) instead of TRiP (or other similar sub-projects). In addition, Hypothesis seems to be framed as a generic web annotation service that is used by some as a preprint review tool. This seems fundamentally different from others that are explicitly set up as preprint review services. This difference seems noteworthy to me.
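(As an aside on reproducibility: the same kind of five-service draw can be scripted with a fixed seed, as in the minimal sketch below. The seed is arbitrary and the service list abbreviated, so this will not reproduce my Random.org draw exactly.)

```python
# Minimal sketch: a reproducible random draw of 5 of the 23 services for
# closer examination. The list is abbreviated here; a fixed seed makes the
# draw repeatable, unlike a one-off Random.org draw.
import random

services = [
    "Hypothesis", "Peeriodicals", "PubPeer", "Qeios", "Researchers One",
    "Episciences", "PREreview",  # ... plus the remaining Table 1 services
]

random.seed(42)  # arbitrary but stated, so others can re-run the draw
print(random.sample(services, k=5))
```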
To aid readers, I also suggest including hyperlinks to the 23 services reviewed in this paper. My comments on disciplinary representation in these services are elaborated further below.
One minor point of curiosity is that several services use an “automated tool” to select reviewers. It would be helpful to describe in this paper exactly what those tools are and how they work, or report situations where services do not explain it.
Lastly, what did the authors mean by “software heritage” in section 6? Are they referring to the organisation named Software Heritage (https://www.softwareheritage.org/) or something else? It is not clear to me how preprint reviews would be deposited in this context.
Respecting disciplinary and epistemic diversity
In the abstract and elsewhere in the article, the authors acknowledge that preprints are gaining momentum “in some fields” as a way to share “scientific” findings. After reading this article, I agree that preprint review services may disrupt publishing for research communities where preprints are in the process of being adopted or already normalised. However, I am less convinced that such disruption is occurring, or could occur, for scholarly publishing more generally.
I am particularly concerned about the casual conflation of “research” and “scientific research” in this article. Right from the start, it mentions how preprints allow sharing “new scientific findings” in the abstract, stating they “make scientific work available rapidly.” It also notes that preprints enable “scientific work to be accessed in a timely way not only by scientists, but also…” This framing implies that all “scholarly communication,” as mentioned in the title, is synonymous with “scientific communication.” Such language excludes researchers who do not typically identify their work as “scientific” research. Another example of this conflation appears in the caption for Figure 1, which outlines potential benefits of preprint review services. Here, “users” are defined as “scientists, policymakers, journalists, and citizens in general.” But what about researchers and scholars who do not see themselves as “scientists”?
Similarly, the authors describe the 23 preprint review services using six categories, one of which is “scientific discipline”. One of those disciplines is called “humanities” in the text, and Table 1 lists it as a discipline for Science Open Reviewed. Do the authors consider “humanities” to be a “scientific” discipline? If so, I think that needs to be justified with very strong evidence.
Additionally, Waltman et al.’s four schools of thought for peer review reform work well with the 23 services analysed. However, at least three out of the four are explicitly described as improving “scientific” research.
Related to the above is how the five “scientific disciplines” are described as the “usual organisation” of the scholarly communication landscape. On what basis should they be considered “usual”? In this formulation, research in literature, history, music, philosophy, and many other subjects would all be lumped together into the “humanities”, which sit at the same hierarchical level as “biomedical and life sciences”, arguably a much more specific discipline. My point is not to argue for a specific organisation of research disciplines, but to highlight a key epistemic assumption underlying the whole paper that comes across as very STEM-centric (science, technology, engineering, and math).
How might this part of the methodology affect the categories presented in Table 1? “Biomedical and life sciences” appear to be overrepresented compared to other “disciplines”. I’d like to see a discussion that examines this pattern, and considers why preprint review services (or maybe even preprints more generally) appear to cover mostly the biomedical or physical sciences.
In addition, there are 12 services described as serving “all disciplines”. I believe this paper can be improved by at least a qualitative assessment of the diversity of disciplines actually represented on those services. Because it is reported that many of these services stress improving the “reproducibility” of research, I suspect most of them serve disciplines which rely on experimental science.
I randomly selected five services for closer examination, as mentioned above. Of those, only Qeios has demonstrated an attempt to at least split “arts and humanities” into subfields. The others either don’t have such categories at all, or have a clear focus on a few disciplines (e.g. life sciences for Hypothesis/TRiP). In all cases I studied, there is a heavy focus on STEM subjects, especially biology or medical research. However, they are all categorised by the authors as serving “all disciplines”.
If preprint review services originate from, or mostly serve, a narrow range of STEM disciplines (especially experiment-based ones), it would be worth examining why that is the case, and whether preprints and reviews of them could (or could not) serve other disciplines and epistemologies.
It is postulated that preprint review services might “disrupt the scholarly communication landscape in a more radical way”. Considering the problematic language I observed, what about fields of research where peer-reviewed journal publications are not the primary form of communication? Would preprint review services disrupt their scholarly communications?
To be clear, my concern is not just the conflation of language in a linguistic sense but rather inequitable epistemic power. I worry that this conflation would (a) exclude, minoritise, and alienate researchers of diverse disciplines from engaging with metaresearch; and (b) blind us from a clear pattern in these 23 services, that is their strong focus on the life sciences and medical research and a discussion of why that might be the case. Critically, what message are we sending to, for example, a researcher of 18th century French poetry with the language and framing of this paper? I believe the way “disciplines” are currently presented here poses a real risk of devaluing and minoritising certain subject areas and ways of knowing. In its current form, I believe that while this paper is a very valuable contribution, one should not derive from it any conclusions which apply to scholarly publishing as a whole.
The authors have demonstrated inclusive language elsewhere. For example, they have consciously avoided “peer” when discussing preprint review services, clearly contrasting them to “journal-based peer review”. Therefore, I respectfully suggest that similar sensitivity be adopted to avoid treating “scientific research” and “research” as the same thing. A discussion, or reference to existing works, on the disciplinary skew of preprints (and reviews of them) would also add to the intellectual rigour of this already excellent piece.
Overall, I believe this paper is a valuable reflection on the state of preprints and services which review them. Addressing the points I raised, especially the use of more inclusive language with regards to disciplinary diversity, would further elevate its usefulness in the metaresearch discourse. Thank you again for the chance to review.
Signed:
Dr Pen-Yuan Hsing (ORCID ID: 0000-0002-5394-879X)
University of Bristol, United Kingdom
Data availability
I have checked the associated dataset, but still suggest including hyperlinks to the 23 services analysed in the main text of this paper.
Competing interests
No competing interests are declared by me as reviewer.
-
This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/10210714.
This review reflects comments and contributions from Dibyendu Roy Chowdhury, Gary McDowell, Stephen Gabrielson and Ashley Farley. Review synthesized by Stephen Gabrielson.
This study explores the emerging field of preprint review services, which aim to evaluate preprints prior to journal publication, and discusses how these peer-review services might add value to scholarly communications.
Minor comments:
I think that this is a very useful and well thought through paper. Its applicability is wide ranging and as funders begin to think about implementing preprint policies it's helpful to consider the peer review and quality component. This gives funders more opportunity to support and implement use of these tools/organizations. I added a few comments where I think the broader preprint landscape or discussion could be considered.
I think there is an opportunity in the introduction to reference several of the recent studies and surveys conducted to investigate attitudes towards preprints in specific fields. It would also be helpful to have a longer, clearer definition of what the authors mean by preprint review services - particularly because I suspect eLife wasn't included because it may be considered a journal that reviews preprints for journal publication, rather than a service reviewing preprints separate from curation.
In Figure 1, I particularly appreciated highlighting users beyond the traditional scholarly/academic community. I would like to suggest incorporating some related concepts across benefits the other groups - for example, for authors, there is the clear current advantage of cost, and those who are independent researchers, or have less funding available for publication, can use this to disseminate work in a way that is recognized by other scholars as "legitimate".
Also in regard to Figure 1, I don't necessarily think that these are "new forms" of peer review, since peer review still looks like peer review, but maybe "new sources" or "new opportunities". This might be too radical to include but I can't help but think that this can be a way to review and disseminate information outside of the traditional system. A benefit for authors can be "not having to participate in the traditional publishing enterprise".
The logic of not using the term "peer" for platforms that review preprints makes sense. Did the authors consider removing "peer" altogether and comparing "preprint review services" with "journal-based review services"? Looking at this particularly through the lens of the Equity and Inclusion School, the definition of "peer" can be critiqued in the journal system much as in the very valid rationale given here. My concern is that it "others" the preprint reviewers as "not peers". This is just a minor comment/semantics discussion.
The term 'preprint review services' is well-defined and differentiated from 'journal-based peer review'. Additional clarification on the specific criteria used to select the services would be great.
In the third paragraph of section 3 "Overview of preprint review services", the authors describe how with some preprint review services, the "selection of reviewers does not depend on the editor's decision only". Could this be articulated very explicitly - does this mean anyone who is interested can show up, review, and post their review of the preprint? Or are there nuances? I find this confusing with the next sentence. For example, on PREreview I can review preprints with no-one's "permission", but for some of these it sounds like there is some "gatekeeping" of whose review gets posted. Also, the last sentence of this paragraph on self-nomination of reviewers could be expanded. In light of my comments about how self-selection works, perhaps a clear articulation of how this differs from journal processes (e.g. just emailing an editor/the journal to ask to review?) would help.
In section 3 "Overview of preprint review services", PreLights is called out for investing in reviewer training. PREreview does a lot of this as well and I would call it out too.
I think that there is space in this paper to include a few of the studies conducted on assessing the differences between preprints and the journal version of record. While the community is quite concerned with quality control, the data is showing that this concern may be a bit unfounded. Of course, there are many caveats, but I think it's important to highlight.
When referencing Twitter, I'm not sure how important it is to say "X previously known as Twitter"?
Section 4.1 mentions the peer review crisis. It might be important to state what the current peer review crisis is.
Section 4.4 talks about reviewer incentives – I would also be interested in a discussion on the incentives that institutions may be creating to entice faculty to do preprint reviews. Is there anything from the National Academies or HELIOS that mention how institutions can encourage preprint review? There is the "Statement on peer reviewed publications" from cOAlition S that might be worth calling out? https://www.coalition-s.org/statement-on-peer-reviewed-publications/
Should there be a reference to the recent name change of Rapid Reviews: COVID-19 to Rapid Reviews: Infectious Diseases?
In paragraph 6 of section 4.4 ORCID is mentioned. The authors might consider expanding on the ORCID discussion here, to push ORCID for better recognition of some of these other review services so it can be included on the researcher's record. How many of these services are interoperable with ORCID? How can we improve the ability to write preprint reviews to ORCID records?
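(For what it's worth, ORCID's v3.0 member API already exposes a peer-review activity endpoint that services could write to. The sketch below is a rough illustration only: the token, iD and payload values are placeholders, the field names follow my reading of ORCID's peer-review schema, and real use requires ORCID membership plus a registered peer review group ID.)

```python
# Rough sketch: pushing a completed review to a reviewer's ORCID record via
# the v3.0 member API peer-review endpoint. All values are placeholders;
# the payload fields should be checked against ORCID's current documentation.
import requests

ORCID_ID = "0000-0000-0000-0000"   # placeholder reviewer iD
TOKEN = "member-api-token"         # needs the /activities/update scope

payload = {
    "reviewer-role": "reviewer",
    "review-type": "review",
    "review-completion-date": {"year": {"value": "2023"}},
    "review-group-id": "orcid-generated:example-preprint-review-service",
    "convening-organization": {
        "name": "Example Preprint Review Service",
        "address": {"city": "Anywhere", "country": "GB"},
    },
}

resp = requests.post(
    f"https://api.orcid.org/v3.0/{ORCID_ID}/peer-review",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)  # expect 201 Created on success
```

The more services that write reviews to ORCID records this way, the easier it becomes for institutions to recognise preprint reviewing as legitimate scholarly work.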
Section 6 is about how preprint review services fit into the publishing landscape. If eLife doesn't fit into the author's definition of a preprint review service, would it still be worthwhile to mention eLife here as a journal publisher who has taken a new publishing approach to preprints and preprint review?
Section 6 also includes a discussion on how preprint review services might be seen to add more complexity to the peer review system. I think that with more time they will have the opportunity to show that the journal system is no longer fit for this purpose.
Comments on reporting:
Very much appreciate the availability of the data and the detailed methods. I feel like it would be easy to reproduce or complement this study as more services become available.
Suggestions for future studies:
This has a particular use in education about publishing and preprints - combined with the previous four schools of thought paper, it gives a useful framing for teaching about scholarly publication, and so may be useful to look at in the context of training and transparency broadly in science education/professional development.
I would love to see a follow-up with a focus on the cost of running these services. Twenty-three options is great in an era of experimentation but I have to think (with my funder hat on) that these may not be sustainable financially for the long term. There also might be greater opportunities for combining efforts.
Competing interests
The author declares that they have no competing interests.