Our lists

List name: Evaluated articles
Number of articles: 292


One of the missions of RR:C19 is to accelerate peer review of COVID-19-related research across a wide range of disciplines and to deliver near-real-time, dependable scientific information that policymakers, scholars, and health leaders can use. We do this by soliciting rapid peer reviews of time-sensitive and noteworthy preprints, which are then published online and linked to the preprint servers that host the manuscripts. Transparency and reliability are core values of RR:C19. Each review is published online and assigned a DOI, making the work fully citable and claimable on reviewers' ORCID and Publons accounts. Our standard practice is to publish reviewers' full names and affiliations, although we allow reviewers to publish their reviews anonymously upon request. RR:C19 assesses manuscripts with the same rigour as top journals. Assessments include how well the paper's conclusions are substantiated by the research, graded on the RR:C19 Strength of Evidence Scale, with supporting comments.

Evaluation model

  • Strong: The main study claims are very well-justified by the data and analytic methods used. There is little room for doubt that the study produced results and conclusions very similar to those of the hypothetical ideal study. The study's main claims should be considered conclusive and actionable without reservation.

  • Reliable: The main study claims are generally justified by its methods and data. The results and conclusions are likely to be similar to those of the hypothetical ideal study. There are some minor caveats or limitations, but they do not change the major claims of the study. The study provides sufficient strength of evidence on its own that its main claims should be considered actionable, with some room for future revision.

  • Potentially informative: The main claims made are not strongly justified by the methods and data, but may yield some insight. The results and conclusions of the study may resemble those of the hypothetical ideal study, but there is substantial room for doubt. Decision-makers should weigh this evidence only with a thorough understanding of its weaknesses, alongside other evidence and theory, and should not consider it actionable unless those weaknesses are clearly understood and other theory and evidence further support it.

  • Not informative: The flaws in the data and methods of this study are sufficiently serious that they do not substantially justify its claims. It is not possible to say whether the results and conclusions would match those of the hypothetical ideal study. The study should not be considered as evidence by decision-makers.

  • Misleading: Serious flaws and errors in the methods and data render the study's conclusions misinformative. The hypothetical ideal study is at least as likely to reach the opposite of this study's results and conclusions as to agree with them. Decision-makers should not consider this evidence in any decision.


Sciety uses the PReF (preprint review features) descriptors to describe key elements of each Group's evaluation activities, helping readers to interpret and compare their evaluations.

Review requested by
Reviewer selected by: Editor, service, or community
Public interaction
Inclusion of author response
Other scale or rating
Review coverage: Complete paper
Reviewer identity known to: Editor or service
Competing interests


We are prototyping a new type of peer curation network consisting of graduate students and field specialists. This cohort identifies relevant preprint content for peer review, assisted by a new natural language processing tool developed by COVIDScholar, an initiative of UC Berkeley and Lawrence Berkeley National Lab.

Read more about Rapid Reviews COVID-19.

Content license

Content is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0) unless otherwise specified.