The Automated Screening Working Group is a group of software engineers and biologists passionate about improving scientific manuscripts on a large scale. Our goal is to process every manuscript in the biomedical sciences as it is being submitted for publication, and to provide customized feedback to improve that manuscript. Our members have created tools that check for common problems in scientific manuscripts, including information needed to improve transparency and reproducibility. We have combined our tools into a single pipeline, called ScreenIT. We're currently using our tools to screen COVID preprints.
Papers are screened by the automated screening tools described in the table below. While automated screening is not a replacement for peer review, tools can help to identify common problems. Examples include failing to state whether experiments were blinded or randomized, not reporting the sex of participants, or misusing bar graphs to display continuous data. Automated tools aren't perfect: they make mistakes, they can't tell whether a paper will be reproducible, and they can't always determine whether a particular criterion is relevant for a given manuscript. Despite these limitations, tools may be useful in drawing readers' attention to factors that are important for transparency, rigor and reproducibility. We hope that by providing fast, customized feedback, our tools may help preprint authors to improve reporting prior to publication.
Tools used to screen preprints
| Tool | Screens For | Link & RRID |
| --- | --- | --- |
| SciScore | Blinding, randomization, sample size calculations, sex/gender, ethics & consent statements, resources, RRIDs | http://sciscore.com RRID:SCR_016251 |
| ODDPub | Open data, open code | https://github.com/quest-bih/oddpub RRID:SCR_018385 |
| Limitation-Recognizer | Author-acknowledged limitations | https://github.com/kilicogluh/limitation-recognizer RRID:SCR_018748 |
| Barzooka | Bar graphs of continuous data | https://quest-barzooka.bihealth.org RRID:SCR_018508 |
| JetFighter | Rainbow color maps | https://jetfighter.ecrlife.org RRID:SCR_018498 |
| Trial Registration Number Screener | Accuracy of clinical trial registration numbers registered in ClinicalTrials.gov | https://github.com/bgcarlisle/TRNscreener RRID:SCR_019211 |
| scite | Retracted citations or citations of papers with errata | https://www.scite.ai RRID:SCR_018568 |
| rtransparent | Protocol registration, conflict of interest disclosures, funding disclosures | https://github.com/serghiou/rtransparent RRID:SCR_019276 |
| Seek and Blastn* | Correct identification of nucleotide sequences | http://scigendetection.imag.fr/TPD52/ RRID:SCR_016625 |
*Some papers may be screened with Seek and Blastn, a semi-automated tool that requires human confirmation of results. Reports for Seek and Blastn are posted separately on PubPeer.
Abbreviations: RRID, research resource identifier
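To illustrate the kind of check a screener like the Trial Registration Number Screener must begin with: ClinicalTrials.gov identifiers follow the pattern "NCT" followed by 8 digits (e.g. NCT04280705), so the first step is locating candidate identifiers in the manuscript text before verifying them against the registry. The sketch below is a minimal, hypothetical illustration of that extraction step only; the function name and pattern are assumptions for this example, not the actual TRNscreener implementation.

```python
import re

# ClinicalTrials.gov identifiers are "NCT" plus exactly 8 digits.
# This is only the extraction step; a real screener would then
# look each identifier up in the registry to confirm it exists.
NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def find_trial_ids(text: str) -> list[str]:
    """Return all ClinicalTrials.gov-style identifiers found in text."""
    return NCT_PATTERN.findall(text)

# The malformed "NCT0123" (too few digits) is not matched.
print(find_trial_ids("Registered as NCT04280705; see also NCT0123."))
# → ['NCT04280705']
```

Keeping extraction separate from registry lookup makes the fast, offline part of the check easy to test on its own.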
Sciety uses the PReF (preprint review features) descriptors to describe key elements of each group's evaluation activities, helping readers to interpret and compare their evaluations.
- Review requested by
- Reviewer selected by: Not applicable
- Public interaction
- Inclusion of author response
- Review coverage: Specific aspects
- Reviewer identity known to: Not applicable
- Competing interests: Not included
Public user content licensed CC-BY-NC 4.0 unless otherwise specified.