A Quantitative Study of Inappropriate Image Duplication in the Journal Toxicology Reports


Abstract

Inappropriate image duplication is a type of scientific error that can be detected by examining the published literature. Few estimates of the frequency of this problem have been published. This study aimed to quantify the rate of image duplication in the journal Toxicology Reports. In total, 1,540 unique articles (identified by DOI) were checked for the presence of research-related images (microscopy, photography, western blot scans, etc.). Each research paper containing at least one such image was scrutinized for inappropriate duplications, first by manual review alone and subsequently with the assistance of an AI tool (ImageTwin.ai). Overall, Toxicology Reports published 715 papers containing relevant images, and 115 of these papers (16%) contained inappropriate duplications. Screening papers with ImageTwin.ai increased the number of inappropriate duplications detected: 41 of the 115 were missed during the manual screen and found only with the aid of the software. In summary, the rate of inappropriate image duplication in this journal is estimated at 16%; most of these errors could have been detected at peer review by careful reading of the paper and the related literature. The use of ImageTwin.ai increased the number of problematic duplications detected.
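
For context, the headline figures can be reproduced directly from the counts reported in the abstract. The short sketch below is a minimal illustration (not part of the study's own analysis code) that recomputes the duplication rate and the share of duplications found only with the aid of ImageTwin.ai.

    # Counts as reported in the abstract (illustrative only, not the authors' script).
    papers_with_images = 715        # papers containing at least one relevant image
    papers_with_duplications = 115  # papers with inappropriate duplications
    found_only_with_ai = 41         # duplications missed manually, caught with ImageTwin.ai

    duplication_rate = papers_with_duplications / papers_with_images
    ai_only_share = found_only_with_ai / papers_with_duplications

    print(f"Duplication rate: {duplication_rate:.1%}")            # ~16.1%
    print(f"Found only with AI assistance: {ai_only_share:.1%}")  # ~35.7%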

Article activity feed

  1. This Zenodo record is a permanently preserved version of a PREreview. You can view the complete PREreview at https://prereview.org/reviews/8402209.

    This review reflects comments and contributions from Martyn Rittman and Allie Tatarian. Review synthesized by Stephen Gabrielson.

    In this study, both a human reviewer and an AI tool screened papers from the journal Toxicology Reports for inappropriate image duplication.

    Major comments:

    • Having a single human reviewer at both steps (the initial review for duplicated images, and the review of AI-flagged duplications) is a potential source of bias. The safest approach would be to have three image reviewers: two who conduct the initial review and assess the AI-flagged hits, plus a third on hand to act as a tiebreaker in case the two reviewers disagree. Or, since this could be a substantial …