How transparent and reproducible are studies that use animal models of opioid addiction?

Abstract

The reproducibility crisis in psychology has prompted other fields to examine the reliability of their own findings. Many of the research-design weaknesses that undermine reproducibility also threaten translational potential (Fergusson et al., 2019). In preclinical addiction research, rates of translation have been disappointing (Schulz et al., 2016). We tallied indices of transparency and of accurate, thorough reporting in animal models of opioid addiction from 2019 to 2023. By examining the prevalence of these practices, we aimed to understand whether efforts to improve reproducibility are relevant to this field. For 255 articles, we report the prevalence of transparency measures such as preregistration, registered reports, open data, and open code, as well as compliance with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. We also report rates of bias-minimization practices (randomization, masking, and data exclusion), sample size calculations, and adjustments for multiple comparisons. Lastly, we estimated the accuracy of test-statistic reporting using statcheck. All the transparency measures and the ARRIVE guideline items had low prevalence, including no cases of study preregistration and no cases where authors shared their analysis code. Similarly, levels of bias-minimization practices and sample size calculations were unsatisfactory. In contrast, adjustments for multiple comparisons were implemented in most articles (76.5%). Finally, statcheck suggested that 11% of articles contained statistical significance errors, and half contained more minor p-value inconsistencies. We recommend that researchers, journal editors, and others take steps to improve study reporting, to facilitate both replication and translation.
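The statcheck procedure mentioned above works by recomputing a p-value from the reported test statistic and degrees of freedom, then comparing it with the p-value the authors reported: a mismatch is an "inconsistency", and a mismatch that flips the significance decision is a "decision error". The sketch below is not statcheck itself (which is an R package that parses APA-formatted statistics from text); it is a minimal, hypothetical Python illustration of the same logic for a z statistic, with made-up function names and tolerances.

```python
import math


def two_tailed_p_from_z(z: float) -> float:
    """Two-tailed p-value for a standard-normal (z) test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))


def check_reported_p(z: float, reported_p: float,
                     alpha: float = 0.05, tol: float = 0.0005) -> str:
    """Compare a reported p-value with one recomputed from z.

    Returns 'consistent' if they agree within tol, 'decision error'
    if the disagreement changes significance at alpha, and
    'inconsistency' for any other mismatch.
    """
    recomputed = two_tailed_p_from_z(z)
    if abs(recomputed - reported_p) <= tol:
        return "consistent"
    if (recomputed < alpha) != (reported_p < alpha):
        return "decision error"
    return "inconsistency"
```

For example, a reported "z = 2.5, p = .012" recomputes to p ≈ .0124 and is flagged as consistent, whereas "z = 2.5, p = .20" would be a decision error, since the recomputed p is below .05 while the reported one is not.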
