Code review in practice: A checklist for computational reproducibility and collaborative research in ecology and evolution
Abstract
Ensuring that research, along with its data and code, is credible and remains accessible is crucial for advancing scientific knowledge—especially in ecology and evolutionary biology, where the climate crisis and biodiversity loss accelerate and demand urgent, transparent science. Yet, code is rarely shared alongside scientific publications, and when it is, unclear implementation and insufficient documentation often make it difficult to use. Code review—whether as self-assessment or peer review—can improve two key aspects of code quality: reusability, i.e., ensuring that the code is technically functional and well documented, and validity, i.e., ensuring that the code faithfully implements the intended analyses. While assessing validity requires domain-specific methodological expertise, code review for reusability can be conducted by anyone with a basic understanding of programming practices. Here, we introduce a checklist-based, customisable approach to code review that focuses on reusability. Informed by best practices in software development and recommendations from commentary pieces and blog posts, the checklist organises specific review prompts around seven key attributes of high-quality reusable scientific code: Reporting, Running, Reliability, Reproducibility, Robustness, Readability, and Release. By defining and structuring these principles of code review and turning them into a practical tool, our template guides researchers through a systematic evaluation that remains flexible enough to be tailored to specific needs. This includes providing researchers with a clear path to proactively improve their own code. Ultimately, this approach to code review aims to reinforce reproducible coding practices and to strengthen both the credibility and collaborative potential of research.
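To make the checklist's structure concrete, below is a minimal sketch of how the seven attributes and their review prompts could be represented programmatically. The seven attribute names are taken from the abstract; the data structure, function names, and example prompts are hypothetical illustrations, not the template itself.

```python
# Minimal sketch of a seven-attribute code-review checklist.
# Attribute names follow the abstract; all prompts are hypothetical examples.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    prompt: str            # a specific review question
    satisfied: bool = False
    notes: str = ""

# Each attribute maps to a list of illustrative review prompts.
checklist: dict[str, list[ChecklistItem]] = {
    "Reporting":       [ChecklistItem("Do the scripts match the analyses described in the paper?")],
    "Running":         [ChecklistItem("Does the code run on a clean machine without manual fixes?")],
    "Reliability":     [ChecklistItem("Are errors and warnings handled or documented?")],
    "Reproducibility": [ChecklistItem("Are package versions and random seeds recorded?")],
    "Robustness":      [ChecklistItem("Does the code avoid hard-coded absolute paths?")],
    "Readability":     [ChecklistItem("Are variables and functions named descriptively?")],
    "Release":         [ChecklistItem("Is the code archived with a licence and a DOI?")],
}


def summarise(cl: dict[str, list[ChecklistItem]]) -> None:
    """Print a per-attribute completion summary for a review."""
    for attribute, items in cl.items():
        done = sum(item.satisfied for item in items)
        print(f"{attribute}: {done}/{len(items)} prompts satisfied")


summarise(checklist)
```

Representing the checklist as structured data rather than free text would let a reviewer tally progress per attribute and tailor the prompts to a given project, which mirrors the customisability the abstract describes.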