Software Quality Indicators: extraction, categorisation and recommendations from canonical sources

Abstract

Research software plays a central role in modern science, and its quality is increasingly recognized as essential for reproducibility, sustainability, and trust. Numerous initiatives have proposed indicators to guide quality assessment, yet these indicators are dispersed across domains and vary in scope, terminology, and practical use. This work presents a curated catalogue of software quality indicators tailored to the needs of research software. Developed during BioHackathon Europe 2024 and refined in collaboration with the ELIXIR Tools Platform and EVERSE project, the catalogue consolidates and structures indicators from a range of authoritative sources.

Over 300 indicators were gathered and systematically reviewed for relevance, clarity, and implementation feasibility. Each was classified into thematic categories—such as Documentation, Security, Usability, and Sustainability—and annotated with target applicability, ease of evaluation, and recommended actions. Redundant, overly abstract, or narrowly scoped indicators were excluded or flagged, while additional tags highlighted cross-cutting concerns such as licensing, testing, and community practices.

The resulting open dataset, available as a structured spreadsheet, includes detailed metadata and decision criteria to support reuse, adaptation, and extension. The catalogue offers a foundation for context-specific assessment frameworks. Intended users include research software developers and maintainers, evaluators, and developers of quality-focused tools and guidelines.
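Because the catalogue is distributed as a structured spreadsheet with per-indicator annotations (category, applicability, ease of evaluation, recommended actions), it lends itself to programmatic filtering. The sketch below shows one possible way to load and query such a file with pandas; the filename and column names are illustrative assumptions, not the catalogue's actual schema.

```python
# Minimal sketch of querying an indicator catalogue exported as a spreadsheet.
# The filename and column names ("category", "ease_of_evaluation") are
# illustrative assumptions, not the catalogue's actual schema.
import pandas as pd

# Load the catalogue (hypothetical filename).
catalogue = pd.read_csv("quality_indicators.csv")

# Keep indicators from one thematic category that are easy to evaluate.
docs_easy = catalogue[
    (catalogue["category"] == "Documentation")
    & (catalogue["ease_of_evaluation"] == "easy")
]
print(docs_easy)

# Summarise how many indicators fall into each thematic category.
print(catalogue.groupby("category").size().sort_values(ascending=False))
```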
