Usability assessment of point-of-care diagnostics for infectious diseases in low-resource settings: a scoping review of current practices
Abstract
Background: Usability is a critical determinant of the successful implementation of diagnostic technologies, particularly for point-of-care (POC) and self-testing tools intended for resource-limited settings. Despite its importance, no prior review has systematically examined how usability is evaluated during diagnostic test development and implementation. This scoping review synthesizes current methods and practices used to assess the usability of infectious disease diagnostics.

Methods: We conducted a scoping review following PRISMA-ScR and JBI guidance. Eligible studies reported usability evaluations of molecular or immunoassay-based diagnostics intended for decentralized or low-resource settings. We searched five databases and nine additional sources, including the WHO Prequalification of In Vitro Diagnostics registry. Data were extracted on study characteristics, user groups, settings, evaluation methods, sampling strategies, and reported usability outcomes.

Results: We identified 103 studies, most focused on HIV, COVID-19, malaria, or hepatitis C and conducted in a limited number of countries. Self-testing evaluations generally used larger samples and assessed more outcome domains than evaluations involving professional users; however, sample size justification was rare, and participant selection methods were often unclear. Most studies relied on non-standardized questionnaires; few used validated instruments or qualitative approaches. Usability outcomes most commonly addressed ease of use and effectiveness, whereas domains such as safety, memorability, and satisfaction were assessed less consistently. WHO prequalification dossiers provided minimal methodological detail. Synthesizing regulatory guidance with the review findings, we developed a usability assessment framework comprising core domains (effectiveness, efficiency, errors and use safety), complementary domains (learnability, memorability, satisfaction), and contextual domains capturing environmental and system-level factors.

Conclusions: Substantial methodological heterogeneity exists in usability assessments of diagnostic tests. Standardized outcome definitions, broader methodological approaches, and improved reporting are needed to strengthen the usability evidence base for implementation. A Delphi consensus process is planned to define core usability outcomes and recommended methodologies for diagnostic evaluation.