On the (Non-)Precision of Psychological Measures in Security and Privacy Research: Reliability Beyond Reporting Alpha


Abstract

In this paper, we argue that the trustworthiness of empirical Security and Privacy (SP) research depends on the precision of the psychological measures it uses. Because many core variables in SP are latent, imprecise measurement does not merely add noise; it undermines effect size estimates, threatens replicability, and can render substantive interpretations invalid. We contend that current practice often reduces reliability to Cronbach’s alpha, despite its restrictive assumptions and limited suitability across study designs. We therefore call for reliability evidence that matches the intended use of scale scores: reporting McDonald’s omega for internal consistency, and establishing test-retest reliability via intra-class correlations (ICCs) for designs that infer change (e.g., pre-post intervention studies). Using Monte Carlo simulations, we illustrate how poor reliability attenuates true correlations and can even invert observed intervention effects, making conclusions about training effectiveness speculative when stability evidence is absent. We discuss practical recommendations and propose shared infrastructure for accumulating stability evidence to enable cumulative, valid inference in SP research.
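The attenuation mechanism described in the abstract can be sketched with a minimal Monte Carlo simulation. The specific values for the true latent correlation and the reliability below are illustrative assumptions, not figures from the paper; the simulation simply shows how measurement error shrinks an observed correlation toward Spearman's attenuation bound, r_obs = r_true × √(r_xx · r_yy).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_r = 0.5       # assumed true correlation between the latent constructs
reliability = 0.6  # assumed reliability of each observed scale score

# Draw the two latent (error-free) variables with correlation true_r.
cov = np.array([[1.0, true_r], [true_r, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Add measurement error so each observed score has the chosen reliability:
# reliability = Var(true) / (Var(true) + Var(error)), with Var(true) = 1.
error_sd = np.sqrt((1.0 - reliability) / reliability)
observed = latent + rng.normal(0.0, error_sd, size=latent.shape)

obs_r = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
# Spearman attenuation: r_obs ≈ true_r * sqrt(reliability * reliability)
expected = true_r * reliability
print(f"observed r = {obs_r:.3f}, attenuation prediction = {expected:.3f}")
```

With a reliability of 0.6 for both scores, a true correlation of 0.5 is observed at roughly 0.30, i.e., the effect size is understated by 40% even with a very large sample.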