Stop using d′ and start using da: Part I. Simulation explorations of single- and multi-point recognition measures of sensitivity

Abstract

In this article, we address the seemingly high prevalence of false discoveries in recognition-memory research. The challenge is to find a valid measure that effectively separates the contribution of sensitivity (accuracy) from that of bias. A stark realization emerging from Monte Carlo simulations is that in many tasks sensitivity is confounded with bias, including tasks that involve binary judgments about single items presented at test (Rotello et al., 2008). As a solution, we propose a version of a lesser-known measure, d-sub-a (da). Through comprehensive Monte Carlo simulations, we systematically evaluated the validity of the common measures Pr = HR − FAR, A′, and d′, alongside da. Across thousands of simulation iterations, we randomly sampled signals from Lure and Target distributions and used t-tests to compare iso-sensitive conditions that differed only in bias. A valid measure should yield significant results at a rate of approximately 5%. We investigated the influence of several parameters, including the form of the distributions, their variability, the distance between them, the placement of the response criteria, the sample size, and the number of trials. The common measures exhibited alarmingly high false-discovery rates, exceeding 5%, and these rates rose to 100% with larger sample sizes and larger numbers of trials. In contrast, with only a few minor exceptions, da was not affected by changes in bias. Our findings support adopting da as the default measure of sensitivity.
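The sketch below illustrates the kind of simulation the abstract describes: two iso-sensitive conditions that differ only in criterion placement are compared with t-tests, and the proportion of significant results estimates each measure's false-discovery rate (about 5% for a valid measure). It is not the authors' code; the distribution parameters, the two criterion placements, the assumed zROC slope used by da, and the sample sizes are illustrative assumptions, and A′ is omitted for brevity.

```python
# Minimal sketch of the simulation logic, not the authors' implementation.
# Target N(1.5, 1.25^2), lure N(0, 1), criteria 0.25 vs. 1.25, and the slope S
# assumed by d_a are all illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_ITER, N_SUB, N_TRIALS = 2000, 30, 100      # iterations, participants, trials per item type
MU_T, SIGMA_T = 1.5, 1.25                    # unequal-variance target distribution
S = 1.0 / SIGMA_T                            # zROC slope assumed when computing d_a

def simulate_rates(criterion):
    """Hit and false-alarm rates for one simulated participant at a fixed criterion."""
    targets = rng.normal(MU_T, SIGMA_T, N_TRIALS)
    lures = rng.normal(0.0, 1.0, N_TRIALS)
    # small correction keeps rates away from 0 and 1 so z-transforms stay finite
    hr = (np.sum(targets > criterion) + 0.5) / (N_TRIALS + 1)
    far = (np.sum(lures > criterion) + 0.5) / (N_TRIALS + 1)
    return hr, far

def measures(hr, far):
    z_h, z_f = stats.norm.ppf(hr), stats.norm.ppf(far)
    pr = hr - far                                           # Pr = HR - FAR
    d_prime = z_h - z_f                                     # equal-variance d'
    d_a = np.sqrt(2.0 / (1.0 + S**2)) * (z_h - S * z_f)     # d_a with assumed slope S
    return pr, d_prime, d_a

false_discoveries = {"Pr": 0, "d'": 0, "da": 0}
for _ in range(N_ITER):
    # two iso-sensitive conditions differing only in response bias (criterion)
    lenient = np.array([measures(*simulate_rates(0.25)) for _ in range(N_SUB)])
    strict = np.array([measures(*simulate_rates(1.25)) for _ in range(N_SUB)])
    for j, name in enumerate(false_discoveries):
        p = stats.ttest_ind(lenient[:, j], strict[:, j]).pvalue
        false_discoveries[name] += p < 0.05

for name, k in false_discoveries.items():
    print(f"{name}: false-discovery rate = {k / N_ITER:.3f}")  # ~.05 for a valid measure
```

Under these assumptions, Pr and d′ register spurious "sensitivity" differences between the two bias conditions at rates well above 5%, whereas da, computed with the correct slope, stays near the nominal rate.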
