Stop using d′ and start using da: Part II. Empirical Recognition Memory Data Reveal Type-I Error Rates of Different Sensitivity Measures

Abstract

The selection of an accuracy (sensitivity) measure is a pivotal decision faced by recognition-memory researchers. Despite the abundance of measures developed over the years with the intention of measuring sensitivity independently of participants' bias, Monte Carlo simulations found all commonly used sensitivity measures to be invalid because they are confounded with bias (Rotello et al., 2008; Levi et al., 2024). The one valid measure was our proposed version of the signal-detection measure d-sub-a (denoted da). Empirical confirmation in real-world experiments is critical, however, to establish the validity of any measure, including da. The goal of the current investigation was to use empirical data to test the validity of da, as well as of other common sensitivity measures, as indices of sensitivity. We ran a large-scale recognition experiment in which sensitivity was not manipulated at encoding; bias was manipulated at test using implied base rates. For a valid measure, erroneous significant results should be observed at a rate of approximately 5% when comparing iso-sensitive conditions. We ran the experiment repeatedly to derive Type I error rates. da, but none of the other measures (Pr, A′, or d′), demonstrated the desired ~5% Type I error rate across different sample sizes. Together with the results of our simulations, these new findings provide unassailable evidence that da is a valid sensitivity measure for recognition-memory tasks, offering an easy solution to the prevailing measurement crisis in recognition memory.
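
The abstract does not spell out how the four measures are computed, so the following is a minimal sketch for orientation only, using standard signal-detection formulas rather than the authors' own code. It assumes a hit rate H, a false-alarm rate F, and a zROC slope s for da; in practice s must be estimated from the data (e.g., from confidence-rating ROCs), and the hypothetical values in the usage example are illustrative.

```python
# Illustrative sketch (not the authors' implementation): textbook formulas for
# the sensitivity measures named in the abstract, computed from a hit rate H
# and a false-alarm rate F. The zROC slope `s` required by da is an assumed
# input here.
from scipy.stats import norm


def d_prime(H: float, F: float) -> float:
    """Equal-variance signal-detection measure: z(H) - z(F)."""
    return norm.ppf(H) - norm.ppf(F)


def d_a(H: float, F: float, s: float) -> float:
    """Unequal-variance measure: sqrt(2 / (1 + s^2)) * (z(H) - s * z(F)),
    where s is the slope of the zROC."""
    return (2.0 / (1.0 + s ** 2)) ** 0.5 * (norm.ppf(H) - s * norm.ppf(F))


def Pr(H: float, F: float) -> float:
    """Two-high-threshold discrimination index: H - F."""
    return H - F


def A_prime(H: float, F: float) -> float:
    """'Nonparametric' index A' (form for the case H >= F)."""
    return 0.5 + ((H - F) * (1.0 + H - F)) / (4.0 * H * (1.0 - F))


if __name__ == "__main__":
    H, F, s = 0.75, 0.25, 0.8  # hypothetical rates and zROC slope
    print(f"d' = {d_prime(H, F):.3f}, da = {d_a(H, F, s):.3f}, "
          f"Pr = {Pr(H, F):.3f}, A' = {A_prime(H, F):.3f}")
```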
