Navigating unmeasured confounding in non-experimental psychological research: A practical guide to computing and interpreting the E-value
Abstract
Randomized experiments remain the gold standard for establishing causality, yet ethical and practical constraints often require researchers to rely on observational data. Although psychologists recognize that correlation does not imply causation, the conventional cautionary statements about correlation typically appended to articles have not sufficiently advanced psychological science, particularly in subfields such as developmental and personality psychology that rely predominantly on observational data. Sensitivity analyses commonly used in biostatistics and epidemiology offer powerful tools for quantifying the risk of unmeasured confounding in observational data analysis, essentially encouraging applied researchers to assess how strongly an unmeasured confounder would have to be associated with both the predictor and the outcome to negate an observed predictor-outcome association (i.e., reduce the effect to null). This tutorial explores the frequently overlooked but critical issue of unmeasured confounding in psychological research and introduces psychologists to the E-value, a novel and straightforward method for assessing the robustness of exposure-outcome associations to unmeasured confounding. We demonstrate the application of the E-value in common psychological research scenarios in R and discuss its strengths, limitations, and recommended best practices. By more explicitly considering unmeasured confounding and incorporating sensitivity-analysis techniques such as the E-value into their methodological toolkits, psychologists can more accurately assess and more transparently report research findings, particularly in subfields that rely primarily on observational data.
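As a preview of the computation the tutorial walks through (the article's own examples use R), the sketch below implements the standard E-value formula of VanderWeele and Ding for a point estimate on the risk-ratio scale; the function name and this Python rendering are illustrative, not the article's code.

```python
from math import sqrt

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio RR: the minimum strength of
    association, on the risk-ratio scale, that an unmeasured confounder
    would need to have with both the exposure and the outcome to fully
    explain away the observed association (E = RR + sqrt(RR * (RR - 1)))."""
    # Protective effects (RR < 1) are first inverted so that RR >= 1.
    rr_star = rr if rr >= 1 else 1 / rr
    return rr_star + sqrt(rr_star * (rr_star - 1))

# An observed risk ratio of 3.9 yields an E-value of about 7.26: only an
# unmeasured confounder associated with both predictor and outcome by a
# risk ratio of at least 7.26 could reduce the observed effect to null.
print(round(e_value(3.9), 2))
```

The same formula is applied to the confidence-interval limit closer to the null to obtain an E-value for the interval; if the interval already includes a risk ratio of 1, that E-value is 1.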