Systematic Measurement Error in the Intervention Outcome
Abstract
Using sum scores as outcomes in intervention studies can bias effect estimates, especially when systematic measurement error (SME) such as response styles or response shifts is present. Although both sources of SME are common in psychological assessment, their joint influence on intervention effect estimation is poorly understood. This simulation study evaluated bias across three measurement approaches: a latent variable model accounting for response style, a latent variable model ignoring both response style and response shift, and a sum-score approach. SME was operationalized as the co-occurrence of a response style induced by reversed items and intervention-induced response shifts. Response style was generated using two established bifactor mechanisms, the correlated-traits-correlated-methods minus 1 (CTC(M–1)) approach and the random-intercept (RI) approach; additional conditions allowed the method factor to correlate with the growth factor. Intervention effects were estimated using second-order growth curve models for the latent variable approaches and first-order models for sum scores. The primary performance measures were mean absolute bias and mean relative bias. Under the CTC(M–1) mechanism, the size of the method factor loadings had minimal impact, whereas response shifts produced substantial bias, particularly for sum scores. Under the RI mechanism, bias in the latent variable model that ignored response style varied with the size of the method factor loadings, although the overall effect of the response style remained modest. Allowing the method and growth factors to correlate had little influence on bias. Across mechanisms, response shifts introduced far greater bias than response styles, and sum scores consistently performed worst. Co-occurring SME sources did not exacerbate bias.
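To make the two generating mechanisms concrete, the following is a minimal sketch of how a reversed-item response style could enter the data under each bifactor variant. Every specific in it (continuous indicators, a single wave, eight items with four reversed, the loading and variance values) is an assumption for illustration; the abstract does not report the paper's actual generating model.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_persons, n_items = 500, 8
reversed_item = np.array([False] * 4 + [True] * 4)  # assume last four items are reversed

trait = rng.normal(0.0, 1.0, n_persons)   # substantive latent variable
method = rng.normal(0.0, 0.5, n_persons)  # method factor (response style)

trait_load = np.full(n_items, 0.7)         # assumed trait loadings
sign = np.where(reversed_item, -1.0, 1.0)  # reversed items load negatively on the trait

# RI mechanism: the method factor loads equally on ALL items.
method_load_ri = np.full(n_items, 0.3)
# CTC(M-1) mechanism: the method factor loads only on the reversed
# (non-reference) items; the regular items serve as the reference method.
method_load_ctcm1 = np.where(reversed_item, 0.3, 0.0)

def simulate(method_load):
    """Generate item responses from trait + method factor + unique noise."""
    return (sign * trait_load * trait[:, None]
            + method_load * method[:, None]
            + rng.normal(0.0, 0.5, (n_persons, n_items)))

y_ri = simulate(method_load_ri)
y_ctcm1 = simulate(method_load_ctcm1)

# Naive sum score: reflect the reversed items, then add everything up.
# The response-style variance is absorbed into the score rather than
# separated out, which is one route to biased effect estimates.
sum_score = (sign * y_ri).sum(axis=1)
```

In the study's longitudinal setting these mechanisms would be embedded in multi-wave data analyzed with second-order growth curve models; the one-wave sketch only shows where the method factor enters the items.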
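The two performance measures are standard simulation criteria. Assuming \(\hat{\beta}_r\) denotes the intervention effect estimate in replication \(r\) of \(R\) and \(\beta\) the true effect, one common formulation (the paper's exact definitions are not given in the abstract) is

\[
\mathrm{MAB} = \frac{1}{R}\sum_{r=1}^{R}\bigl|\hat{\beta}_r - \beta\bigr|,
\qquad
\mathrm{MRB} = \frac{1}{R}\sum_{r=1}^{R}\frac{\hat{\beta}_r - \beta}{\beta}.
\]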