“May the Force be with you!” – Measurement invariance and comparability of constructs across rating scale and forced choice personality questionnaires

Abstract

Multidimensional Forced-Choice (MFC) response formats have been explored as an alternative to standard Rating Scale (RS) formats to mitigate the inherent problems of intentional and unintentional response distortion in personality assessment. With normative scoring procedures (e.g., the Thurstonian IRT approach by Brown, 2011), MFC test scores now allow interpersonal comparisons, resolving the problem of ipsative data that classical scoring procedures yield for MFC response formats. However, it is still not fully understood whether MFC questionnaires are genuinely free of response distortions, especially socially desirable responding (SDR). The "fake-proofness" of MFC questionnaires has been claimed repeatedly, yet research findings have not settled the question conclusively. Typically, MFC and RS test scores are compared directly via correlations, but the findings remain ambiguous. To shed additional light on this issue, we hypothesize that, given the substantial differences in item format and response process between the two formats, the underlying constructs measured may differ, which could explain the so far inconclusive results obtained when test scores are compared to evaluate their relative psychometric quality. It is known that the effects of SDR in RS questionnaires can be modelled as a bifactor (method) factor that accumulates variance specifically attributable to SDR. If MFC questionnaires are truly free of SDR effects, a comparable bifactor or method factor should not be present in MFC data; however, its presence has not yet been examined for MFC formats. To test whether both formats capture comparable constructs and cover the same latent space, we conducted multi-group confirmatory factor analyses, drawing on the framework of measurement invariance, across two separate samples that completed the MFC and the RS version of the same questionnaire. We analyzed factor scores derived from identical hierarchical Big Five domain and facet models in samples of N = 398 (RS) and N = 547 (MFC). Results demonstrate the existence of a bifactor or method factor in both the MFC and the RS format, which, however, differs substantially between the two formats. The differential nature of this bifactor is explored and discussed further with regard to its potential origin, its influence on test scores, and the comparability of constructs across MFC and RS response formats.

Keywords: multidimensional forced choice, Thurstonian IRT, construct comparison, measurement invariance
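To make the measurement invariance framework referred to in the abstract concrete, a minimal sketch of the standard hierarchy of invariance constraints for a multi-group factor model is given below. The notation is generic and not taken from the article itself; the group index g stands for the response format (RS vs. MFC).

x_{ig} = \tau_g + \Lambda_g \eta_{ig} + \varepsilon_{ig}, \qquad \varepsilon_{ig} \sim N(0, \Theta_g)

Configural invariance: the same pattern of fixed and free loadings in \Lambda_g across groups.
Metric (weak) invariance: \Lambda_{RS} = \Lambda_{MFC}.
Scalar (strong) invariance: additionally \tau_{RS} = \tau_{MFC}.
Strict invariance: additionally \Theta_{RS} = \Theta_{MFC}.

Each level is nested in the previous one, so comparing the fit of successive models indicates how far construct comparability across the two response formats holds.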
