Assessing Measurement Invariance Across Multiple Populations Using Multigroup Confirmatory Factor Analysis (MG-CFA)

Abstract

Ensuring the validity and reliability of psychometric instruments across diverse populations is a critical concern in psychological and educational measurement. This study examined measurement invariance using Multigroup Confirmatory Factor Analysis (MG-CFA) to determine whether a psychological scale functioned equivalently across multiple population groups. The study followed a systematic approach, beginning with preliminary data screening and assumption testing: less than 2% of values were missing, and these were addressed using multiple imputation. The data met normality assumptions, with skewness and kurtosis values within the acceptable range of ±2.0 (Kline, 2016). Confirmatory Factor Analysis (CFA) was conducted separately for each group, and fit indices indicated good model fit (CFI = 0.952, RMSEA = 0.041, SRMR = 0.037), confirming the validity of the baseline model. Measurement invariance testing showed that configural invariance was supported (CFI = 0.954, RMSEA = 0.042, SRMR = 0.035), indicating that the factor structure was consistent across groups. Metric invariance was achieved (ΔCFI = 0.006, ΔRMSEA = 0.003), demonstrating that factor loadings were equivalent across populations and that the scale measured the construct similarly in each group. Scalar invariance was also established (ΔCFI = 0.008, ΔRMSEA = 0.004), permitting meaningful latent mean comparisons across groups. Further analysis revealed statistically significant differences in latent means: relative to the reference group, Group 2 exhibited a significantly lower latent mean (-0.35, p = 0.002), while Group 3 had a higher latent mean (0.20, p = 0.015). These findings suggest variability in the measured construct, with implications for the interpretation of assessment results across different groups.
The findings have theoretical and practical implications for psychological assessment, educational evaluation, and policy formulation. The confirmation of measurement invariance ensures that comparisons made using the scale are valid and free from measurement bias, which is particularly relevant for cross-cultural research, standardized testing, and clinical assessments. Additionally, this study highlights the need for researchers and practitioners to incorporate measurement invariance testing as a standard practice in psychometric validation studies. The study concludes with recommendations for the development and refinement of psychometric instruments, the use of MG-CFA in large-scale assessments, and capacity building in statistical techniques for researchers and practitioners. These findings contribute to the broader field of measurement theory by reinforcing the importance of rigorous validation techniques in ensuring fairness and accuracy in psychological and educational assessments.
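The sequential invariance decisions summarized in the abstract (configural, then metric, then scalar) rest on change-in-fit criteria. The sketch below encodes the commonly recommended cutoffs (|ΔCFI| ≤ 0.01, Cheung & Rensvold, 2002; |ΔRMSEA| ≤ 0.015, Chen, 2007) as an assumption; the study reports ΔCFI and ΔRMSEA consistent with these thresholds, but its exact decision rule is not stated, and the function name is illustrative.

```python
def invariance_step_supported(delta_cfi: float, delta_rmsea: float,
                              cfi_cutoff: float = 0.01,
                              rmsea_cutoff: float = 0.015) -> bool:
    """True if the more constrained model (e.g., equal loadings or equal
    intercepts) degrades fit by no more than the chosen cutoffs."""
    return abs(delta_cfi) <= cfi_cutoff and abs(delta_rmsea) <= rmsea_cutoff

# Values reported in the abstract:
print(invariance_step_supported(0.006, 0.003))  # metric step
print(invariance_step_supported(0.008, 0.004))  # scalar step
```

Under these cutoffs, both the metric step (ΔCFI = 0.006, ΔRMSEA = 0.003) and the scalar step (ΔCFI = 0.008, ΔRMSEA = 0.004) are retained, which matches the abstract's conclusions.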
