Investigating the Measurement Precision of the Montreal Cognitive Assessment (MoCA) for Cognitive Screening in Parkinson’s Disease Through Item Response Theory

Abstract

Background: The Montreal Cognitive Assessment (MoCA) is widely used to evaluate global cognitive function; however, its measurement precision in heterogeneous populations, particularly among patients with Parkinson’s disease (PD), remains underexplored. Methods: In this multicenter cross-sectional study, we examined the psychometric properties of the Brazilian Portuguese MoCA in 484 PD patients (age range, 26–90 years; mean ± SD, 59.9 ± 11.1 years; disease duration range, 1–35 years; mean ± SD, 8.7 ± 5.4 years) using Item Response Theory (IRT). The Graded Response Model (GRM) was used to estimate item difficulty and discrimination parameters, and differential item functioning (DIF) with respect to age and education was examined with a Multiple Indicators Multiple Causes (MIMIC) model. Results: The MoCA demonstrated essential unidimensionality and robust model fit. GRM analyses showed that items in the Attention and Naming domains had high discrimination, indicating sensitivity to subtle cognitive deficits, whereas Memory items showed lower discrimination. Orientation items had low difficulty thresholds, suggesting a propensity for ceiling effects. The MIMIC model further indicated that age and education significantly influenced overall scores: older age was associated with poorer performance, whereas higher educational attainment was associated with better performance, particularly in the Memory Recall and Executive/Visuospatial domains, even after accounting for the modest inverse relationship between age and education. Conclusions: Our findings support the validity of the Brazilian Portuguese MoCA for cognitive screening in PD while highlighting item-level biases linked to age and education. These results support the use of education-adjusted norms and computerized scoring algorithms that incorporate item parameters, which would enhance the reliability and fairness of cognitive assessments in diverse clinical populations.
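
The item parameters estimated in this study are reported in the full article. As a minimal, self-contained sketch of how GRM discrimination (a) and threshold (b) parameters translate into measurement precision, the following Python snippet computes Samejima GRM category probabilities and item information across the latent trait. The parameter values below are illustrative placeholders, not the estimates from this study.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's graded response model.

    theta : latent trait values (array-like)
    a     : item discrimination (scalar)
    b     : ordered difficulty thresholds, length K-1 for K categories
    Returns an array of shape (len(theta), K).
    """
    theta = np.atleast_1d(theta)[:, None]            # (N, 1)
    b = np.asarray(b, dtype=float)[None, :]          # (1, K-1)
    # Boundary probabilities P*_k(theta) = P(X >= k | theta)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # (N, K-1)
    boundaries = np.hstack([np.ones_like(theta), p_star, np.zeros_like(theta)])
    # Category probability P_k = P*_k - P*_{k+1}
    return boundaries[:, :-1] - boundaries[:, 1:]

def grm_item_information(theta, a, b):
    """Fisher information contributed by one graded item at each theta."""
    theta = np.atleast_1d(theta)[:, None]
    b = np.asarray(b, dtype=float)[None, :]
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    boundaries = np.hstack([np.ones_like(theta), p_star, np.zeros_like(theta)])
    probs = boundaries[:, :-1] - boundaries[:, 1:]
    # dP_k/dtheta = a * [P*_k(1 - P*_k) - P*_{k+1}(1 - P*_{k+1})]
    w = boundaries * (1.0 - boundaries)
    dprobs = a * (w[:, :-1] - w[:, 1:])
    # I(theta) = sum_k (dP_k/dtheta)^2 / P_k
    return np.sum(dprobs**2 / probs, axis=1)

# Hypothetical parameters: a high-discrimination attention-style item versus
# an easy orientation-style item whose low thresholds invite ceiling effects.
theta = np.linspace(-3, 3, 121)
info_attention = grm_item_information(theta, a=2.2, b=[-0.5, 0.4, 1.2])
info_orientation = grm_item_information(theta, a=1.0, b=[-2.8, -2.0])
print("Peak information, attention-style item:", round(info_attention.max(), 2))
print("Peak information, orientation-style item:", round(info_orientation.max(), 2))
```

In this kind of sketch, higher discrimination concentrates information around the thresholds, while very low thresholds shift whatever precision an item has toward the impaired end of the trait, which is the general mechanism behind the ceiling effects described for the Orientation items.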
