Improving the measurement of knowledge in multiple-choice tests: A hierarchical multinomial processing tree approach


Abstract

THIS PREPRINT IS THE INITIAL SUBMISSION OF A PAPER THAT IS CURRENTLY UNDERGOING PEER REVIEW FOR PUBLICATION.

In multiple-choice (MC) tests, correct item responses can stem from actual mastery as well as from lucky guessing. Several approaches have been developed to address this problem, including the implementation of a “Don’t know” option. However, scoring such MC tests is challenging, as traditional scoring techniques are differentially biased against certain response strategies. In this article, we propose a hierarchical multinomial processing tree (MPT) model that allows disentangling the latent ability from the tendency to engage in guessing and the mere preference for a specific response option. The proposed MPT model includes crossed random person and item effects and can be estimated in a Bayesian framework. We applied the model to an empirical dataset of a general knowledge test with a true/false response format and a “Don’t know” option. The model showed good fit to the data and revealed considerable variability in model parameters across both persons and items. Moreover, we found evidence of improved measurement of knowledge in the form of higher convergent validities compared to conventional scoring techniques. These results highlight the usefulness of applying hierarchical MPT modeling to psychometric questions. We end the article with a discussion of differences from related models, implications for measurement practice, as well as limitations of the model and future research directions.
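The abstract does not spell out the tree structure, but a minimal sketch of an MPT tree for a true/false test with a “Don’t know” option might separate the three processes it names. In this hypothetical parameterization (the names `k`, `g`, and `b` are illustrative assumptions, not the paper's notation), a respondent knows the answer with probability `k`; otherwise they guess with probability `g`, choosing “true” with bias `b`, and answer “Don’t know” with the remaining probability:

```python
def mpt_category_probs(k, g, b, correct_option_is_true):
    """Hypothetical MPT tree for one true/false item with a "Don't know" option.

    k -- probability of knowing the answer (latent ability)
    g -- probability of guessing when the answer is not known (guessing tendency)
    b -- bias toward responding "true" when guessing (response-option preference)
    correct_option_is_true -- whether the keyed answer to this item is "true"

    Returns (P(correct), P(incorrect), P(don't know)), which sum to 1.
    """
    p_guess_true = (1.0 - k) * g * b          # don't know, guess, pick "true"
    p_guess_false = (1.0 - k) * g * (1.0 - b) # don't know, guess, pick "false"
    p_dont_know = (1.0 - k) * (1.0 - g)       # don't know, decline to guess

    if correct_option_is_true:
        p_correct = k + p_guess_true
        p_incorrect = p_guess_false
    else:
        p_correct = k + p_guess_false
        p_incorrect = p_guess_true
    return p_correct, p_incorrect, p_dont_know
```

In the hierarchical version described in the abstract, each of these parameters would receive crossed person and item random effects (e.g., via a probit or logit link) and be estimated in a Bayesian framework; this sketch only shows the deterministic category probabilities for fixed parameter values.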
