Statistical Inference Regularized with Neural Networks and its Application in Psychometrics

Abstract

Likelihood-based methods such as marginal maximum likelihood and Bayesian estimation via Markov chain Monte Carlo (MCMC) are widely used for estimating item response theory (IRT) models and perform well in large samples. In small samples, however, these approaches may yield unstable estimates due to insufficient information in the likelihood, unless strong prior distributions are imposed. We address this problem by turning to the fundamentals of estimation via decision theory, showing that small-sample estimation can be viewed as an ill-posed decision problem in the sense that the decision function is sensitive to small changes in the data and, therefore, may not be continuous. We discuss how such ill-posed decision problems—both in IRT estimation and more broadly—can be regularized using neural network (NN)-based approaches that build a continuous map from observations into parameters. We formalize this approach within a general framework termed regularized decision theory and apply it to IRT models. Focusing on the three-parameter logistic (3PL) model, we study the method's properties via simulations.
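The core idea described in the abstract—training a neural network on simulated data so that it implements a continuous map from observations to parameter estimates—can be sketched in a toy setting. The code below is an illustrative assumption, not the authors' implementation: it uses a one-parameter logistic (Rasch) item in place of the 3PL, a proportion-correct summary statistic as the "observation," and a small one-hidden-layer network trained in plain NumPy. The point is only to show the amortized-estimation pattern: simulate (parameter, data) pairs, fit the network, and obtain an estimator that varies smoothly with the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(b, n=50):
    """Simulate n 1PL responses for an item with difficulty b and
    return the proportion correct (the observed data summary)."""
    theta = rng.standard_normal(n)            # abilities ~ N(0, 1)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))    # 1PL response probabilities
    return (rng.random(n) < p).mean()

# Training set: difficulties drawn from a prior, summaries simulated.
B = 2000
b_train = rng.uniform(-2.0, 2.0, B)
x_train = np.array([simulate(b) for b in b_train])

# One-hidden-layer tanh network, fit by full-batch gradient descent (MSE).
H, lr = 16, 0.3
W1 = rng.standard_normal((1, H)) * 0.5; b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5; b2 = np.zeros(1)

X, Y = x_train[:, None], b_train[:, None]
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    err = (h @ W2 + b2) - Y                   # residuals for MSE loss
    gW2 = h.T @ err / B; gb2 = err.mean(0)    # backprop through output layer
    dh = (err @ W2.T) * (1.0 - h**2)          # backprop through tanh
    gW1 = X.T @ dh / B; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def estimate(summary):
    """The fitted decision function: a continuous map from the
    observed summary to a difficulty estimate."""
    h = np.tanh(np.array([[summary]]) @ W1 + b1)
    return float(h @ W2 + b2)
```

Because `estimate` is a fixed, smooth function of the data, nearby observations yield nearby estimates—in contrast to small-sample maximum likelihood, where the estimate can jump or diverge under small data perturbations (e.g., an all-correct response pattern). The paper's framework generalizes this construction to the full 3PL parameter vector.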
