An Explainable AI Handbook for Psychologists: Methods, Opportunities, and Challenges

Abstract

With more researchers in psychology using machine learning to model large datasets, many are also looking to eXplainable AI (XAI) to understand how their models work and to gain insights into the most important predictors. However, the methodological approach for explaining a machine learning model is not as straightforward or as well-established as it is for inferential statistical models. Not only is there a large number of potential XAI methods to choose from, but there are also several unresolved challenges when using XAI to understand psychological data. This article aims to provide an introduction to the field of XAI for psychologists. We first introduce explainability from a machine learning perspective and consider what makes a good explanation in different settings. We then provide an overview of commonly used XAI approaches and their use cases, categorizing methods along two dimensions: model-specific vs. model-agnostic, and producing local vs. global explanations. Finally, we highlight and discuss some of the practical challenges that psychologists can encounter when using XAI metrics to understand predictor importance: specifically, how to attribute importance when features are dependent, when complex (non-linear) interactions are present, and/or when multiple solutions to the prediction problem exist.
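To make the model-agnostic/global category concrete, the sketch below applies permutation importance (a model-agnostic, global XAI method) to a random forest fitted on synthetic data using scikit-learn. This example is illustrative only; the dataset, model, and method choices are not taken from the article.

```python
# Minimal sketch of a model-agnostic, global explanation:
# permutation importance with scikit-learn (illustrative, not from the article).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only the first 2 are informative
# (shuffle=False keeps the informative features in columns 0 and 1).
X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade
# held-out performance (here, R^2)?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance = {result.importances_mean[i]:.3f}")
```

Because permutation importance only requires a fitted model and a scoring function, the same call works unchanged for any estimator, which is what makes it model-agnostic; note, however, that shuffling one feature at a time can be misleading when features are correlated, one of the challenges the article discusses.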
