An Explainable AI Handbook for Psychologists: Methods, Opportunities, and Challenges

Abstract

As more researchers in psychology use machine learning to model large datasets, many are also turning to eXplainable AI (XAI) methods to understand how their models work and to gain insight into the most important predictors. However, the methodological approach for establishing predictor importance in a machine learning model is neither as straightforward nor as well-established as with traditional statistical models. Not only are there many potential XAI methods to choose from, but there are also several unresolved challenges in using XAI to understand psychological data. This article aims to provide an introduction to the field of XAI for psychologists. We first introduce explainability from an applied machine learning perspective and contrast it with how explainability is understood in psychology. We then provide an overview of commonly used XAI approaches, namely permutation importance, impurity-based feature importance, Individual Conditional Expectation (ICE) graphs, Partial Dependence Plots (PDP), Accumulated Local Effects (ALE) graphs, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Deep Learning Important FeaTures (DeepLIFT). Finally, we demonstrate the impact of multicollinearity on different XAI methods using a simulation analysis and discuss implementation challenges and future directions for psychological research.
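To make the first of the listed methods concrete, here is a minimal sketch, not code from the article, of how permutation importance can be computed, assuming Python with scikit-learn and a synthetic regression dataset standing in for psychological data:

```python
# Minimal sketch: permutation importance with scikit-learn (illustrative only).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a psychological dataset.
X, y = make_regression(n_samples=500, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in the model's held-out score when
# each feature's values are shuffled, averaged over repeated shuffles.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Because permutation importance is defined by the score drop under shuffling, correlated predictors can share or mask each other's importance, which is one reason the article's simulation analysis examines how multicollinearity affects different XAI methods.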
