Insights into the mind using tools from Explainable AI: A practical introduction to SHAP in cognitive science

Abstract

Computational models of cognition are becoming increasingly complex and, as a result, harder to interpret. SHAP, a recent exploratory method from Explainable AI, aims to provide a simple and consistent way to explain the decisions such models make. However, while very popular within Explainable AI, this method is to date hardly used in cognitive science. Here we make a case for its utility in cognitive science, provide a non-technical introduction including detailed practical and mathematical considerations, and demonstrate how this tool can be applied in cognitive modeling.
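As a minimal sketch of the kind of workflow described here (not taken from the article itself; the data, model, and feature names below are hypothetical), SHAP can be applied to a fitted predictive model of behavioral data roughly as follows:

```python
# Hypothetical sketch: fit a model to simulated behavioral data and use
# SHAP to attribute its predictions to the input features.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Invented trial-level predictors, for illustration only
X = pd.DataFrame({
    "stimulus_intensity": rng.uniform(0, 1, n),
    "prior_accuracy": rng.uniform(0, 1, n),
    "trial_number": rng.integers(1, 200, n),
})
# Simulated reaction times with a simple dependence on the predictors plus noise
y = (0.8
     - 0.3 * X["stimulus_intensity"]
     + 0.1 * X["prior_accuracy"]
     + rng.normal(0, 0.05, n))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP values: per-trial, per-feature contributions to the model's predictions
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Summary plot showing which features drive the model's predictions
shap.plots.beeswarm(shap_values)
```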
