Neural Networks Can Capture Human Concept Learning when Trained with Symbolic Priors

Abstract

People can learn new concepts from a small number of examples by drawing on inductive biases that favor some hypotheses over others. These inductive biases have previously been captured by Bayesian models that use symbolic representations, such as logical formulas, to define prior distributions over hypotheses. We show that it is possible to create an artificial neural network that displays the same inductive biases as symbolic Bayesian models. Our approach is based on distilling a prior distribution from a symbolic Bayesian model into a neural network via meta-learning, a method for extracting the common structure from a set of tasks. We use this approach to create an artificial neural network with an inductive bias towards concepts expressed as short logical formulas. Reanalyzing results from previous behavioral experiments in which people learned logical concepts from a few examples, we show that neural networks trained via meta-learning capture human performance. These results demonstrate that it is possible to create artificial neural networks that capture types of inductive biases traditionally expressed using symbolic models, providing a way to link accounts of human cognition at different levels of analysis.
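The distillation step described in the abstract can be illustrated with a minimal sketch: sample concepts (here, conjunctions of Boolean literals) from a symbolic prior that favors short formulas, then package each concept as a few-shot episode for a meta-learner. The feature count, the geometric length prior, and the function names below are illustrative assumptions, not the paper's actual implementation.

```python
import random

N_FEATURES = 4  # assumed number of binary features per stimulus

def sample_concept(rng, p_stop=0.5):
    # Sample a conjunction of literals; the geometric stopping rule
    # assigns higher prior probability to shorter formulas.
    length = 1
    while rng.random() > p_stop and length < N_FEATURES:
        length += 1
    feats = rng.sample(range(N_FEATURES), length)
    return [(i, rng.choice([True, False])) for i in feats]

def evaluate(concept, x):
    # x is a tuple of 0/1 feature values; the concept is true iff
    # every literal in the conjunction is satisfied.
    return all(x[i] == int(sign) for i, sign in concept)

def make_episode(rng, n_support=4, n_query=4):
    # One meta-learning episode: a sampled concept plus labeled
    # support (training) and query (test) examples.
    concept = sample_concept(rng)
    def draw(n):
        xs = [tuple(rng.randint(0, 1) for _ in range(N_FEATURES))
              for _ in range(n)]
        return [(x, evaluate(concept, x)) for x in xs]
    return {"concept": concept, "support": draw(n_support), "query": draw(n_query)}
```

Training a network across many such episodes, rather than on any single concept, is what transfers the short-formula bias from the symbolic prior into the network's weights.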
