Human Category Learning in Reversal Tasks: Dynamic Learning Rates Help Overcome Catastrophic Interference in Connectionist Networks
Abstract
During category learning tasks, participants often group together, i.e., “commonly code,” within-category exemplars. In two experiments, human participants learned to classify fractal exemplars into two arbitrarily defined categories. After an initial learning phase, we reversed the exemplar-category assignments in two ways: a Total-Reversal task (all assignments were switched) and a Partial-Reversal task (only half were switched). If common coding occurs, participants should learn the Total-Reversal faster than the Partial-Reversal and should show within-category interference in the Partial-Reversal task (between reversed and nonreversed exemplars). Our behavioral results supported these predictions. Using multidimensional scaling methods, an additional experiment determined that our fractal exemplars were grouped through category learning rather than perceptual processes. To understand the mechanisms behind these findings, we modeled category learning with a three-layer artificial neural network (ANN). We observed that such models can readily form category representations, but suffer from catastrophic interference during reversal training. To overcome this problem, we modified the network by applying a distinct dynamic learning rate to each layer. Inspired by associative learning theories (Mackintosh, 1975; Pearce and Hall, 1980), we incorporated a mechanism that promotes slower weight changes in the input-hidden layer and faster changes in the hidden-output layer as the ANN learns the task. This modified ANN successfully simulated our empirical findings, demonstrating that applying differential dynamic learning rates can help prevent catastrophic interference and provide a mechanistic explanation for common coding effects. Our study highlights the importance of associative mechanisms in category learning.
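The layer-specific dynamic learning rates described above can be sketched in a minimal NumPy model. This is an illustrative reconstruction, not the authors' implementation: the schedule (input-hidden rate decaying, hidden-output rate rising with training steps), the network sizes, the toy exemplars, and all numeric constants are assumptions chosen only to show the mechanism. After acquisition, the input-hidden weights are nearly frozen, so a total reversal only has to rewire the fast hidden-output mapping.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ReversalMLP:
    """Three-layer network with separate, dynamically scheduled learning
    rates per weight layer (hypothetical schedule; the abstract gives no
    exact values). The input-hidden rate decays while the hidden-output
    rate grows as training proceeds, so hidden-layer category
    representations stabilise while the output mapping stays easy to
    rewire after a reversal."""

    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))  # input -> hidden
        self.W2 = rng.normal(0.0, 0.5, (n_hid, 1))     # hidden -> output
        self.t = 0  # training-step counter drives the rate schedule

    def rates(self):
        decay = 1.0 / (1.0 + 0.01 * self.t)
        return 0.5 * decay, 0.5 * (2.0 - decay)  # lr_ih falls, lr_ho rises

    def predict(self, x):
        return sigmoid(sigmoid(x @ self.W1) @ self.W2)

    def train_step(self, x, y):
        h = sigmoid(x @ self.W1)
        o = sigmoid(h @ self.W2)
        d_o = o - y                                  # cross-entropy delta
        d_h = (d_o @ self.W2.T) * h * (1.0 - h)      # backpropagated delta
        lr_ih, lr_ho = self.rates()
        self.W2 -= lr_ho * np.outer(h, d_o)          # fast output updates
        self.W1 -= lr_ih * np.outer(x, d_h)          # slow hidden updates
        self.t += 1

# Four fixed toy "exemplars" with arbitrary binary category labels.
X = np.array([[1, 0, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1, 0, 1],
              [1, 1, 0, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 0, 1, 1]], dtype=float)
y = np.array([0.0, 0.0, 1.0, 1.0])

net = ReversalMLP(n_in=8, n_hid=6)
for _ in range(300):                      # initial acquisition phase
    for xi, yi in zip(X, y):
        net.train_step(xi, np.array([yi]))

y_rev = 1.0 - y                           # Total-Reversal: flip all labels
for _ in range(300):                      # reversal training phase
    for xi, yi in zip(X, y_rev):
        net.train_step(xi, np.array([yi]))
```

A Partial-Reversal condition would instead flip only half of the entries in `y`; with common coding in the hidden layer, the reversed and nonreversed exemplars then interfere with each other, as in the behavioral results.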