Cortical knowledge structures guide word concept learning


Abstract

Human word-concept learning transcends simple associations between a word and referent exemplars, leveraging prior knowledge to generalize from few exemplars. Although Bayesian models explain such behavior, the neural underpinnings of their prior structures and computations remain unclear. This study introduces a Neural Bayesian Model (NBM) to elucidate how prior knowledge representations guide new word learning. Using functional magnetic resonance imaging, we first measured participants’ neural activity while they viewed familiar objects (and novel shapes as controls) to construct the neural prior space, and then measured their activity as they learned new words associated with some of these visual stimuli. The NBM, which integrates neural representational priors derived from activity in the ventral occipitotemporal cortex (VOTC), predicted new-word neural representations and generalization behavior in learning with familiar objects, outperforming control models lacking neural priors. Conversely, hippocampal activity, which was not captured by the NBM, underpinned learning with novel shapes, reflecting a prior-free mechanism. Comparisons with large language models (LLMs) revealed LLMs’ inferior alignment with human generalization, underscoring gaps in grounding word learning in nonverbal priors. These findings dissociate neural computational systems for concept learning: the VOTC mediates prior-based Bayesian inference, whereas the hippocampus supports exemplar-based associations. The results bridge computational theories of word learning with neural mechanisms, highlighting the dynamic interplay of semantic and episodic memory, and motivating the incorporation of Bayesian learning mechanisms into LLM development.
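The Bayesian framing invoked here can be made concrete with a small sketch. The following is a minimal, hypothetical illustration of prior-guided word generalization in the Tenenbaum-style strong-sampling form: a learner scores candidate concept extensions by a prior times a 1/|h|^n likelihood, then generalizes a new word by summing posterior mass over hypotheses containing a probe item. The hypothesis space and prior below are hand-made stand-ins; in the paper's NBM the prior is derived from VOTC activity patterns, which this toy example does not model.

```python
# Hypothetical sketch of Bayesian word-concept generalization
# (strong-sampling "size principle"). Names and values are illustrative,
# not the paper's actual NBM implementation.
import numpy as np

# Hypotheses: candidate concept extensions over a set of 6 objects.
# In the NBM, the prior over hypotheses would come from neural
# representational similarity; here it is simply assigned by hand.
hypotheses = [
    frozenset({0, 1}),          # a tight, basic-level-like cluster
    frozenset({0, 1, 2, 3}),    # a broader cluster
    frozenset(range(6)),        # everything
]
prior = np.array([0.5, 0.3, 0.2])  # assumed prior (neural prior in the NBM)

def posterior(exemplars):
    """P(h | exemplars) with strong-sampling likelihood (1/|h|)^n."""
    n = len(exemplars)
    like = np.array([
        (1.0 / len(h)) ** n if all(x in h for x in exemplars) else 0.0
        for h in hypotheses
    ])
    post = prior * like
    return post / post.sum()

def p_generalize(exemplars, new_item):
    """P(new_item in concept | exemplars): posterior mass of h's containing it."""
    post = posterior(exemplars)
    return sum(p for p, h in zip(post, hypotheses) if new_item in h)

# After seeing objects 0 and 1 labeled with a new word, how likely is the
# word to extend to object 2 vs. object 5? Generalization narrows with
# more exemplars and with tighter (higher-prior, smaller) hypotheses.
print(p_generalize([0, 1], 2))  # moderate: only the broader hypotheses include it
print(p_generalize([0, 1], 5))  # low: only the broadest hypothesis includes it
```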
