Pathways to semantic integration: Wordform similarity and valence accelerate novel word learning from co-occurrence regularities
Abstract
Learning new words and flexibly using them in novel contexts is a hallmark of human language and communication. Yet the mechanisms through which new words are integrated into a rich mental lexicon remain relatively understudied. A large body of behavioral and computational research suggests that individuals learn new words by attending to distributional regularities in natural language. Recent work has also found that systematic form-to-meaning mappings and contextual valence can facilitate word learning, although how these sources of information interact with distributional regularities is not well understood. In the present work, across five pre-registered experiments, participants encountered novel words embedded in sentences in which they reliably co-occurred with known words. The strength of the learned associations was assessed relative to pre-existing associates, as well as under conditions involving wordform overlap and salient affective contexts. The findings revealed that statistical regularities do support the formation of novel associations, but that these associations are only partially integrated into the semantic network. Moreover, wordform overlap between novel and known words facilitated associative learning not only for words that directly co-occurred with the novel items, but also for words that were never directly paired with each other yet reliably co-occurred with the same novel items. Valence served as a powerful cue for initial learning and also supported the extension of novel word meanings to new contexts in a production task, providing evidence for meaningful generalization. Taken together, these results underscore the non-arbitrary nature of language and highlight the critical role of affect and wordform in leveraging statistical regularities to support novel word learning.