Contextual Assembly of Lexical Functions in Large Language Models

Abstract

Neural network modeling has played a central role in psycholinguistic studies of lexical processing, but the recent advent of large language models (LLMs) offers a different approach that may yield new insights into the mental lexicon. In three experiments, we prompted four chatbots to test how their underlying LLMs generate psycholinguistic ratings of words compared with human raters. LLM ratings, averaged across varying list contexts, were highly correlated with human ratings, and differences in correlation strength were partly explained by differences in rating ambiguity. Context manipulations strengthened correlations with human ratings through better calibration, and variability in LLM ratings was correlated with human inter-rater variability. We conclude that the LLMs used context to guide human-like assembly of psycholinguistic rating functions rather than recalling ratings from training data. Additional tests of LLM-generated word naming latencies showed that function assembly is currently limited by patterns of co-occurrence in textual data; training data with patterns at finer-grained timescales are needed to model online lexical processes.
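The core analysis described in the abstract can be sketched minimally: average each word's LLM ratings over list contexts, then compute the Pearson correlation with human norms. This is an illustrative reconstruction only; all function names and toy data below are invented, not taken from the study.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

# Invented toy data: concreteness-style ratings for four words,
# each elicited from an LLM in three different list contexts.
llm_by_context = {
    "apple": [4.8, 4.6, 4.9],
    "idea":  [1.9, 2.2, 2.0],
    "chair": [4.5, 4.7, 4.4],
    "hope":  [1.6, 1.8, 1.7],
}
human_norms = {"apple": 5.0, "idea": 1.8, "chair": 4.6, "hope": 1.5}

words = sorted(llm_by_context)
llm_avg = [mean(llm_by_context[w]) for w in words]   # average over contexts
human = [human_norms[w] for w in words]
r = pearson_r(llm_avg, human)
print(round(r, 3))
```

Averaging over list contexts before correlating mirrors the abstract's point that context-averaged LLM ratings track human ratings more closely than any single-context rating would.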
