Diverging views on the role of structured representations in linguistics and language modeling

Abstract

Recent years have seen dramatic advances in the linguistic abilities of language models, leading some authors to argue that they can be treated as models of language. Against this background, we provide a cognitive perspective on the place of language models in the science of language, with a particular focus on the role of structured representations. Evidence from linguistics, psychology and cognitive neuroscience shows that structured representations occupy a central place in the human mind. Structured representations constrain how linguistic forms map onto meanings, thereby determining the boundaries of possible human languages. In language modeling, by contrast, the place of structured representations is often peripheral. They are seen as an optional end point of modeling, to be induced as a consequence of the objective the model tries to optimize – that is, next-word prediction. Because structured relations and representations are not necessarily useful for predicting the next word in a sequence, there is no guarantee that the model's internal organization will come to encode them. Indeed, contemporary language models struggle to use structured representations functionally to map form onto meaning, and to determine whether their input constitutes a possible human language. Building on classic insights from linguistics and (computational) cognitive science, we suggest that any use of language models as models of language requires conceptualizing structured representations as computational objects.