Emergent modularity in large language models: Insights from aphasia simulations

Abstract

Recent large language models (LLMs) have demonstrated remarkable proficiency in complex linguistic tasks and have been shown to share certain computational principles with human language processing. However, whether LLMs’ internal components perform distinct functions, analogous to semantic and syntactic processing in the human language system, remains unclear. Here, we systematically disrupted components of LLMs to simulate the behavioral profiles of aphasia—a disorder characterized by specific language deficits resulting from brain injury. Our findings showed that lesioning specific components of LLMs could replicate behaviors characteristic of different aphasia subtypes. Notably, while semantic deficits, such as those observed in Wernicke’s and conduction aphasia, were relatively straightforward to simulate, reproducing syntactic and lexical impairments, as seen in Broca’s and anomic aphasia, proved more challenging. Together, these results highlight both parallels and discrepancies between emergent modularity in LLMs and the human language system, providing new insights into how information is represented and processed in artificial and biological intelligence.
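The lesioning idea described above can be illustrated with a minimal sketch: silence a chosen subset of a layer's units and compare the intact and lesioned outputs. This is a hypothetical toy model, not the paper's method—the study lesions components of real LLMs (e.g., attention heads or MLP units), whereas here a single random feed-forward layer stands in for one such component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward "layer": a stand-in for one internal LLM component.
W = rng.normal(size=(8, 8))

def forward(x, lesion_mask=None):
    """Apply the layer; optionally zero out ('lesion') selected units."""
    h = np.tanh(x @ W)
    if lesion_mask is not None:
        h = h * lesion_mask  # zeroed units simulate a focal lesion
    return h

x = rng.normal(size=(1, 8))
intact = forward(x)

# Lesion units 0-3: their activations are forced to zero on every pass.
mask = np.ones(8)
mask[:4] = 0.0
lesioned = forward(x, lesion_mask=mask)

print(np.all(lesioned[0, :4] == 0.0))  # lesioned units are silenced
print(np.any(lesioned[0, 4:] != 0.0))  # remaining units still respond
```

In the actual study setting, the behavioral consequences of such lesions would then be scored against the profiles of the aphasia subtypes.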
