Convexity (probably) makes languages efficient
Abstract
Both functional and lexical categories in natural languages are believed to be shaped by domain-general pressures for efficient information transmission and simplicity of representations. However, the origins of this simplicity-informativeness trade-off remain unclear, with previous research suggesting that it may stem from learning, communication, or both. In this paper, we propose that the domain-general bias for convexity -- a property whereby if a word applies to two objects in the conceptual space, it must also apply to all the objects between them -- fosters the optimization of this trade-off. Convexity, previously hypothesized to constrain the meanings of both content and logical words, likely reflects a fundamental property of cognitive representations shared across species, as evidenced by similar patterns in non-human animals. Using a gradual measure of convexity, shown to predict learning outcomes in artificial language learning experiments, we present evidence that color systems with the highest degree of convexity optimally balance simplicity and informativeness across different formulations of these measures. Additionally, we demonstrate that convexity is sensitive to rotations in color systems, with attested color systems exhibiting the highest degree of convexity. These findings suggest that the simplicity-informativeness trade-off emerges from the convexity of underlying representations, explaining why this trade-off is observed in both functional and content words, as lexicalization is likely constrained by convexity. Moreover, this explanation offers a more parsimonious account of why languages exhibit efficient phenomena like the simplicity-informativeness trade-off, relying solely on a single domain-general bias that applies across a broad range of linguistic categories.
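The convexity property described above can be sketched concretely for a toy one-dimensional conceptual space. The check below is only an illustration of the basic (binary) notion — a category is convex if every point lying between two of its members also belongs to it — not the paper's graded measure; the point names and space are hypothetical.

```python
def is_convex(category, space):
    """Return True if `category` (a set of points) is convex within the
    ordered 1-D conceptual `space`: every point of `space` lying between
    two category members must itself be a category member."""
    members = [p for p in space if p in category]  # members in space order
    if not members:
        return True  # the empty category is trivially convex
    lo = space.index(members[0])
    hi = space.index(members[-1])
    # All points between the first and last member must be in the category.
    return all(space[i] in category for i in range(lo, hi + 1))

# A hypothetical hue strip ordered along one dimension.
space = ["red", "orange", "yellow", "green", "blue", "purple"]
print(is_convex({"orange", "yellow", "green"}, space))  # contiguous -> True
print(is_convex({"red", "green"}, space))               # gap -> False
```

In this toy setting, a word covering "orange", "yellow", and "green" is convex, while one covering only "red" and "green" is not, since "orange" and "yellow" lie between them but are excluded.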