Shared neural geometries for bilingual semantic representations


Abstract

The human brain has the remarkable ability to understand and express similar concepts in multiple languages. To understand how it does so, we examined responses of hippocampal neurons during passive listening, directed speaking, and spontaneous conversation, in both English and Spanish, in a small group of balanced bilinguals. We find putative translation neurons, whose responses to equivalent words (e.g., “tierra” and “earth”) are correlated. More broadly, however, neurons’ semantic tunings differ substantially by language, suggesting language-specific neural implementations. Despite this, distances between words’ neural responses are preserved across the two languages, creating a shared semantic geometry. That geometry is implemented by the same neurons but read out along distinct axes; this difference in readout may help prevent cross-language interference. A shared semantic geometry with distinct readout axes is also observed in mBERT, a multilingual language model, suggesting that this may be a general computational principle for multilingual representation. Together, these results suggest that the hippocampus encodes a language-independent internal model of meaning.
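The core idea of a shared geometry with distinct readout axes can be sketched in a few lines. This is a hypothetical toy model, not the authors' analysis: word meanings live in a language-independent latent space, each language projects that space onto the same population of "neurons" via its own random readout matrix, and representational similarity is then compared by correlating the two languages' pairwise-distance vectors (all sizes and names below are illustrative assumptions).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: 6 translation-equivalent word pairs, a 5-d latent
# semantic code, and 100 recorded neurons (all numbers are illustrative).
n_words, n_latent, n_neurons = 6, 5, 100
latent = rng.normal(size=(n_words, n_latent))        # language-independent meaning
readout_en = rng.normal(size=(n_latent, n_neurons))  # English readout axes
readout_es = rng.normal(size=(n_latent, n_neurons))  # distinct Spanish readout axes

resp_en = latent @ readout_en  # population responses to English words
resp_es = latent @ readout_es  # population responses to Spanish equivalents

# Individual neurons are tuned differently across languages (different
# readout axes), yet the pairwise distances between words -- the semantic
# geometry -- are largely preserved across the two languages:
rho, _ = spearmanr(pdist(resp_en), pdist(resp_es))
```

Because both readouts are projections of the same latent code, the rank correlation `rho` between the two distance vectors comes out high even though no single neuron's tuning matches across languages, mirroring the dissociation the abstract describes.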
