Mirror manifolds: partially overlapping neural subspaces for speaking and listening
Abstract
Participants in conversations need to associate words with their speakers but also retain those words’ general meanings. For example, someone talking about their hand is not referring to the other speaker’s hand, but the word “hand” still carries speaker-general information (e.g., having five fingers). These two requirements impose a cross-speaker generalization/differentiation dilemma that is not well addressed by existing theories. We hypothesized that the brain resolves the dilemma by using a vectorial semantic code that blends collinear and orthogonal coding subspaces. To test this hypothesis, we examined semantic encoding in populations of hippocampal single neurons recorded during conversations between epilepsy patients and healthy partners in the epilepsy monitoring unit (EMU). We found clear semantic encoding for both spoken and heard words, with the strongest encoding around the time of utterance for production and just after it for reception. Crucially, hippocampal neurons’ codes for word meaning were poised between fully orthogonalized and fully collinear. Moreover, different semantic categories were orthogonalized to different degrees: body parts and names were most differentiated between speakers, whereas function words and verbs were least differentiated. Finally, the hippocampus used the same coding principle to separate different partners in three-person conversations, with greater orthogonalization between self and other than between two others. Together, these results suggest a new solution to the problem of binding word meanings to speaker identity.
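The abstract’s central quantity, the degree to which speaker-specific semantic codes are orthogonalized versus collinear, can be made concrete as the principal angles between neural coding subspaces. The sketch below is illustrative only and is not the authors’ analysis pipeline; the inputs `X_self`, `X_other`, and `Y`, and the rank `k`, are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch (not the paper's method): quantify how far a
# population's semantic coding axes for the same words are rotated
# between a "spoken by self" and a "heard from other" condition.
#
# Hypothetical inputs:
#   X_self, X_other : (n_words, n_neurons) mean firing rates per word
#   Y               : (n_words, n_sem)     semantic features per word

def coding_subspace(X, Y, k=3):
    """Least-squares map from semantic features to firing rates;
    the top-k left singular vectors of the map span the neural
    coding subspace for that condition."""
    W, *_ = np.linalg.lstsq(Y, X, rcond=None)      # (n_sem, n_neurons)
    U, _, _ = np.linalg.svd(W.T, full_matrices=False)
    return U[:, :k]                                 # orthonormal basis

def alignment(U, V):
    """Mean cosine of the principal angles between two subspaces:
    1 = fully collinear (shared code), 0 = fully orthogonal."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return float(s.mean())

# Synthetic demo: the "other" condition uses a partially rotated map,
# so alignment lands between the two extremes.
rng = np.random.default_rng(0)
n_words, n_neurons, n_sem = 200, 60, 10
Y = rng.standard_normal((n_words, n_sem))
shared = rng.standard_normal((n_sem, n_neurons))
X_self = Y @ shared + 0.5 * rng.standard_normal((n_words, n_neurons))
X_other = Y @ (shared + 0.7 * rng.standard_normal((n_sem, n_neurons))) \
    + 0.5 * rng.standard_normal((n_words, n_neurons))

a = alignment(coding_subspace(X_self, Y), coding_subspace(X_other, Y))
print(f"speaker alignment: {a:.2f}  (0 = orthogonal, 1 = collinear)")
```

Under this framing, an intermediate alignment value corresponds to the abstract’s claim of a code “poised between” orthogonalized and collinear, and per-category values could capture why body parts and names separate more across speakers than function words and verbs do.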