From Voice to Self: An Integrative Framework on Self-Voice Processing
Abstract
The self-voice plays a fundamental role in communication and identity, yet remains a relatively neglected topic in psychological science. As AI-generated and digitally manipulated voices become more common, understanding how individuals perceive and process their own voice is increasingly important. Disruptions in self-voice processing are implicated in several clinical conditions, including psychosis, autism, and personality disorders, highlighting the need for integrative models that explain self-voice processing across contexts. However, research faces two major challenges: a methodological one, replicating the bone-conducted acoustics that shape natural self-voice perception, and a conceptual one, a persistent bias toward treating the self-voice as purely auditory. To address these gaps, we propose a framework decomposing the self-voice into five interacting components: auditory, motor, memory, multisensory integration, and self-concept. We review the functional and neural basis of each component and suggest how they converge within distributed brain networks to support coherent self-voice processing. This integrative framework aims to advance theoretical and translational work by bridging psychology, neuroscience, clinical research, and voice technology in the context of emerging digital voice environments.