Vib2Sound: Separation of Multimodal Sound Sources

Abstract

Understanding animal social behaviors, including vocal communication, requires longitudinal observation of interacting individuals. However, isolating individual vocalizations in complex environments is challenging due to background noise and frequent overlap of coincident signals from multiple vocalizers. A promising solution lies in multimodal recordings that combine traditional microphones with animal-borne sensors such as accelerometers and directional microphones. These sensors, however, are constrained by strict limits on weight, size, and power consumption, and often yield noisy or unstable signals. In this work, we introduce a neural network-based system for sound source separation that leverages multi-channel microphone recordings and body-mounted accelerometer signals. Using a dataset of zebra finches recorded in a social setting, we demonstrate that contact sensing substantially outperforms conventional microphone-array recordings. By enabling the separation of overlapping vocalizations, our approach offers a valuable tool for studying animal communication in complex naturalistic environments.
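The core intuition behind the multimodal approach can be illustrated with a toy sketch: an animal-borne accelerometer picks up the wearer's own vocalizations far more strongly than those of other individuals, so its energy envelope can attribute frames of a microphone mixture to the tagged bird. The snippet below is a minimal illustration of this principle only; the function names, frame parameters, and the simple energy-ratio mask are all hypothetical, and the paper's actual system replaces such a hand-crafted ratio with a trained neural network operating on multi-channel recordings.

```python
import numpy as np

def energy_envelope(signal, frame=256, hop=128):
    """Short-time RMS energy of a 1-D signal (illustrative parameters)."""
    n_frames = 1 + (len(signal) - frame) // hop
    env = np.empty(n_frames)
    for i in range(n_frames):
        seg = signal[i * hop : i * hop + frame]
        env[i] = np.sqrt(np.mean(seg ** 2))
    return env

def accel_gated_mask(mic, accel, frame=256, hop=128, floor=1e-8):
    """Soft per-frame mask in [0, 1]: fraction of mixture energy
    explained by the accelerometer channel. A learned separation
    network would replace this fixed energy ratio."""
    e_mic = energy_envelope(mic, frame, hop)
    e_acc = energy_envelope(accel, frame, hop)
    return e_acc / (e_acc + e_mic + floor)

# Toy mixture: the tagged bird sings in the first half of the recording,
# an untagged bird in the second half.
rng = np.random.default_rng(0)
t = np.arange(8192)
tagged = np.sin(2 * np.pi * 0.05 * t) * (t < 4096)
other = np.sin(2 * np.pi * 0.11 * t) * (t >= 4096)
mic = tagged + other + 0.01 * rng.standard_normal(t.size)
accel = tagged + 0.05 * rng.standard_normal(t.size)  # wearer's song only

mask = accel_gated_mask(mic, accel)
# The mask is high while the tagged bird vocalizes and low otherwise,
# letting overlapping frames be attributed to the correct individual.
```

Even this crude gate separates the two singers in time; the value of the learned multimodal model is that it generalizes this attribution to truly overlapping, noisy signals where a fixed energy ratio fails.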