Higher-order Sonification of the Human Brain
Abstract
Sonification, the process of translating data into sound, has recently gained traction as a tool both for disseminating scientific findings and for enabling visually impaired individuals to analyze data. Despite its potential, most current sonification methods remain limited to one-dimensional data, primarily due to the absence of practical, quantitative, and robust techniques for handling multi-dimensional datasets. We analyze structural magnetic resonance imaging (MRI) data of the human brain by integrating two- and three-point statistical measures in Fourier space: the power spectrum and bispectrum. These quantify the spatial correlations of 3D voxel intensity distributions, yielding reduced bispectra that capture higher-order interactions. To showcase the potential of the sonification approach, we focus on a reduced bispectrum configuration which, when applied to the OASIS-3 dataset (864 imaging sessions), yields a brain age regression model with a mean absolute error (MAE) of 4.7 years. Finally, we apply sonification to the ensemble-averaged (median) outputs of this configuration across five age groups: 40–50, 50–60, 60–70, 70–80, and 80–100 years. The auditory experience clearly reveals differences among these age groups, an observation further supported visually when inspecting the corresponding sheet music scores. Our results demonstrate that the information loss (quantified, e.g., by the normalized mean squared error) incurred when reconstructing the original bispectra from the sonified signal is minimal, specifically in the configurations sensitive to brain aging. This approach allows us to encode multi-dimensional data into time-series-like arrays suitable for sonification, creating new opportunities for scientific exploration and enhancing accessibility for a broader audience.
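The two- and three-point Fourier statistics mentioned above can be sketched with a standard FFT-based shell estimator. The following is a minimal illustration, not the authors' implementation: the function names, the binning scheme, and the equilateral-triangle bispectrum configuration are illustrative assumptions, and normalization by the number of closed triangles is omitted.

```python
import numpy as np

def power_spectrum(field, nbins=16):
    """Spherically averaged power spectrum P(k) of a cubic 3D field.

    Wavenumbers are in units of the fundamental frequency of the box.
    """
    n = field.shape[0]
    pk3d = np.abs(np.fft.fftn(field)) ** 2 / field.size
    freqs = np.fft.fftfreq(n) * n  # integer wavenumbers -n/2 .. n/2-1
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    # Radial bins from just above k=0 (DC mode excluded) to the Nyquist mode
    bins = np.linspace(0.5, n // 2, nbins + 1)
    idx = np.digitize(kmag, bins)
    pk = np.array([pk3d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), pk

def shell_field(field, kc, dk):
    """Inverse-FFT of the field restricted to the shell |k| ~ kc."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    freqs = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    return np.fft.ifftn(fk * (np.abs(kmag - kc) <= dk / 2)).real

def equilateral_bispectrum(field, kc, dk=1.0):
    """Unnormalized B(k, k, k) estimate: grid average of the product of
    three identical shell-filtered fields (triangle-count factor omitted)."""
    d = shell_field(field, kc, dk)
    return (d ** 3).mean()
```

A single-mode test field, e.g. a plane wave with wavenumber 3 along one axis, concentrates all power in the radial bin containing `|k| = 3`, which makes the estimator easy to sanity-check before applying it to real voxel data.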