Kinetic Audio-Visualizers in Immersive Music Experiences for Hearing-Impaired Listeners

Abstract

Recent advances in GLSL-based compute shaders that perform parallelized particle simulation and spectral mapping have contributed to the rise of audio-visualizers for mainstream use, primarily because of their customizability through non-code (visual) programming and their multimodal input accessibility. Stem-separation features available in software such as TouchDesigner, Ableton Live, and Max/MSP raise the question of whether these audio-visualizers could bridge the physiological gap that deaf and hard-of-hearing (DHH) individuals face in sonic environments. Using three tests (lyric detection, post-listening visual correspondence, and heart rate synchronization), this study gathers empirical evidence, combined with biological, neurological, and cognitive research on music, on the extent to which audio-visualizers bridge this gap. Across all three tests, participants engaged with the dynamic variety of songs at an increased rate when given access to the TouchDesigner particle displacement audio-visualizer, attributing the perceptual and attentional changes to the multiple distinguishable elements within the visualizer. I discuss how these findings might apply to music environments that want to increase engagement for DHH individuals, and how these individuals might approach listening to music themselves.
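
To make the rendering approach concrete, below is a minimal sketch of the kind of GLSL compute shader the abstract describes: each particle is assigned a frequency bin (spectral mapping) and displaced in proportion to that bin's energy. The buffer layout, uniform names (uSpectrum, uGain, uTime), and the displacement rule are illustrative assumptions, not the study's actual TouchDesigner network.

```glsl
#version 430
// Hypothetical audio-driven particle displacement sketch.
layout (local_size_x = 64) in;

layout (std430, binding = 0) buffer Particles {
    vec4 pos[];   // xyz = particle position, w = per-particle phase seed
};

uniform sampler2D uSpectrum;  // FFT magnitudes laid out along x (assumed input)
uniform float uGain;          // user-tunable displacement strength
uniform float uTime;          // seconds since start

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(pos.length())) return;

    // Spectral mapping: assign each particle a frequency bin by index.
    float u = (float(i) + 0.5) / float(pos.length());
    float mag = texture(uSpectrum, vec2(u, 0.5)).r;

    // Displace along a pseudo-random, time-varying direction scaled by
    // the bin's energy, so louder bands produce larger particle motion.
    vec3 dir = normalize(vec3(sin(pos[i].w + uTime),
                              cos(pos[i].w * 1.7),
                              sin(uTime * 0.5 + u * 6.2831)));
    pos[i].xyz += dir * mag * uGain;
}
```

In TouchDesigner, one common pattern for this idea is an Audio Spectrum CHOP feeding a GLSL TOP or compute shader, with particle positions kept in a feedback buffer between frames.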
