LGFN: A Dynamic Gating Framework for Lyrics-Audio Alignment in Music Emotion Recognition
Abstract
Music emotion recognition (MER) is challenging because audio features are emotionally ambiguous; a fast tempo, for example, can signal either excitement or anger. To address this, we propose the Lyrics-Aware Gate Fusion Network (LGFN), a novel cross-modal dynamic fusion approach. LGFN extracts audio features with a ResNet over Mel spectrograms and a 1D CNN over raw waveforms, and obtains lyric embeddings from BERT. Its key innovations are an attentional alignment layer (AAL), which bridges the temporal misalignment between lyrics and audio, and a dynamic gate fusion module (DGFM), which adaptively weights audio and text features according to their reliability. This allows the model to automatically determine each modality's contribution, improving emotion recognition accuracy. Ablation studies on the MER1101 and DEAM2015 music emotion recognition datasets demonstrate the effectiveness of the proposed AAL and DGFM, and visualizing the gate weights offers insight into the model's decision-making process. We also conduct comprehensive experiments on additional multimodal human emotion recognition benchmarks, which show that our approach outperforms state-of-the-art methods.
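To illustrate the gating idea the abstract describes, the following is a minimal NumPy sketch of a generic gated fusion step, not the paper's actual DGFM implementation: a sigmoid gate is computed from the concatenated audio and text features, then used to take a per-dimension convex combination of the two modalities. The weight matrix `W`, bias `b`, and feature dimension are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_gate_fusion(audio_feat, text_feat, W, b):
    """Fuse two modality vectors with a learned per-dimension gate.

    The gate is conditioned on both modalities, so it can down-weight
    whichever modality is less reliable for the current input.
    """
    joint = np.concatenate([audio_feat, text_feat])   # shape (2d,)
    g = sigmoid(joint @ W + b)                        # gate in (0, 1), shape (d,)
    return g * audio_feat + (1.0 - g) * text_feat     # convex combination

# Toy example with hypothetical 8-dimensional features.
d = 8
audio = rng.standard_normal(d)
text = rng.standard_normal(d)
W = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)
fused = dynamic_gate_fusion(audio, text, W, b)
```

Because the gate lies in (0, 1), each fused coordinate falls between the corresponding audio and text values; inspecting `g` directly is what makes the modality weighting interpretable, as in the gate-weight visualizations mentioned above.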