FME-24: A Film, Music, and Emotion Dataset
Abstract
This paper introduces an updated, publicly accessible version of the Film, Music, and Emotion Dataset (FME-24), designed to examine how perceived emotion in film music evolves over time. It provides a comprehensive introduction to the dataset and explores its potential applications across music information retrieval (MIR), psychology, and AI-training contexts. The FME-24 dataset utilises film’s immersive qualities to study emotional perception in a naturalistic yet controlled setting. It contains data from 402 film scores spanning the past two decades, including experimental and mainstream works. The dataset integrates high-quality film compositions with time-stamped valence-arousal (V-A) annotations, emotion sentences, familiarity ratings, and detailed metadata. Ninety-three participants contributed the original annotations of these temporal emotion features. For each time-stamped point, a two-second audio segment was analysed, and 152 features were extracted, including low-level timbral descriptors (MFCC statistics, spectral centroid), rhythmic descriptors (onset density, tempo), and higher-level psychoacoustic and tonal features (inharmonicity, roughness, chord transitions, tonal entropy). Although the full audio files cannot be distributed due to licensing restrictions, reproducibility is ensured via ISRC codes, precise segment timings, and open access to all metadata and feature files in CSV format. The paper details the dataset’s structure, annotation, and feature-extraction procedures, highlighting applications in computational and perceptual research and laying a foundation for future studies on emotion, perception, and narrative in film music.
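To make the described workflow concrete, the sketch below shows how the CSV feature and annotation files might be loaded and aligned, and how a few of the named low-level descriptors could be recomputed for a two-second segment of a locally held recording. This is an illustrative example only, not the authors' pipeline: the file names, column names (e.g. isrc, segment_start_s), and the librosa-based extraction parameters are assumptions, and the released CSVs may use different identifiers.

```python
# Minimal sketch of working with FME-24-style CSV files and recomputing a few
# of the low-level descriptors named in the abstract. All file names, column
# names, and extraction settings below are hypothetical, for illustration only.
import numpy as np
import pandas as pd
import librosa

# Hypothetical file names; the released metadata/feature CSVs may differ.
features = pd.read_csv("fme24_features.csv")        # 152 features per segment
annotations = pd.read_csv("fme24_annotations.csv")  # time-stamped V-A ratings

# Align acoustic features with valence-arousal annotations via the track's
# ISRC code and the segment's start time, both of which the dataset provides
# for reproducibility (column names assumed here).
aligned = features.merge(annotations, on=["isrc", "segment_start_s"])

def describe_segment(audio_path: str, start_s: float, sr: int = 22050) -> dict:
    """Recompute a handful of low-level descriptors for one two-second
    segment of a recording the user has obtained via its ISRC."""
    # Load exactly two seconds starting at the annotated time stamp.
    y, sr = librosa.load(audio_path, sr=sr, offset=start_s, duration=2.0)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    return {
        "mfcc_mean": mfcc.mean(axis=1),              # MFCC statistics
        "mfcc_std": mfcc.std(axis=1),
        "spectral_centroid_mean": float(centroid.mean()),
        "onset_density_per_s": len(onset_times) / 2.0,  # onsets per second
        "tempo_bpm": float(tempo),
    }
```

In this reading of the dataset design, models can be trained directly from the distributed CSV features and V-A annotations, while the ISRC codes and segment timings let researchers who license the recordings themselves re-extract or extend the acoustic features along the lines sketched above.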