Single-Trial EEG-Based Classification Reveals Instrument-Specific Timbre Perception via Traditional Machine Learning Classifiers
Abstract
Many users of hearing aids report challenges when listening to music. In the future, it may be possible to develop hearing aids that monitor brain activity in real time and adapt their output to the intentions of the user. In music, this could mean selectively amplifying the sound of the instrument the listener wants to hear. A first step in this research is to determine whether machine learning can identify which instrument an individual is listening to based only on a brief EEG signal. In this work, participants were presented with a series of brief tones that varied in timbre (Trombone, Clarinet, Cello, Piano, and Pure Tone) while their ongoing EEG was recorded from 73 electrodes. To distinguish between EEG responses to the five timbres, we investigated three classifiers: Linear Discriminant Analysis (LDA), Gradient Boosting (GB), and k-Nearest Neighbors (k-NN). We also compared four sets of features: raw EEG, ERP-based features, harmonics-based features, and regularity-based features. The N1 and P2 components of the ERP were also analyzed for differences between instruments. All three classifiers performed significantly above chance (approximately 20% for five classes) when trained on the raw EEG features (LDA: 37%, GB: 35%, k-NN: 26%). These results may be improved with more advanced classification algorithms or different transformations of the features. Statistical analysis showed that the Cello elicited the largest P2 amplitude and the Pure Tone the smallest, and that the Cello elicited the earliest N1 latency and the Clarinet the latest.
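To make the classifier comparison concrete, the sketch below shows how single-trial, multi-class decoding of this kind could be set up in scikit-learn. The data shapes, placeholder random data, cross-validation scheme, and hyperparameters are illustrative assumptions and not the authors' implementation; the reported accuracies (LDA: 37%, GB: 35%, k-NN: 26%) come from the recorded EEG, whereas this example only demonstrates the pipeline.

```python
# Minimal sketch of a 5-class, single-trial EEG timbre decoder, assuming
# epochs of shape (n_trials, n_channels, n_samples) and integer labels 0-4.
# All sizes and settings here are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: 200 trials, 73 electrodes, 32 time samples per epoch.
X = rng.standard_normal((200, 73, 32))
y = rng.integers(0, 5, size=200)  # Trombone, Clarinet, Cello, Piano, Pure Tone

# "Raw EEG" features: flatten each epoch into one feature vector per trial.
X_raw = X.reshape(len(X), -1)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "GB": GradientBoostingClassifier(n_estimators=50, max_depth=2),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    # Chance level is ~0.20 for five balanced classes.
    scores = cross_val_score(pipe, X_raw, y, cv=3)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

On the random placeholder data above, all three classifiers hover around the 20% chance level; with real epoched EEG in place of `X` and `y`, the same loop reproduces the kind of classifier-by-feature comparison described in the abstract.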