Perception-Guided EEG Analysis: A Deep Learning Approach Inspired by Level of Detail (LOD) Theory

Abstract

Objective: This study explores a novel deep learning approach, inspired by Level of Detail (LOD) theory, for analyzing electroencephalogram (EEG) data and guiding human perceptual states. The core objective is to improve the accuracy of identifying perceptual states from EEG signals and to open new avenues for personalized psychological therapy.

Methods: The research uses portable EEG devices to collect data, which are analyzed together with music rhythm signals. LOD theory is introduced to dynamically adjust the processing levels of EEG signals, extracting core features related to perception. The software system is developed in the Unity engine, integrating audio materials and MIDI structures and connecting the EEG data stream to Unity. The deep learning model comprises a Convolutional Neural Network (CNN) for feature extraction and classification and a Deep Q-Network (DQN) that uses reinforcement learning to optimize music rhythm adjustment strategies.

Results: The CNN model achieved 94.05% accuracy on the perceptual state classification task, demonstrating strong classification performance. The DQN model guided subjects' EEG signals to the target perceptual state with a 92.45% success rate on the validation set, requiring an average of 13.2 rhythm cycles to complete the state guidance. However, subjective feedback indicated that only approximately 50% of participants experienced psychological sensations corresponding to the target state during rhythm adjustment, suggesting room for improvement in the system's effectiveness.

Discussion: The experimental results validate the potential of the LOD-based deep learning algorithm for EEG biofeedback and perceptual guidance. Despite these preliminary achievements, the study has limitations, including the single source of the dataset, the subjectivity of the labels, and the need to further optimize the dynamic adjustment mechanism in the reinforcement learning reward function. Future research will expand to more diverse subject groups, introduce a wider variety of musical elements, and explore more advanced reward functions to enhance the model's generalization ability and personalized experience.
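The abstract does not specify how LOD theory maps onto EEG processing. One way to read "dynamically adjusting the processing levels" is to vary the temporal resolution of each EEG window based on a cheap salience measure. The sketch below is purely illustrative: the salience heuristic (per-window variance), the thresholds, and the three downsampling levels are assumptions, not the authors' method.

```python
# Illustrative sketch of an LOD-style processing-level selector for EEG windows.
# The level of detail is chosen per window from a cheap salience measure (here,
# signal variance); thresholds and downsampling factors are assumed values.
import numpy as np

LOD_LEVELS = {
    "coarse": 8,   # heavy downsampling for low-salience windows
    "medium": 4,
    "fine": 1,     # full resolution for high-salience windows
}

def select_lod(window: np.ndarray, low: float = 0.5, high: float = 2.0) -> str:
    """Pick a level of detail from the per-window variance (salience proxy)."""
    salience = float(window.var())
    if salience < low:
        return "coarse"
    if salience < high:
        return "medium"
    return "fine"

def process_window(window: np.ndarray) -> np.ndarray:
    """Downsample a (channels, samples) window according to its LOD level."""
    factor = LOD_LEVELS[select_lod(window)]
    return window[:, ::factor]

# Example: a high-variance window is kept at full resolution.
eeg_window = np.random.randn(8, 512) * 2.0
features = process_window(eeg_window)
```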
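The article's CNN architecture is not described in the abstract. As a rough illustration of the kind of classifier it reports, the following PyTorch sketch assumes multi-channel EEG windows and a small set of perceptual-state labels; the channel count, window length, layer sizes, and number of classes are placeholders rather than details from the study.

```python
# Minimal sketch of a CNN classifier for perceptual states from EEG windows.
# Assumptions (not from the article): 8 EEG channels, 512-sample windows,
# and 4 perceptual-state classes.
import torch
import torch.nn as nn

class EEGPerceptionCNN(nn.Module):
    def __init__(self, n_channels: int = 8, n_samples: int = 512, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),                      # 512 -> 128 samples
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),                      # 128 -> 32 samples
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_samples // 16), 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),            # logits over perceptual states
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples)
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of EEG windows.
model = EEGPerceptionCNN()
logits = model(torch.randn(16, 8, 512))
predicted_state = logits.argmax(dim=1)
```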
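The DQN component that steers music rhythm toward a target perceptual state could take a form like the sketch below. The state encoding (CNN-derived class probabilities plus the current normalized tempo) and the action set (slow the rhythm, keep it, speed it up) are illustrative assumptions, not details given in the abstract.

```python
# Minimal DQN sketch for choosing rhythm adjustments from the EEG-derived state.
# Assumed state layout: [P(state_0..3) from the CNN, normalized current tempo];
# assumed actions: {0: slow rhythm down, 1: keep rhythm, 2: speed rhythm up}.
import random
import torch
import torch.nn as nn

class RhythmQNet(nn.Module):
    def __init__(self, state_dim: int = 5, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # one Q-value per rhythm adjustment
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

q_net, target_net = RhythmQNet(), RhythmQNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of the next rhythm adjustment."""
    if random.random() < epsilon:
        return random.randrange(3)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference update on a single observed transition."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        bootstrap = 0.0 if done else gamma * target_net(next_state).max().item()
        target = torch.tensor(reward + bootstrap, dtype=torch.float32)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A reward of this kind would typically be positive when the CNN-classified state moves toward the target and negative otherwise; the abstract notes that the dynamic adjustment mechanism of the reward function still needs optimization.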
