Decoding Temporally Encoded 3D Objects from Low-Cost Wearable Electroencephalography
Abstract
Decoding visual content from neural activity remains a central challenge at the intersection of engineering, neuroscience, and computational modeling. Prior work has primarily leveraged electroencephalography (EEG) with generative models to recover static images. In this study, we advance EEG-based decoding by introducing a temporal encoding framework that approximates dynamic object transformations across time. EEG recordings from healthy participants (n = 20) were used to model neural representations of objects presented in “initial” and “later” states. Individualized classifiers trained on time-specific EEG signatures achieved high discriminability, with Random Forest models reaching a mean accuracy of 92 ± 2% and a mean AUC-ROC of 0.87 ± 0.10 (mean ± SD), driven largely by gamma- and beta-band activity over frontal electrodes. These results confirm and extend evidence of strong interindividual variability, showing that subject-specific models outperform intersubject approaches in decoding temporally varying object representations. Beyond classification, we demonstrate that pairwise temporal encodings can be integrated into a generative pipeline to produce approximated reconstructions of short video sequences and 3D object renderings. Our findings establish that temporal EEG features, captured using low-cost open-source hardware, are sufficient to support the decoding of visual content across discrete time points, providing a versatile platform for potential applications in neural decoding, immersive media, and human–computer interaction.
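To make the classification step concrete, the sketch below illustrates one plausible within-subject pipeline consistent with the abstract: band-power features in the beta and gamma ranges are extracted per channel and fed to a Random Forest evaluated with cross-validated accuracy and AUC-ROC. The sampling rate, channel count, epoch length, and synthetic data are illustrative assumptions, not the study's actual acquisition or preprocessing settings.

```python
# Hypothetical sketch: subject-specific decoding of "initial" vs "later" object
# states from EEG band power. All acquisition parameters below are assumed
# placeholders, not values reported in the paper.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                        # sampling rate (Hz), assumed
n_epochs, n_channels, n_samples = 200, 8, 2 * fs  # 2 s epochs, assumed
bands = {"beta": (13, 30), "gamma": (30, 45)}   # bands highlighted in the abstract

# Placeholder EEG epochs and labels (0 = "initial" state, 1 = "later" state)
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_epochs)

def band_power_features(epochs, fs, bands):
    """Mean Welch band power per channel and band -> (n_epochs, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # average power within the band
    return np.concatenate(feats, axis=-1)

X = band_power_features(epochs, fs, bands)
clf = RandomForestClassifier(n_estimators=300, random_state=0)

# Within-subject evaluation: cross-validated accuracy and AUC-ROC
acc = cross_val_score(clf, X, labels, cv=5, scoring="accuracy")
auc = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
print(f"accuracy {acc.mean():.2f} ± {acc.std():.2f}, AUC {auc.mean():.2f} ± {auc.std():.2f}")
```

Training one such model per participant, rather than pooling epochs across subjects, mirrors the subject-specific modeling choice the abstract credits for the reported performance advantage.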