Reconstructing Temporally Encoded 3D Objects from Low-Cost Electroencephalography


Abstract

Turning imagination and memory into images and scenes has applications ranging from engineering to artistic expression. Electroencephalography (EEG) is a non-invasive technique for recording the brain’s electrical activity via scalp electrodes, accessible with low-cost headsets. Previous work used EEG to encode images with the assistance of a generative adversarial network (GAN), enabling EEG-based image reconstruction. Here, successive images encoding objects at separate temporal points were used to train a classification system. EEG data from healthy participants (N = 20) were used to encode images, each divided into an “initial state” and a “later state.” A modified “one versus rest” scheme using a random forest classifier was applied both offline and online. Compared with the intersubject model, the individualized models performed most reliably with gamma- and beta-band features on frontal electrodes, reaching a mean accuracy of 92 ± 4%, a mean F1 score of 0.64 ± 0.08, and a mean AUC-ROC of 0.87 ± 0.09. In line with prior literature, changes in spectral activity across the brain were also observed. The “paired” images of objects were converted into short films and 3D objects with the assistance of a ComfyUI pipeline. The system uses temporal encoding to capture dynamic object transformations, reliably reconstructing time-specific representations from EEG despite limitations, and demonstrates potential for scalable, real-time visual memory reconstruction in research, industry, and art.
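The classification stage described above (a “one versus rest” random forest over band-power features) can be sketched as follows. This is a minimal illustration assuming scikit-learn and synthetic placeholder features; the feature dimensions, channel counts, and class labels are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch of a one-vs-rest random forest classifier on EEG
# band-power features (e.g., beta/gamma power on frontal electrodes).
# All data here is synthetic; real trials would come from preprocessed EEG.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder: beta + gamma mean power on 4 frontal channels -> 8 features
# per trial; 3 hypothetical object classes, 150 trials.
n_trials, n_features, n_classes = 150, 8, 3
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, n_classes, size=n_trials)
X += y[:, None] * 0.8  # inject class-dependent structure so the toy task is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# One-vs-rest wrapper trains one binary random forest per class.
clf = OneVsRestClassifier(
    RandomForestClassifier(n_estimators=200, random_state=0)
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In practice, each binary sub-classifier distinguishes one object/state from all others, and per-class scores can be thresholded for online use.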
