Transcending ECoG Spatial Sampling Limits: Expanding Virtual Neural Landscapes with Generative AI for Motor Decoding


Abstract

Electrocorticography (ECoG)-based brain-computer interfaces (BCIs) offer high-fidelity motor control, yet their utility is fundamentally constrained by the surgical window. Clinical risks often restrict electrode placement, imposing a “spatial sampling limit” that degrades decoding performance. In this study, we present a generative AI framework based on Denoising Diffusion Probabilistic Models (DDPMs) that transcends these limitations by synthesizing physiologically plausible virtual ECoG signals in unrecorded cortical regions. By conditioning the generative process on recorded signals, our framework reconstructs the missing neural landscape. Intra-subject validation confirms that the proposed framework harnesses cross-regional dependencies manifested during movements to generate virtual signals that mirror their real neural counterparts. We then take an inter-subject transfer approach, using population-level inter-regional priors to synthesize virtual signals for new subjects. Incorporating these virtual signals into motor decoders consistently outperforms decoding based on implanted electrodes alone. Our results demonstrate that generative ECoG synthesis effectively expands the functional field of view of intracranial interfaces, providing a non-surgical pathway to robust ECoG-based BCIs.
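The abstract does not specify how the DDPM is conditioned on the recorded electrodes. A minimal sketch of one common approach, inpainting-style conditioning (clamping the recorded channels to appropriately noised observations at each reverse step), is shown below. All dimensions, the `toy_denoiser`, and the channel layout are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 "electrode" channels; the first 5 are recorded, and the
# last 3 are unrecorded ("virtual") channels to be synthesized.
n_channels, n_recorded, T = 8, 5, 50  # T = number of diffusion steps

# Linear beta schedule (a standard DDPM choice)
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x, t):
    """Placeholder for a trained noise-prediction network eps_theta(x_t, t).
    A real model would be learned from ECoG data; this stub just returns a
    scaled copy so the sampling loop runs end to end."""
    return 0.1 * x

def q_sample(x0, t):
    """Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def conditioned_reverse(observed):
    """Reverse (denoising) loop, clamping recorded channels to noised
    versions of the observed signal at the matching noise level."""
    x = rng.standard_normal(n_channels)  # start from pure noise
    for t in reversed(range(T)):
        eps_hat = toy_denoiser(x, t)
        # Standard DDPM posterior mean
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) \
               / np.sqrt(alphas[t])
        noise = rng.standard_normal(n_channels) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
        # Inpainting-style conditioning on the recorded channels
        x[:n_recorded] = q_sample(observed, t) if t > 0 else observed
    return x

observed = rng.standard_normal(n_recorded)  # stand-in for recorded features
full_landscape = conditioned_reverse(observed)
virtual = full_landscape[n_recorded:]  # synthesized unrecorded channels
```

In this sketch the recorded channels are fixed to their observations and only the unrecorded channels are free, so the learned cross-channel structure in the denoiser is what fills in the "virtual" electrodes, mirroring the cross-regional dependencies the framework exploits.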
