Human intention inference with a large language model can enhance brain-computer interface control: A proof-of-concept study
Abstract
Brain–computer interface (BCI) control enables direct communication between the brain and external devices. However, the accuracy of BCI control based on intentions inferred from unstable neural signals remains limited, even with data-driven approaches to tailor neural decoders. Here we propose a knowledge-driven framework for inferring human intentions, leveraging large language models (LLMs) that incorporate prior knowledge about human behavior and neural activity. We developed a neural decoder that integrates neural and oculomotor signals with contextual information using an LLM agent. Its feasibility was tested in a real-world BCI task involving interaction with a computer application. The LLM-based decoder achieved an average accuracy of 79% in inferring the intention to select arbitrary posts on a social networking service among the participants whose neural signals were responsive (11 out of 20). Ablation analyses revealed that the integration of contextual information, multimodal signals, and empirical knowledge is critical for decoding accuracy. This study demonstrates the feasibility of a neural decoding framework using an LLM, paving the way for improved BCI-driven operation of external devices by individuals with disabilities.
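The abstract does not specify the exact prompting scheme or model used; the following minimal Python sketch is only an illustration of the general idea of combining neural and oculomotor features with on-screen context in a prompt and asking an LLM to infer the intended selection. The feature names, values, and the `query_llm` placeholder are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch of LLM-based intention inference from multimodal signals.
# Feature names, values, and query_llm() are illustrative placeholders.

def build_prompt(neural_features, gaze_features, screen_context):
    """Combine neural and eye-tracking summaries with screen context into one prompt."""
    return (
        "You are inferring whether a user intends to select a post in a "
        "social networking app.\n"
        f"Neural signal summary: {neural_features}\n"
        f"Eye-tracking summary: {gaze_features}\n"
        f"Visible posts on screen: {screen_context}\n"
        "Answer with the index of the post the user intends to select, "
        "or 'none' if no selection is intended."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an actual LLM API (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("Replace with a real LLM client call.")

if __name__ == "__main__":
    prompt = build_prompt(
        neural_features={"motor_imagery_power": 0.72, "attention_index": 0.81},
        gaze_features={"fixated_post": 2, "fixation_duration_ms": 950},
        screen_context=["Post 1: concert photos", "Post 2: job announcement",
                        "Post 3: news article"],
    )
    print(prompt)  # In practice, pass this to query_llm() and parse the reply.
```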
Highlights
- Large language models can infer human intent by integrating neural and oculomotor signals with screen context.
- The proposed model outperforms conventional data-driven approaches in decoding accuracy.
- Ablation analyses reveal that integrating contextual information, multimodal signals, and empirical knowledge is critical for accurate decoding.