Large Language Models Integrated into Brain-Computer Interfaces for Communication and Control: A Systematic Review
Abstract
The integration of Large Language Models (LLMs) into Brain-Computer Interfaces (BCIs) represents a paradigm shift in neural prosthetics, offering unprecedented opportunities to enhance communication rates, accuracy, and user experience. This systematic review synthesizes recent evidence on LLM-assisted BCIs for communication and control. Following PRISMA guidelines, we extracted data from nine included studies covering diverse BCI modalities (P300, SSVEP, cVEP, passive, auditory) and LLM integration paradigms, including autocomplete, post-edit correction, intent expansion, dynamic UI generation, and affective support. We detail the methodological pipelines, synthesize the quantitative outcomes (such as changes in spelling accuracy, Information Transfer Rate, and Keystroke Savings), and conduct a rigorous evaluation using established Risk of Bias tools. Results indicate that LLM integration consistently improves effective communication speed and reduces user fatigue, though the reliance on remote APIs introduces new constraints related to latency, privacy, and safety (e.g., hallucinations and unprompted corrections). We propose a taxonomy of BCI-LLM integration patterns and discuss the gap between simulated offline enhancements and real-time clinical validation. Finally, we provide standardized reporting recommendations and outline a concrete research agenda to guide the development of secure, personalized, and clinically viable LLM-assisted BCI systems.
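For orientation, the two throughput metrics named in the abstract are conventionally defined as follows; this is the standard Wolpaw formulation of ITR and the usual keystroke-savings definition from the text-entry literature, given here as a reference point rather than as notation taken from the reviewed studies:

B = \log_2 N + P \log_2 P + (1 - P)\,\log_2\!\frac{1 - P}{N - 1}, \qquad \mathrm{ITR} = \frac{60}{T}\, B \ \text{bits/min},

where N is the number of selectable targets, P the single-selection classification accuracy, and T the time per selection in seconds. Keystroke Savings compares the selections required to produce the same text with and without assistance:

\mathrm{KS} = \frac{K_{\mathrm{unaided}} - K_{\mathrm{assisted}}}{K_{\mathrm{unaided}}} \times 100\%.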