Connecting a P300 speller to a large language model

Abstract

The advent of large language models (LLMs) offers a transformative approach to improving the performance of brain-computer interface (BCI) spellers. We propose a novel framework that leverages the contextual understanding of LLMs to compensate for imperfect BCI decoding. Using existing P300 speller data, we simulated a system in which users select letters to form words, generating text with characteristic spelling errors. This output is then processed by an LLM, which corrects the errors, a task that becomes markedly more effective when the model considers full-sentence context. Our findings suggest that this synergy can accelerate communication rates by relaxing the need for high single-character accuracy. Beyond speed, integrating an LLM transforms the BCI into an intelligent agent capable of acting as a discussant and assistant, thereby enriching the user experience.
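To make the described pipeline concrete, below is a minimal Python sketch of the simulation step: each selected character is corrupted according to a per-character accuracy, with errors drawn from symbols sharing the target's row or column (mimicking a single wrong row/column flash decision), and the noisy string is then wrapped in a sentence-level correction prompt for an LLM. The 6x6 matrix layout, the error model, and the prompt wording are illustrative assumptions, not the paper's exact implementation, and the actual LLM call is left as a placeholder.

```python
import random

# A standard 6x6 P300 speller matrix (assumed layout; implementations vary).
MATRIX = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def confusable(ch: str, rng: random.Random) -> str:
    """Return a plausible misclassification: a symbol sharing the target's
    row or column, mimicking one wrong row/column flash decision."""
    for r, row in enumerate(MATRIX):
        c = row.find(ch)
        if c >= 0:
            candidates = [s for s in row if s != ch]
            candidates += [MATRIX[r2][c] for r2 in range(len(MATRIX)) if r2 != r]
            return rng.choice(candidates)
    return ch  # character not on the matrix; leave it unchanged

def simulate_p300_output(text: str, accuracy: float = 0.85, seed: int = 0) -> str:
    """Corrupt each character independently with probability 1 - accuracy,
    a crude stand-in for imperfect P300 decoding."""
    rng = random.Random(seed)
    out = []
    for ch in text.upper():
        ch = "_" if ch == " " else ch  # spellers often map space to '_'
        out.append(ch if rng.random() < accuracy else confusable(ch, rng))
    return "".join(out)

def build_correction_prompt(noisy: str) -> str:
    """Sentence-level prompt: full-sentence context is what lets the LLM
    disambiguate single-character errors."""
    return (
        "The text below was typed with a noisy P300 BCI speller and may "
        "contain character-level errors ('_' stands for space). "
        f"Reconstruct the intended sentence:\n\n{noisy}"
    )

if __name__ == "__main__":
    noisy = simulate_p300_output("please bring me a glass of water", accuracy=0.8)
    print("noisy input :", noisy)
    print(build_correction_prompt(noisy))
    # An actual chat-completion call to any LLM provider would go here;
    # the abstract does not prescribe a specific model or API.
```

Lowering the `accuracy` parameter in this sketch is the knob the abstract alludes to: if LLM post-correction recovers the intended sentence reliably, the speller can trade single-character accuracy for faster selections.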
