How Foundation Models Are Reshaping Non-Invasive Brain–Computer Interfaces: A Case for Novel Human Expression and Alignment

Abstract

SYNAPTICON is a research prototype at the intersection of neuro-hacking, non-invasive brain-computer interfaces (BCIs), and foundation models, probing new territories of human expression, neuroaesthetics, and AI alignment. Envisioning a cognitive “Panopticon” where biological and advanced synthetic intelligent systems converge, it implements a pipeline that couples temporal neural dynamics with pretrained language models and operationalizes them in a closed loop for expression. At its core lies a live “Brain Waves-to-Natural Language-to-Aesthetics” system that translates neural signals (i.e., electroencephalography (EEG)) into decoded speech and then into immersive audiovisual content, shaping altered perceptual experiences and inviting audiences to engage directly with the user’s mind. SYNAPTICON provides a reproducible reference for foundation-model-assisted BCIs, suitable for advanced studies of human–machine interaction (HMI).
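
To make the closed-loop architecture concrete, the following is a minimal sketch of the “Brain Waves-to-Natural Language-to-Aesthetics” stages as described in the abstract. All function names, the band-power featurization, and the prompt mapping are illustrative assumptions, not SYNAPTICON’s published interfaces; the language model and audiovisual renderer are stubs standing in for whatever pretrained foundation model and generative media system an implementation would use.

```python
"""Sketch of an EEG -> natural language -> audiovisual closed loop.

All names here (acquire_eeg_window, language_model, render_audiovisual)
are hypothetical placeholders, since the abstract does not specify
SYNAPTICON's actual components.
"""
import numpy as np

SAMPLE_RATE_HZ = 256          # assumed consumer-EEG sampling rate
WINDOW_SECONDS = 2.0
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def acquire_eeg_window(n_channels: int = 8) -> np.ndarray:
    """Stand-in for a live EEG stream; returns synthetic (channels, samples)."""
    n_samples = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)
    return np.random.randn(n_channels, n_samples)


def band_powers(window: np.ndarray) -> dict[str, float]:
    """Mean spectral power per canonical EEG band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / SAMPLE_RATE_HZ)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    return {
        name: float(psd[:, (freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }


def neural_state_to_prompt(powers: dict[str, float]) -> str:
    """Map band-power features to a text prompt for a pretrained LM.
    This mapping is illustrative, not the decoding model in the paper."""
    dominant = max(powers, key=powers.get)
    return f"Describe an immersive scene evoked by a {dominant}-dominant state."


def language_model(prompt: str) -> str:
    """Placeholder for any pretrained foundation-model text API."""
    return f"[LM output conditioned on: {prompt}]"


def render_audiovisual(text: str) -> None:
    """Placeholder for the aesthetics stage (e.g. generative audio/visuals)."""
    print("rendering:", text)


def closed_loop(n_iterations: int = 3) -> None:
    """One pass per window: EEG -> features -> language -> audiovisual output.
    The rendered output alters the user's perception, closing the loop."""
    for _ in range(n_iterations):
        window = acquire_eeg_window()
        text = language_model(neural_state_to_prompt(band_powers(window)))
        render_audiovisual(text)


if __name__ == "__main__":
    closed_loop()
```

The design point illustrated is the coupling itself: continuous neural features are re-expressed as text so that an off-the-shelf foundation model, rather than a task-specific decoder alone, mediates the translation into audiovisual output.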