Remote Optical Decoding of Inner Speech in Broca’s Area via AI-based Speckle Pattern Analysis
Abstract
Over 20 million people worldwide living with stroke or Parkinson's disease experience speech loss, yet surgical risks or contact sensitivities exclude many of them from current brain-computer interfaces (BCIs). Here, we report a contactless approach for decoding binary inner speech ("yes" versus "no") from laser speckle-pattern dynamics recorded over Broca's area. Using deep learning on millions of speckle-pattern video frames from 10 healthy volunteers, classifiers achieved a mean AUC of 0.97 and an accuracy of 95.7% for 40-ms inputs (10-fold cross-validation on 3,180 seconds of balanced recordings), with minimal calibration (one minute per subject). This work demonstrates that representations learned with a long-video masked autoencoder (LV-MAE) are effective for speckle-based cortical brain imaging, establishing the feasibility of applying self-supervised long-video architectures to this modality. Control recordings over the forehead yielded below-chance separability on the control validation set, consistent with the absence of a stable discriminative signal in the control region and with a cortical contribution to the decoded signal. These results demonstrate proof of concept for contactless binary inner speech decoding in healthy volunteers. Translation to real-world BCI applications will require clinical validation in patient populations and extension beyond a binary vocabulary.
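The evaluation protocol described above (binary classification of short clips, scored by mean AUC under 10-fold cross-validation on balanced data) can be illustrated with a minimal sketch. Everything here is a placeholder assumption, not the authors' pipeline: synthetic feature vectors stand in for LV-MAE embeddings of 40-ms speckle clips, and a simple nearest-centroid scorer stands in for the paper's deep classifiers.

```python
# Hypothetical sketch of the cross-validated AUC protocol.
# Synthetic features replace real speckle-video embeddings.
import numpy as np

rng = np.random.default_rng(0)

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney rank-sum statistic (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def nearest_centroid_scores(X_train, y_train, X_test):
    """Score = distance to the 'no' centroid minus distance to the 'yes' centroid."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return (np.linalg.norm(X_test - c0, axis=1)
            - np.linalg.norm(X_test - c1, axis=1))

# Balanced synthetic "clips": 500 per class, 32-dim placeholder embeddings.
n, d = 1000, 32
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d)) + y[:, None] * 0.8

# 10-fold cross-validation with shuffled fold assignments.
folds = np.arange(n) % 10
rng.shuffle(folds)
aucs = []
for k in range(10):
    train, test = folds != k, folds == k
    s = nearest_centroid_scores(X[train], y[train], X[test])
    aucs.append(auc_score(y[test], s))

print(f"mean AUC over 10 folds = {np.mean(aucs):.3f}")
```

The fold-wise AUCs are averaged, matching the "mean AUC" reporting convention in the abstract; with real data, the classifier and features would of course be far richer than this sketch.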