Establishing a Real-Time Biomarker-to-LLM Interface: A Modular Pipeline for HRV Signal Acquisition, Processing, and Physiological State Interpretation via Generative AI
Abstract
Large language models can summarize research, generate clinical reasoning, and carry on convincing conversations. Yet for all their linguistic power, they rely entirely on what we tell them: subjective reports, delayed inputs, and filtered impressions. If we want them to become true partners in learning, decision-making, or care, they need something more: biosignals, not just words.

We therefore present a streamlined architecture for routing real-time heart rate variability (HRV) data from a wearable sensor directly into a generative AI environment. Using a validated HRV sensor, we decoded Bluetooth-transmitted R-R intervals with a custom Python script and derived core HRV metrics (HR, RMSSD, SDNN, LF/HF ratio, pNN50) in real time. These values were published via REST and WebSocket endpoints through a FastAPI backend, making them continuously accessible to external applications, including OpenAI's GPT models.

The result is a live data pipeline from autonomic input to conversational output: a language model that doesn't just talk back, but responds to real-time physiological shifts in natural language. In multiple proof-of-concept scenarios, ChatGPT accessed real-time HRV data, performed descriptive analyses, generated visualizations, and adapted its feedback in response to autonomic shifts induced by low and high cognitive load. This system marks an early prototype for bioadaptive AI, in which your body becomes part of the prompt.
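To make the processing and publishing steps concrete, the following is a minimal sketch of how incoming R-R intervals (in milliseconds) can be turned into time-domain HRV metrics and exposed over REST and WebSocket with FastAPI. The endpoint paths (/hrv, /ws), the rolling buffer size, and the one-second push interval are illustrative assumptions rather than the exact implementation used here, and LF/HF estimation, which requires spectral analysis of the interpolated R-R series, is omitted.

```python
# Minimal sketch of the metric-computation and publishing layer. Assumptions:
# R-R intervals arrive in milliseconds from a separate Bluetooth decoding
# script; endpoint paths, buffer size, and push interval are illustrative.
import asyncio
from collections import deque
from typing import Dict

import numpy as np
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

# Rolling buffer of the most recent R-R intervals (ms); in the full pipeline
# this is filled by the Bluetooth decoder as beats arrive.
rr_buffer: deque = deque(maxlen=300)


def compute_hrv(rr_ms: np.ndarray) -> Dict[str, float]:
    """Core time-domain HRV metrics from an array of R-R intervals in ms."""
    diffs = np.diff(rr_ms)
    return {
        "hr_bpm": 60000.0 / float(np.mean(rr_ms)),              # mean heart rate
        "rmssd_ms": float(np.sqrt(np.mean(diffs ** 2))),        # beat-to-beat variability
        "sdnn_ms": float(np.std(rr_ms, ddof=1)),                # overall variability
        "pnn50_pct": float(np.mean(np.abs(diffs) > 50) * 100),  # % successive diffs > 50 ms
        # LF/HF requires spectral analysis of the interpolated R-R series
        # and is left out of this sketch.
    }


@app.get("/hrv")
def get_hrv() -> Dict[str, float]:
    """REST endpoint: return the latest HRV metrics as JSON."""
    if len(rr_buffer) < 3:
        return {}
    return compute_hrv(np.asarray(rr_buffer, dtype=float))


@app.websocket("/ws")
async def stream_hrv(websocket: WebSocket) -> None:
    """WebSocket endpoint: push updated metrics roughly once per second."""
    await websocket.accept()
    try:
        while True:
            if len(rr_buffer) >= 3:
                await websocket.send_json(
                    compute_hrv(np.asarray(rr_buffer, dtype=float))
                )
            await asyncio.sleep(1.0)
    except (WebSocketDisconnect, RuntimeError):
        pass  # stop streaming once the client disconnects
```

Served with a standard ASGI server (for example, uvicorn), such a backend gives any external consumer, including a GPT-based client, continuous access to the same metrics described above, either by polling the REST endpoint or by subscribing to the WebSocket stream.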