Adaptive Memory for LLM-Based Time Series Analysis


Abstract

Static models trained on historical data fail silently when underlying market dynamics shift, a phenomenon known as concept drift. We investigate whether large language models (LLMs) equipped with structured adaptive memory can detect and adapt to regime changes in financial time series. Using seven years of hourly Bitcoin OHLCV data (2017–2024) across six labeled market regimes, we benchmark four memory architectures (regime context injection, news-weighted memory, cosine similarity-based historical matching, and rolling self-feedback) against an LSTM baseline and a memory-free LLM. For 24-hour price direction prediction, all methods perform near chance (49–51% accuracy), confirming that short-term Bitcoin forecasting remains an open challenge regardless of model architecture. For regime change detection, the primary contribution of this work, the LLM identifies 3 of 6 ground-truth transitions (50%) with a 0% false positive rate and generates structured evidence for each detection, a capability absent from all statistical baselines (CUSUM: 83% detection but no explanations; BinSeg: 33%; Bollinger Bands: 17%). We release all code, data, and prompts to enable full reproducibility. Our findings indicate that LLMs contribute not through superior predictive accuracy, but through explainable drift attribution, a qualitative advantage with practical implications for high-stakes decision-making.
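The abstract does not specify how its CUSUM baseline is configured. As an illustration of the kind of statistical change detector it refers to, the sketch below implements a standard two-sided CUSUM on a synthetic series with a mean shift; the function name, thresholds, and toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cusum_detect(x, target_mean, sigma, threshold=4.0, drift=1.0):
    """Two-sided CUSUM change detector on a 1-D series.

    Accumulates standardized deviations from target_mean and flags an
    index whenever the positive or negative cumulative sum exceeds
    `threshold`; `drift` is the per-step allowance that suppresses
    alarms under in-regime noise. (Parameter values are illustrative.)
    """
    g_pos = g_neg = 0.0
    alarms = []
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        z = (xi - target_mean) / sigma
        g_pos = max(0.0, g_pos + z - drift)   # tracks upward shifts
        g_neg = max(0.0, g_neg - z - drift)   # tracks downward shifts
        if g_pos > threshold or g_neg > threshold:
            alarms.append(i)
            g_pos = g_neg = 0.0               # restart after an alarm
    return alarms

# Toy series: a regime shift (mean jump from 0 to 3) at index 200.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
alarms = cusum_detect(series, target_mean=0.0, sigma=1.0)
```

On this toy series the first alarm fires within a few samples of the true shift at index 200, which illustrates the abstract's point: CUSUM localizes a change quickly but emits only an index, with no structured evidence about what drove the shift.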
