Fine-tuned large language models enhance influenza forecasting
Abstract
Influenza-like illness (ILI) continues to pose significant challenges to global health, underscoring the need for accurate forecasting to guide timely public health responses. Traditional statistical and deep learning models, though widely applied, often struggle to capture complex nonlinear dynamics and to cope with data scarcity. This study examines the potential of fine-tuned large language models (LLMs), including Llama 2 and GPT-2, for multi-step influenza forecasting. A specialized fine-tuning framework is introduced, incorporating custom embeddings and a prediction block, and evaluated on seven real-world ILI surveillance datasets. Extensive benchmarking against SARIMA, LSTM, and PatchTST demonstrates that fine-tuned LLMs consistently deliver superior accuracy and stability, with particular advantages in long-horizon forecasts. Pre-trained LLMs capture broad temporal patterns in zero-shot settings, performing comparably to SARIMA, and gain substantial precision through fine-tuning. These results establish fine-tuned LLMs as practical and robust tools for influenza forecasting, offering new opportunities to strengthen epidemic modeling in data-limited public health environments.
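The framework described above, pairing a pretrained backbone with custom embeddings and a prediction block, can be sketched in broad strokes. This is a minimal illustrative assumption, not the paper's implementation: the class name `ILIForecaster`, the layer sizes, and the use of a generic `nn.TransformerEncoder` as a stand-in for the pretrained GPT-2/Llama 2 backbone are all hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class ILIForecaster(nn.Module):
    """Hypothetical sketch of the abstract's fine-tuning framework:
    a custom embedding feeds an LLM-style backbone, and a prediction
    block maps the final hidden state to a multi-step forecast."""

    def __init__(self, d_model=64, horizon=4):
        super().__init__()
        # Custom embedding: project scalar weekly ILI rates into model space.
        self.embed = nn.Linear(1, d_model)
        # Stand-in backbone; the paper fine-tunes pretrained GPT-2 / Llama 2
        # weights here rather than training a small transformer from scratch.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Prediction block: one linear head emitting the whole horizon at once.
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):
        # x: (batch, context_weeks, 1) -> forecasts: (batch, horizon)
        h = self.backbone(self.embed(x))
        return self.head(h[:, -1])

model = ILIForecaster()
forecast = model(torch.randn(8, 52, 1))  # 52 weeks of context per series
print(tuple(forecast.shape))             # (8, 4): 4-step-ahead forecasts
```

In a real fine-tuning run, the backbone would be initialized from pretrained weights (and possibly partially frozen), while the embedding and prediction block are trained on the ILI surveillance series.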