Utilizing Pretrained Vision Transformers and Large Language Models for Epileptic Seizure Prediction
Abstract
Repeated unprovoked seizures are a major source of concern for patients suffering from epilepsy. Predicting seizures before they occur is of interest to both machine-learning scientists and clinicians, and is an active area of research. The variability of EEG sensors, the diversity of seizure types, and the specialized knowledge required to annotate the data complicate the large-scale annotation process essential for supervised predictive models. To address these challenges, we propose the use of Vision Transformers (ViTs) and Large Language Models (LLMs) that were originally trained on publicly available image or text data. Our work leverages these pre-trained models by refining the input, embedding, and classification layers in a minimalistic fashion to predict seizures. Our results demonstrate that LLMs outperform ViTs in patient-independent seizure prediction, achieving a sensitivity of 79.02%, which is 8% higher than the ViTs and about 12% higher than a custom-designed ResNet-based model. Our work demonstrates the feasibility of pre-trained models for seizure prediction and their potential for improving the quality of life of people with epilepsy. Our code and related materials are available open-source at: https://github.com/pcdslab/UtilLLM_EPS
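The adaptation strategy described above (keeping a pre-trained backbone frozen while replacing only the input embedding and classification layers) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the backbone here is a stand-in for a pre-trained ViT or LLM encoder, and all layer names, dimensions, and the 22-channel EEG window shape are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class SeizurePredictor(nn.Module):
    """Hypothetical sketch: wrap a frozen pre-trained encoder with new
    trainable input-embedding and classification layers for EEG windows."""

    def __init__(self, backbone: nn.Module, d_model: int = 64,
                 n_channels: int = 22, window: int = 256):
        super().__init__()
        # New trainable input layer: projects a raw EEG window into the
        # backbone's embedding space (replacing the image/text embedding).
        self.embed = nn.Linear(n_channels * window, d_model)
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # New trainable head: binary pre-ictal vs. inter-ictal prediction.
        self.head = nn.Linear(d_model, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window)
        z = self.embed(x.flatten(1))
        z = self.backbone(z)
        return self.head(z)

# Stand-in for a pre-trained encoder (in practice, a ViT or LLM body).
backbone = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
model = SeizurePredictor(backbone)

x = torch.randn(4, 22, 256)   # a batch of 4 EEG windows
logits = model(x)             # shape: (4, 2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
```

Only the `embed` and `head` parameters receive gradient updates, which keeps the fine-tuning footprint small relative to the frozen backbone, consistent with the "minimalistic" refinement the abstract describes.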