ARES-SI: Adaptive Reinforcement-Enhanced Sampling for Suicidal Ideation

Abstract

Suicidal ideation (SI) fluctuates rapidly within individuals, with approximately 50% of its variance occurring within-person. Standard ecological momentary assessment (EMA) protocols use fixed schedules that fail to capture these rapid fluctuations, missing critical risk windows and opportunities for intervention. Here, we present ARES-SI (Adaptive Reinforcement-Enhanced Sampling for Suicidal Ideation), a reinforcement learning system that personalizes EMA timing based on individual risk patterns. ARES-SI combines random forest models to predict active SI probability and non-response likelihood with a Q-learning algorithm to determine optimal assessment intervals (30 minutes for high-risk states, 3 hours for low-risk states, and next day to reduce burden). We trained the system using EMA data from 98 participants across three studies with varied sampling frequencies (4–15 assessments per day; 2,359 positive active SI responses) over 28-day protocols. In validation testing (N = 36; 353 positive active SI responses), the random forest models achieved an AUC of 0.865 for active SI prediction and 0.750 for non-response prediction. ARES-SI demonstrated 65% higher action selection accuracy than random scheduling (0.781 vs. 0.474; macro F1 = 0.650 vs. 0.377), with superior detection of high-risk windows (recall = 0.810 for 30-minute assessments vs. 0.392 for random) and efficient low-risk scheduling (precision = 0.972 for 3-hour assessments vs. 0.876 for random). These findings demonstrate that reinforcement learning can identify when individuals are most vulnerable, providing the temporal precision needed for just-in-time adaptive interventions in suicide prevention.
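The scheduling idea described above (random forest risk estimates feeding a Q-learning policy that picks one of three assessment intervals) can be sketched as tabular Q-learning over discretized risk states. This is an illustrative reconstruction, not the paper's implementation: the state discretization, reward function, and all hyperparameters below are assumptions chosen only to show the mechanism.

```python
import random

# Hedged sketch of the ARES-SI scheduling idea: tabular Q-learning over
# discretized risk states. State, reward, and hyperparameters are
# illustrative assumptions, not the paper's specification.

ACTIONS = ["30_min", "3_hours", "next_day"]  # intervals named in the abstract

def discretize(si_prob, nonresponse_prob, bins=4):
    """Map the two random-forest outputs to a discrete state (assumed design)."""
    return (min(int(si_prob * bins), bins - 1),
            min(int(nonresponse_prob * bins), bins - 1))

def reward(si_prob, nonresponse_prob, action):
    """Hypothetical reward: frequent sampling pays off when SI risk is high,
    sparse sampling when risk is low; likely non-response is penalized."""
    if action == "30_min":
        return si_prob - 0.5 * nonresponse_prob
    if action == "3_hours":
        return 0.5 - abs(si_prob - 0.3)
    return (1.0 - si_prob) - 0.2  # next_day: lowest burden, best at low risk

def train_q(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning over simulated risk-model outputs."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        si, nr = rng.random(), rng.random()  # stand-in for model predictions
        state = discretize(si, nr)
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        r = reward(si, nr, action)
        best_next = max(q.get((state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (r + gamma * best_next - old)
    return q

def choose_interval(q, si_prob, nonresponse_prob):
    """Greedy policy: pick the highest-valued interval for the current state."""
    state = discretize(si_prob, nonresponse_prob)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
```

Under this toy reward, the learned policy reproduces the behavior the abstract describes: high-risk states map to the 30-minute interval and low-risk states to next-day assessment, with the 3-hour option covering the middle ground.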
