Modular Meta-Evolutionary AI Architecture Enables Interpretable Stratification in Heterogeneous Clinical Trials
Abstract
Foundation models are poorly matched to small, heterogeneous clinical datasets where transparency and auditability are required. We describe a modular, meta-evolutionary architecture that couples an interpretable dynamical-systems learner (NetraAI), which uses a long-range memory (LRM) mechanism to identify stable, outcome-linked Model-Derived Subgroups (MDS) and to abstain when evidence is insufficient, with a literature-grounded large language model (LLM) Strategist used for structured scientific critique and robustness testing. Across three datasets (CATIE schizophrenia, olanzapine vs perphenazine; CAN-BIND depression, escitalopram response; and COMPASS pancreatic cancer, GnP vs FOLFIRINOX), standard classifiers achieved near-chance whole-cohort prediction (AUC 0.51-0.57), whereas NetraAI identified compact 2-4 variable MDS that yielded high discrimination among high-confidence patients (AUC up to ~1.0), including a 3-SNV signature with high C-for-benefit (0.92) for regimen selection in PDAC. Combining dynamical-systems subgroup discovery with LLM-guided scientific critique embodies the AI modularity hypothesis, showing how distinct computational models can jointly transform small, heterogeneous clinical datasets into concise, interpretable patient stratifications.