Large Language Models for Accurate Mental Health Screening: Identifying Clinical Phenotypes in ED Healthcare Workers
Abstract
The emergency department (ED) is crucial to the healthcare system. ED healthcare workers (HCWs) play a vital role in providing essential healthcare under demanding conditions. Traditional mental health assessment tools often rely on clinical interviews and psychometric assessments, which can be time-consuming, costly, and subject to biases. We aimed to identify heterogeneous clinical profiles using a person-centered clustering approach based on burnout, depression, and PTSD symptomatology. We additionally investigated whether these phenotypes could be predicted from narratives about work-related future expectations using a Large Language Model (LLM). Based on n = 199 ED HCWs from an ongoing NIH-funded study (R01HL156134), k-means clustering revealed a High-Symptom and a Low-Symptom phenotype with significantly different symptom severity levels. Zero-shot LLM prompt engineering accurately predicted these clinical phenotypes from work-related narratives (accuracy = 70.9%; F1-score = 71.8%; sensitivity = 77.1%), drawing on key domain-specific indicators identified in the LLM’s reasoning. Our approach applies LLMs to unstructured narratives, revealing subtle but meaningful variations in symptomatology and offering an objective, time-efficient alternative that enhances early risk stratification and fosters a stigma-free environment for mental health assessment in high-stress healthcare settings. Future research should incorporate indicators such as risk factors and symptom dynamics to refine this tool for scalable, person-centered mental health monitoring across diverse healthcare settings.
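As a rough illustration of the two-step pipeline summarized above (k-means phenotyping on symptom scores, followed by zero-shot LLM classification of work-related narratives), the Python sketch below uses a hypothetical input file, hypothetical column names, and a placeholder `call_llm` helper; it is an assumption-laden sketch, not the authors' implementation.

```python
# Minimal sketch (not the study's code): hypothetical data layout and a
# placeholder LLM call illustrating the two-step screening pipeline.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, f1_score, recall_score

# --- Step 1: person-centered clustering on symptom scores -------------------
# Assumed columns: total scores for burnout, depression, and PTSD symptoms.
df = pd.read_csv("ed_hcw_symptoms.csv")  # hypothetical file
X = StandardScaler().fit_transform(df[["burnout", "depression", "ptsd"]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Label the cluster with the higher mean standardized score as "High-Symptom".
high_cluster = np.argmax([X[kmeans.labels_ == c].mean() for c in (0, 1)])
df["phenotype"] = np.where(kmeans.labels_ == high_cluster, "High", "Low")

# --- Step 2: zero-shot LLM classification of work-related narratives --------
PROMPT = (
    "Based only on the following narrative about work-related future "
    "expectations, classify the writer as 'High' or 'Low' symptom phenotype "
    "(burnout, depression, PTSD) and briefly explain the key indicators.\n\n"
    "Narrative:\n{narrative}"
)

def classify_narrative(narrative: str) -> str:
    """Placeholder zero-shot call; swap in your LLM provider's API."""
    response = call_llm(PROMPT.format(narrative=narrative))  # hypothetical helper
    return "High" if "high" in response.lower() else "Low"

# --- Step 3: evaluate predictions against clustering-derived labels ---------
df["prediction"] = df["narrative"].apply(classify_narrative)
y_true = (df["phenotype"] == "High").astype(int)
y_pred = (df["prediction"] == "High").astype(int)
print("accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))
```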