Clinical Insights from Social Media: Assessing Summaries of Large Language Models and Humans

Abstract

Social media platforms offer a rich but complex view of individuals' mental states. Leveraging these data for clinical monitoring requires tools that can distill vast social media timelines into concise, clinically relevant insights. This work investigates the potential of large language models (LLMs) to generate such summaries, offering a scalable approach to mental health monitoring.

We first propose a hybrid modeling framework that combines a hierarchical variational autoencoder (VAE) with an LLM, pushing model performance beyond a naive LLM prompting approach. The VAE first creates a temporally aware, first-person abstractive summary of the user's timeline. Specialized clinical prompts then guide the LLM to transform this summary into an integrative, third-person clinical narrative.

We conduct a thorough assessment of the model-generated summaries, comparing them to summaries written by clinicians for 30 social media timelines. Our evaluation employs human ratings of factual accuracy, meaning preservation, and clinical usefulness, alongside a qualitative analysis by clinical experts. We also use automatic metrics to assess summary diversity as a proxy for personalization; that is, the system's ability to capture individual idiosyncrasies.

The findings reveal that while current LLMs show promise in generating factually consistent and informative summaries, and may exhibit greater comprehensiveness than human-written summaries, they struggle to capture nuanced psychological understanding, provide accurate mental state assessments, and achieve the same level of personalization as human clinicians.
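The two-stage framework described in the abstract can be pictured roughly as follows. This is a minimal sketch, not the authors' implementation: `vae_summarize` is a trivial placeholder for the trained hierarchical VAE (whose architecture is not given here), the prompt wording is illustrative rather than the paper's, and `complete` stands in for a generic LLM text-completion call.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    timestamp: str  # ISO date of the social media post
    text: str


def vae_summarize(timeline: List[Post]) -> str:
    # Placeholder for stage 1: the paper's hierarchical VAE produces a
    # temporally-aware, first-person abstractive summary. Here we simply
    # join posts in time order so the pipeline runs end to end.
    ordered = sorted(timeline, key=lambda p: p.timestamp)
    return " ".join(p.text for p in ordered)


# Illustrative clinical prompt; the paper's actual prompts are not shown here.
CLINICAL_PROMPT = (
    "You are a clinician reviewing a patient's first-person account.\n"
    "Rewrite it as an integrative, third-person clinical narrative,\n"
    "noting mood, risk indicators, and changes over time.\n\n"
    "Account:\n{summary}\n"
)


def clinical_narrative(timeline: List[Post],
                       complete: Callable[[str], str]) -> str:
    """Two-stage summary: VAE first-person draft, then LLM clinical rewrite."""
    first_person = vae_summarize(timeline)                           # stage 1
    return complete(CLINICAL_PROMPT.format(summary=first_person))   # stage 2


# Usage with a dummy completion function that just echoes its prompt:
posts = [Post("2024-01-02", "barely slept again"),
         Post("2024-01-10", "skipped work, no energy")]
print(clinical_narrative(posts, complete=lambda prompt: prompt))
```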
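The abstract does not name the automatic diversity metrics used. One common choice for lexical diversity across generated texts is distinct-n; the sketch below, assuming that convention, computes the fraction of unique n-grams across a set of summaries, where low scores suggest template-like, less personalized output.

```python
from typing import List


def distinct_n(summaries: List[str], n: int = 2) -> float:
    """Fraction of unique n-grams across all summaries (distinct-n).

    Higher values indicate more varied wording across users, the kind of
    diversity the paper treats as a proxy for personalization.
    """
    all_ngrams = []
    for text in summaries:
        tokens = text.lower().split()
        all_ngrams.extend(
            tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)
        )
    if not all_ngrams:
        return 0.0
    return len(set(all_ngrams)) / len(all_ngrams)


# Example: near-identical summaries yield a low distinct-2 score.
print(distinct_n([
    "the user reports low mood and poor sleep",
    "the user reports low mood and social withdrawal",
]))
```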
