Biased Bots: An Empirical Demonstration of How AI Bias Could Compromise Mental Healthcare
Abstract
Background: The proliferation of artificial intelligence (AI) applications for mental health has accelerated in recent years and shows promise to increase the reach, scope, and impact of mental healthcare. However, biases in algorithms designed to assess and treat mental health problems pose risks to mental health equity. This cross-sectional study investigates bias in algorithms for detecting stress from mobile devices and its implications for mental health equity.

Methods: A diverse sample of young adults (N = 212) carried smartphones, wore physiological sensors, and completed hourly surveys assessing their subjective stress for 24 hours. We then developed a Twin Neural Network machine learning (ML) model to detect hourly stress from smartphone and wearable data and evaluated model performance across gender and ethnic/racial groups.

Findings: The model performed moderately well overall, yet performance varied significantly, ranging from poor to good, across gender and ethnic/racial groups. In particular, the model showed lower performance for women than for men and overestimated the frequency of stress episodes for Hispanic/Latina women.

Interpretation: Findings highlight the presence of bias in AI applications for mental health and underscore the need for cautious interpretation of ML outcomes in historically underrepresented groups. The discussion focuses on the implications of AI bias for mental health and the importance of developing methods that combine AI and social justice perspectives to ensure the implementation of equitable mental healthcare.

Funding: This project is based on work supported by NIMH Grant No. R42MH123368 (Timmons, Comer, Ahle, Co-PIs), NSF GRFP Grant No. 1930019 (Timmons, PI), NSF Grant No. BCS-1627272 (Margolin, PI), SC CTSI (NIH/NCATS) Grant No. UL1TR000130 (Margolin, PI), NIH-NICHD Grant No. R21HD072170-A1 (Margolin, PI), NSF GRFP Grant No. DGE-0937362 (Timmons, PI), an APA Dissertation Award (Timmons, PI), NSF GRFP Grant No. DGE-0937362 (Han, PI), and NSF Grant No. 2046118 (Chaspari, PI).

Keywords: Artificial intelligence; bias; equitable mental health; multimodal stress sensing
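The subgroup evaluation described in the Methods can be illustrated with a minimal sketch (not the authors' actual pipeline): scoring a binary stress classifier separately for each demographic group exposes the kind of performance and calibration gaps reported in the Findings. All data and group labels below are synthetic, and the AUC is a plain Mann-Whitney implementation rather than any code from the study.

```python
import numpy as np

def auc(y_true, y_score):
    """Probability a positive outranks a negative (Mann-Whitney AUC)."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
n = 400
y_true = rng.integers(0, 2, size=n)  # hourly stress labels (0 = not stressed, 1 = stressed)
# Hypothetical model scores: weakly informative, clipped to [0, 1]
y_score = np.clip(0.3 * y_true + rng.normal(0.5, 0.25, size=n), 0, 1)
group = rng.choice(["group_a", "group_b"], size=n)  # synthetic demographic groups

for g in np.unique(group):
    m = group == g
    # Discrimination (AUC) and over-/under-estimation of stress frequency per group
    gap = y_score[m].mean() - y_true[m].mean()
    print(f"{g}: AUC = {auc(y_true[m], y_score[m]):.2f}, "
          f"predicted-minus-actual stress rate = {gap:+.2f}")
```

A positive rate gap for one group and not another would correspond to the overestimation of stress episodes the study reports for Hispanic/Latina women; equal overall AUC can coexist with such disparities, which is why the disaggregated report matters.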