On the Privacy, Security, and Algorithmic Transparency of AI Chatbot-based Mobile Health Apps: An Empirical Investigation

Abstract

Mobile health (mHealth) apps have become ubiquitous, offering services ranging from fitness tracking to managing mental health conditions. The rise of Artificial Intelligence (AI) in healthcare has driven the integration of AI-powered chatbots, further transforming these apps. While chatbot-based mHealth apps provide benefits such as healthcare information and predictive diagnoses, they also raise significant concerns regarding security, privacy, and the transparency of the AI models used. This study empirically assesses 16 AI chatbot-based mHealth apps that we identified in the Google Play Store. We evaluated these apps using three main strategies: manual inspection, static analysis, and dynamic analysis. Our findings reveal that these apps contain multiple vulnerabilities that attackers can exploit, such as leaving Remote WebView debugging enabled. Furthermore, there is a general lack of algorithmic transparency regarding how the AI chatbots and their underlying infrastructure function. Several apps were also non-compliant with Google Play policies, for example by failing to provide a publicly accessible privacy policy. Based on our analysis, we offer recommendations to enhance the security, privacy, and transparency of AI chatbot-based mHealth apps. We believe the findings will be valuable to developers, security testers, and privacy engineers. Additionally, developers can leverage these insights to build trust by helping users understand how chatbots function within these apps.
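One vulnerability class the abstract names, Remote WebView debugging left enabled in release builds, is typically avoided by gating the Android call `WebView.setWebContentsDebuggingEnabled(...)` behind the build type. The sketch below models only that decision logic in plain Java; the inner `BuildConfig` class is a hypothetical stand-in for the Gradle-generated one, and the actual Android API call is shown in comments.

```java
// Sketch: gate Remote WebView debugging behind the debug build flag.
// In a real Android app the relevant call is
//   android.webkit.WebView.setWebContentsDebuggingEnabled(flag);
// Leaving it enabled in release builds lets anyone with ADB access
// inspect the WebView's content via Chrome's chrome://inspect page.
public class WebViewDebugPolicy {

    // Hypothetical stand-in for the Gradle-generated BuildConfig class;
    // in release builds Gradle generates DEBUG = false.
    static class BuildConfig {
        static final boolean DEBUG = false;
    }

    static boolean shouldEnableWebContentsDebugging() {
        // Only permit remote debugging in debug builds.
        return BuildConfig.DEBUG;
    }

    public static void main(String[] args) {
        System.out.println("enable WebView debugging: "
                + shouldEnableWebContentsDebugging());
        // In an Activity's onCreate you would then call:
        // WebView.setWebContentsDebuggingEnabled(shouldEnableWebContentsDebugging());
    }
}
```

This mirrors the pattern recommended in the Android documentation, where debugging support is tied to `BuildConfig.DEBUG` so it cannot ship in production by accident.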
