Evaluating Privacy Compliance in Commercial Large Language Models - ChatGPT, Claude, and Gemini
Abstract
The integration of artificial intelligence systems into various domains has raised significant privacy concerns, necessitating stringent regulatory measures to protect user data. Evaluating the privacy compliance of commercial large language models (LLMs) such as ChatGPT-4o, Claude Sonnet, and Gemini Flash under the EU AI Act offers a novel approach, providing critical insights into their adherence to privacy standards. The study used hypothetical case studies to assess the privacy practices of these LLMs, focusing on data collection, storage, and sharing mechanisms. Findings revealed that ChatGPT-4o exhibited significant issues with data minimization and access control, while Claude Sonnet demonstrated robust compliance with data minimization and effective data security measures. Gemini Flash, however, showed inconsistencies in data collection and a higher incidence of anonymization failures. The comparative analysis underscored the importance of tailored privacy strategies and continuous monitoring in ensuring regulatory compliance. These results offer valuable guidance for developers and policymakers, emphasizing the necessity of a multifaceted approach to privacy compliance in the deployment of LLMs.