Value-driven AI Governance
Abstract
Value-driven governance is a prominent topic in AI regulation, with both state actors and tech companies professing a commitment to values like fairness and safety. Despite this apparent agreement, we know little about how normative principles are interpreted and operationalized, and how responsibility for upholding them is assigned, within particular contexts. This study compares the values invoked in the European Union’s AI Act with the policy documents of five major AI companies: OpenAI, Anthropic, Google AI, Meta AI, and Mistral AI. Through a combination of frequency analysis and inductive keyword-in-context analysis, we identify prioritized values and map how they are specified in public legislation and corporate policies. Public and private actors largely invoke the same values (accuracy, authenticity, control, improvement, privacy, safety, and security) but sometimes differ in how they specify them. While values like privacy, security, and safety typically mean the same thing across policy discourses, we find stark differences in the understanding of improvement, tensions between technical and normative operationalizations of values, and shifts of responsibility for upholding values from one stakeholder to another. Value specifications surface the politics of values in AI regulation, exposing how private actors employ polysemy to claim alignment with the public interest while avoiding substantial accountability. When it comes to value-driven governance, the devil is in the details.