The fragmentation of responsible AI: Sector variation in organisational AI policies and statements of principle
Abstract
The AI landscape has been changing dramatically with the growth of generative AI, and many governments, sector bodies, and individual organisations are working in parallel to stake out their own visions of ethical and responsible AI use. Statements, guidelines, and policies on responsible AI have proliferated in the last five years, yet clarity remains elusive on what ‘responsible AI’ means or how to put it into practice. Research to date has not examined how the principles and practices of responsible AI may vary across sectors, an increasingly important concern as AI use grows throughout the economy and society. In this article, we empirically examine inter- and intra-sectoral variance in the principles articulated for responsible AI in organisational policies. We analysed 80 documents from organisations in eight sectors, focusing on policies current at the time of the 2024 AI Seoul Summit. Our content analysis identified 31 distinct principles in these policies, only ten of which appeared in more than 50% of the documents. We found clear sectoral differences both in the principles invoked for responsible AI and in the audiences expected to put those principles into practice. Our analysis focused on organisations shaping responsible AI in a single nation, the United Kingdom, but our findings illustrate the admixture of national and international actors affecting AI practice. Our findings show that responsible AI is increasingly fragmented, and that an understanding of sector-level variation is essential to shaping its future.