The Structural Tension Between AI Optimisation and Ethical Governance: Empirical Evidence from Organisational Decision-Making
Abstract
The rapid adoption of artificial intelligence (AI) in organisational decision-making has intensified longstanding ethical concerns regarding fairness, transparency, accountability, and governance. While much of the existing literature emphasises efficiency gains, fewer studies empirically examine how optimisation-driven AI systems may structurally conflict with ethical principles in practice. This study investigates the ethical implications of AI-assisted decision-making across organisations, drawing on a mixed-methods design comprising surveys (n = 200), semi-structured interviews (n = 30), and multi-sector case analysis. Empirical findings reveal a persistent ethical–performance tension. Although 75% of organisations reported improved efficiency and 68% reported enhanced decision accuracy, only 45% considered their AI systems transparent and explainable. Furthermore, 38% identified algorithmic bias in deployed systems, and 50% expressed significant concerns regarding data privacy and accountability. These findings suggest that performance-optimised AI systems may inadvertently undermine core ethical requirements, creating governance vulnerabilities even in technically successful deployments. The paper contributes to debates in AI ethics by empirically demonstrating how organisational AI adoption often prioritises efficiency logics over ethical robustness. It proposes a governance-oriented framework for responsible AI integration that foregrounds transparency, ethical oversight, and institutional accountability alongside performance objectives.