The Impact of AI-Based Decision Support Systems on Human Agency in Governance

Abstract

This article examines the transformative role of artificial intelligence–based Decision-Support Systems (DSS) in public governance. It analyzes how DSS reshape decision-making and accountability across domains such as recruitment, finance, healthcare, social administration, and judicial practice. The study employs legal, institutional, and comparative analysis to investigate two key mechanisms, automation bias and the attribution gap, through which responsibility is redistributed between humans and algorithms. Academic literature, case studies, and regulatory frameworks are systematically reviewed to assess the implications for justice, trust, and democratic legitimacy. The results demonstrate that DSS are not neutral tools but socio-technical actors that redistribute autonomy, restructure institutional practices, and challenge traditional models of accountability and transparency. The study argues for strategies that integrate explainable AI, establish traceable accountability, and safeguard meaningful human control in order to balance efficiency with ethical and democratic values. By highlighting both the opportunities and risks of DSS, the article contributes to global debates on responsible AI governance and, through the example of Kazakhstan, offers guidance for policymakers seeking to align such systems with principles of accountability, fairness, and human agency.