Algorithmic Accountability in Public Administration: A Systematic Review and Conceptual Framework for Responsible AI Governance

Abstract

(This manuscript is a preprint and has not been peer reviewed.)

The increasing use of artificial intelligence (AI) in government decision-making has raised important questions about accountability in public administration. While AI technologies offer opportunities to improve efficiency, data analysis, and public service delivery, the integration of algorithmic systems into administrative processes also introduces new governance challenges related to transparency, responsibility, and democratic oversight. This study examines how algorithmic accountability is addressed in the existing literature on artificial intelligence in the public sector. Using a systematic literature review guided by the PRISMA framework, the study analyzes 45 peer-reviewed publications drawn from major academic databases. The findings identify five key governance dimensions discussed in the literature: transparency in algorithmic decision-making, explainability of AI systems, human oversight and administrative responsibility, ethical governance of artificial intelligence, and public trust in digital government. Based on these findings, the study proposes a conceptual framework that explains how these governance mechanisms interact to support accountable algorithmic decision systems in public administration. The framework extends traditional public administration theories of accountability to the emerging governance challenges created by algorithmic decision systems.