Reconstructing Legal Responsibility for AI: A Structural Model and Case-Based Analysis from China

Abstract

As artificial intelligence (AI) systems increasingly shape legally relevant outcomes, they challenge traditional responsibility frameworks grounded in legal personhood and intent. This paper addresses the attribution dilemma posed by non-personal, automated systems by proposing a novel framework: the Constructive Interface Attribution (CIA) model. The CIA model conceptualizes legal responsibility as embedded in interface control, behavioral traceability, and regulatory design rather than in subjective volition. Drawing on doctrinal and empirical analysis, the study examines five landmark Chinese cases involving algorithmic labor management, blockchain contracts, and generative AI content. Using qualitative coding, it identifies recurring attribution patterns that rely on structural elements such as control pathways and platform governance. The model thus shifts legal reasoning from subject-based to structure-based logic. A comparative analysis with U.S. jurisprudence highlights institutional differences in how responsibility is constructed. The CIA model offers a replicable framework for understanding responsibility in AI-intensive contexts, contributing to both theory and regulatory design. The study aims to support more transparent and accountable legal responses to increasingly autonomous technologies.