Proposing PaFRIA: A Participatory Fundamental Rights Impact Assessment Process Aligned with the EU AI Act for High-Risk AI System Deployers

Abstract

Assessing the impact of AI systems on fundamental rights (FRIA) is a core obligation under the EU AI Act for certain high-risk AI system deployers. To support compliance by August 2, 2026, the AI Office is mandated to develop a FRIA template; however, no such template currently exists. Furthermore, Recital 96, which elaborates on the FRIA obligations, suggests involving stakeholders in the assessment. To address this gap in guidance and to support the stakeholder involvement suggested in Recital 96, we propose PaFRIA, a participatory, AI Act-aligned FRIA process. It assists AI deployers in complying with the FRIA obligations of the AI Act (Art. 27) and facilitates stakeholder involvement in the assessment process. We iteratively refined PaFRIA through participatory engagement with deployers, affected end-users, and experts. These iterations surfaced six clusters of tensions, which we translate into basic principles for designing a participatory, AI Act-aligned FRIA. We present the resulting PaFRIA and reflect on key design insights from the participatory development process.
