Impact Assessment Requirements in the GDPR vs the AI Act: Overlaps, Divergence, and Implications


Abstract

Under the EU General Data Protection Regulation (GDPR), the processing of personal data with “new technologies”, including Artificial Intelligence (AI), requires conducting a Data Protection Impact Assessment (DPIA) to evaluate potential risks to the rights and freedoms of individuals. In addition to identifying categories of processing that require a DPIA, the GDPR empowers national Data Protection Authorities (DPAs) to define additional categories where a DPIA is necessary. The recently adopted AI Act classifies AI technologies according to their level of risk to health, safety, and fundamental rights. For high-risk systems, the AI Act requires a Fundamental Rights Impact Assessment (FRIA) to be conducted, which represents an additional requirement for AI systems already subject to a DPIA under the GDPR. This context thus raises the question of how these two regulations work together and how their enforcement can be harmonised. This paper analyses DPIA requirements collected from all 27 EU and 3 EEA countries that implement the GDPR, and compares them with the FRIA requirements defined in the AI Act. We show that there are overlaps and divergences across national requirements to conduct impact assessments for the use of AI. Based on this, we argue for the need to harmonise DPIA requirements across the EU/EEA for an effective implementation of the GDPR, to improve alignment with the AI Act, and to facilitate the sharing of risk assessment information earlier in the AI value chain to guide responsible innovation.