Requirements on Interpretation Tools for AI Systems


Abstract

This work formulates generic requirements on AI interpretation tools as alluded to in the European AI Act. Methods that quantify predictive uncertainties and methods known as explainable AI may be considered examples of interpretation tools. The requirements aim to ensure that the deployer of an AI system can use such tools to assess and assure the quality and proper functioning of the system. I argue that the concrete purpose of an interpretation tool needs to be specified, that the information provided through its output needs to be unambiguously defined, that the utility of the output for serving the specified purpose needs to be substantiated, and that evidence needs to be provided that the output is accurate and precise, so that the tool’s intended purpose can be fulfilled.