Towards Continuous Explainability of Complex AI Systems: Requirements and Challenges

Abstract

Increasing regulatory influence on artificial intelligence (AI) imposes new requirements on the trustworthiness of complex AI systems, i.e., compound software systems that use at least one AI method. When such systems are embedded in value chains and other regulatory scopes, it must be ensured that AI activities and AI-generated results remain explainable throughout the entire process. This work is motivated by the domain of AI-based requirements engineering (RE) in the automotive industry, where the overall objective is to meet homologation requirements when AI-generated content influences system development. In support of this objective, the paper presents two contributions: an orientation framework for aligning individual use cases in this domain and a set of requirements for AI explanation design that follows the framework.