From Embedded Ethics to Explainable AI: Advancing Interdisciplinary Collaboration in Medical AI Development

Abstract

The development of artificial intelligence (AI) for healthcare, particularly in high-stakes environments such as intensive care units, presents significant interdisciplinary challenges. These challenges arise from the need to integrate diverse expertise from computer science, medicine, operations research, ethics, and the social sciences, each with distinct priorities. Ethical considerations often become secondary or are reduced to compliance checklists, risking the creation of systems that are technically sound but ethically misaligned. This paper addresses the fragmentation of interdisciplinary collaboration by presenting the first in-depth case study of integrating the Embedded Ethics and Social Sciences (EE) framework into the development of medical AI in a real-world project. The objective is to enhance ethical quality and strengthen interdisciplinary collaboration by providing a shared technical-ethical interface for dialogue. The paper explores the EE approach within the KISIK project, an interdisciplinary research initiative developing an AI-based clinical decision support system for intensive care units in the German healthcare system. The integration of EE fostered greater awareness of ethically salient issues throughout the interdisciplinary development of medical AI. In this process, explainable AI (XAI) emerged as a valuable extension of EE: it served as a technical instrument for embedding ethical considerations into system design, enhancing model interpretability and fostering communication across disciplinary boundaries. The combination of EE and XAI offers a promising foundation for interdisciplinary understanding in medical AI projects. EE fosters broader normative reflection on project goals, while XAI enables technically grounded engagement with ethical questions. Together, this integrated approach can lead to more ethically aligned and socially responsible AI systems in healthcare.
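
The abstract does not specify which XAI techniques the KISIK project uses. As a purely illustrative aid, the sketch below shows one common way interpretability can be surfaced for a tabular clinical risk model: per-feature attributions that clinicians and ethicists can inspect alongside a prediction. The data, feature names, and choice of the SHAP library are assumptions for this example and are not drawn from the paper.

# Hypothetical sketch: per-patient feature attributions for a tabular ICU risk model.
# All names and data here are synthetic; this is not the KISIK system.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "heart_rate": rng.normal(80, 15, 500),
    "lactate": rng.normal(2.0, 1.0, 500),
    "mean_arterial_pressure": rng.normal(75, 10, 500),
})
# Synthetic outcome loosely tied to lactate, only so the model has something to learn.
y = (X["lactate"] + rng.normal(0, 0.5, 500) > 2.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer returns each feature's contribution to an individual prediction,
# giving a concrete artifact for interdisciplinary discussion of model behavior.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X.iloc[:5])
print(pd.DataFrame(attributions, columns=X.columns).round(3))

Attribution tables of this kind are one example of the "shared technical-ethical interface" the paper argues for: they translate model behavior into terms that both developers and non-technical project members can examine.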
