Transparency Beyond Accuracy

Abstract

With the continued development of machine learning, deep learning, and Artificial Intelligence as a whole, the demand for reasoning behind decisions and predictions has become paramount. This review paper discusses the importance of explainability when Artificial Intelligence is used in the domains of credit risk scoring and the medical sector. Its primary objective is to compare and contrast the necessity of explainability when decisions are made using Artificial Intelligence, as these decisions can have significant consequences in both industries. The ethical and regulatory requirements that drive the need for transparency in each domain are rigorously examined. The examination suggests that explainability in credit scoring is driven primarily by legal requirements and the need to justify decisions, whereas the medical sector uses explainability to build trust, maintain patient-centred care, address ethical and moral implications, identify errors, and detect bias. The findings of the review suggest that while explainable Artificial Intelligence (XAI) benefits both domains, the methodologies and techniques used to achieve explainability differ from sector to sector. This research highlights the importance of context in determining how and why AI models should be explainable.