Insights into suggested Responsible AI (RAI) practices in real-world settings: A systematic literature review

Abstract

AI-enabled systems can deliver significant societal benefits, but only if they are developed, deployed, and used responsibly. We systematically review 45 empirical studies in real-world settings to identify suggested Responsible AI (RAI) practices for ensuring that AI-enabled systems uphold stakeholders' legitimate interests and fundamental rights. Our findings highlight eleven areas of suggested RAI practices: harm prevention, accountability, fairness and equity, explainability, AI literacy, privacy and security, human-AI calibration, interdisciplinary stakeholder involvement, value creation, RAI governance, and AI deployment effects. They also show that discussions of how RAI should be practiced outnumber reports of RAI practices actually implemented. This ad hoc implementation of RAI practices in real-world settings is concerning because almost 80% of the AI-enabled systems reported in the 45 included articles are applied in use cases that can be categorised as high-risk, and over half are reported in the deployment phase. Our findings further highlight the crucial role of stakeholders in ensuring RAI: categorising stakeholders as users, non-users, and primary stakeholders can help clarify the dynamics of the settings where AI-enabled systems are (to be) deployed and guide the implementation of RAI practices. In conclusion, although there is consensus that RAI practices are a necessity, their implementation in real-world settings is still in its early days. The involvement of all relevant stakeholders is indispensable in driving and shaping RAI practices, and more comprehensive and inclusive RAI research is needed to advance RAI practices in real-world settings.