The Agentic AI Security Adoption Matrix: Understanding Readiness and Resistance Across Domains


Abstract

The objective of this paper is to examine how agentic artificial intelligence (AI) is being adopted in practice, with particular attention to its security implications, and to identify domains where adoption is progressing more slowly due to regulatory, ethical, and oversight requirements. This research matters because agentic AI offers new efficiencies and autonomy but simultaneously introduces risks related to trust, accountability, and adversarial exploitation. Prior work on autonomous systems, AI governance, and security automation provides a foundation, and recent studies highlight a contrast between rapid adoption in digitally bounded, low-stakes environments and slower uptake in safety-critical contexts. Building on this foundation, the paper adopts a systematic literature review combined with comparative domain analysis to classify adoption trends. The approach draws on academic publications, industry reports, and regulatory frameworks such as the NIST AI Risk Management Framework and the EU AI Act, the latter of which explicitly designates domains like healthcare, defense, and social care as high-risk. The results indicate that agentic AI has advanced most quickly in areas such as customer service, software engineering assistance, and cybersecurity operations centers, where regulatory barriers are minimal and oversight is straightforward. By contrast, adoption remains constrained in healthcare, defense, and social care, where pilot projects exist but mainstream deployment is slowed by requirements for explainability, liability, and human-in-the-loop controls. These findings carry significant implications for academics extending adoption models, researchers designing security frameworks, and practitioners balancing innovation with governance.
The value of this paper lies in presenting a conceptual “Agentic AI Security Adoption Matrix,” which offers an original perspective on how adoption speed and security sensitivity interact, and provides guidance on where agentic AI may thrive versus where its deployment will require cautious, regulated progression.
