LPCI: Defining and Mitigating a Novel Vulnerability in Agentic AI Systems
Listed in
This article is not in any list yet, why not save it to one of your lists.Abstract
This paper introduces Logic-layer Prompt Control Injection (LPCI), a novel vulnerability class that exploits persistent memory and execution pathways in Large Language Models (LLMs). Unlike surface-level injections, LPCI operations embed malicious logic within vector stores, retrieval systems, and tool outputs, allowing payloads to remain dormant and execute contextually upon later activation. We formalize the LPCI attack through a six-stage lifecycle (reconnaissance, injection, trigger execution, persistence, evasion, and trace tampering). The feasibility and severity of LPCI are demonstrated through a large-scale empirical evaluation of over 1,700 adversarial trials across five major LLM platforms, revealing a cross-platform attack success rate of 43\%. Targeted demonstrations on production systems further confirm critical vulnerabilities in memory retention and logic mediation. To mitigate these vulnerabilities, we further propose the Qorvex Security AI Framework (QSAF), a defense-in-depth architecture that integrates runtime memory validation, cryptographic tool attestation, and context-aware filtering. The implementation of QSAF in our tests reduced the LPCI attack success rate from 43\% to 5.3\%. Our findings necessitate a paradigm shift from static input-output filtering toward runtime enforcement of logic-layer integrity to secure next-generation AI systems.