A Combinatorial Framework for Multi-Logic Vulnerability Analysis in Neural Networks

Abstract

Neural networks, powering critical applications from autonomous vehicles to large language models (LLMs), face escalating threats from sophisticated multi-logic attacks that combine distinct manipulation strategies. We propose a novel combinatorial framework that formalizes these strategies as atomic logics and enumerates their compositions to create a comprehensive taxonomy, a “periodic table” of known and predicted attack types. Mapping the existing literature (2015–2025) onto this taxonomy shows that documented attacks cover roughly 55% of the harmful attack space, with 17 known and 14 unexplored combinations. Unlike fragmented studies that focus on isolated attacks, our systematic approach predicts novel threats, such as adversarial inputs that redirect execution paths, and guides proactive defense development. The framework offers researchers and practitioners a pioneering tool to anticipate future vulnerabilities, enhance AI security, and develop automated vulnerability-testing pipelines.
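
The enumeration step described above can be illustrated with a short sketch. The code below is a minimal, hypothetical example: the atomic logic names and the set of "documented" combinations are invented for illustration and are not taken from the paper. It simply enumerates pairwise compositions of atomic logics and partitions them into known and predicted combinations, which is the general idea behind the taxonomy.

```python
from itertools import combinations

# Hypothetical atomic attack logics; the paper's actual set and names may differ.
ATOMIC_LOGICS = [
    "input_perturbation",   # e.g. adversarial examples at inference time
    "data_poisoning",       # training-time manipulation of the dataset
    "model_tampering",      # modification of weights or architecture
    "control_flow_hijack",  # redirecting execution paths in the serving stack
    "output_manipulation",  # tampering with predictions or downstream use
]

# Hypothetical subset of combinations already documented in the literature.
KNOWN = {
    frozenset({"input_perturbation", "data_poisoning"}),
    frozenset({"data_poisoning", "model_tampering"}),
}

def enumerate_compositions(logics, max_order=2):
    """Yield multi-logic compositions of size 2 up to max_order."""
    for k in range(2, max_order + 1):
        for combo in combinations(logics, k):
            yield frozenset(combo)

if __name__ == "__main__":
    known, predicted = [], []
    for combo in enumerate_compositions(ATOMIC_LOGICS):
        (known if combo in KNOWN else predicted).append(sorted(combo))
    print(f"{len(known)} known, {len(predicted)} unexplored combinations")
    for combo in predicted:
        print("predicted:", " + ".join(combo))
```

Combinations are stored as frozensets so that the ordering of atomic logics does not affect membership checks against the documented set.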
