Robustness as Latent Symmetry: A Theoretical Framework for Semantic Recovery in Deep Learning

Abstract

Adversarial robustness in Deep Learning (DL) has classically been framed as an issue of input-space perturbation defense. In this paper, we put forward a fundamentally different view: robustness as the recovery of semantic meaning from perturbed information, grounded in structured, high-dimensional latent spaces. Motivated by the manifold hypothesis, differential geometry, and principles from cognitive science, we treat each observation as a lossy, non-invertible projection of a latent state manifold. Robustness is then the capacity to align and recover such latent states across modalities, even in the presence of adversarial perturbations.

Central to our formulation is the idea that the latent space carries internal symmetries, modeled using Lie groups. These symmetries act as invariants that stabilize semantic representations under transformation, enabling projection-based purification and recovery. We show how this framing unifies ideas from generative models, equivariant architectures, and biologically plausible perception within a common geometric structure. This paper provides a foundation for developing AI models whose robustness arises from semantics and structure, rather than from surface-level defenses.
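To make the geometric picture concrete, the sketch below illustrates one way projection-based purification could look. It is an illustrative assumption, not the paper's method: the choice of SO(2) as the latent symmetry group, the prototype latent, and the grid search over the group parameter are all ours. A perturbed latent code is projected back onto the group orbit of a clean prototype, so the recovered state respects the latent symmetry.

# Illustrative sketch (assumptions ours, not the paper's): projection-based
# purification of a perturbed latent code onto the orbit of a clean
# prototype under a known latent symmetry group -- here SO(2), 2D rotations.
import numpy as np

def rotate(z, theta):
    """Apply the SO(2) group element g(theta) to a 2D latent code z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ z

def purify(z_perturbed, z_prototype, n_angles=360):
    """Project a perturbed latent onto the group orbit of a prototype.

    Searches over group elements g(theta) for the orbit point
    g(theta) @ z_prototype closest to z_perturbed; that nearest orbit
    point is the 'purified' latent state.
    """
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    orbit = np.stack([rotate(z_prototype, t) for t in thetas])
    dists = np.linalg.norm(orbit - z_perturbed, axis=1)
    best = int(np.argmin(dists))
    return orbit[best], thetas[best]

rng = np.random.default_rng(0)
z_clean = rotate(np.array([1.0, 0.0]), 0.7)      # true latent state
z_adv = z_clean + 0.2 * rng.standard_normal(2)   # adversarial perturbation
z_hat, theta_hat = purify(z_adv, np.array([1.0, 0.0]))
print("recovery error:", np.linalg.norm(z_hat - z_clean))

The brute-force grid over the single rotation angle stands in for whatever alignment procedure the framework would prescribe; for a higher-dimensional Lie group one would instead optimize over the group's parameters (e.g., its Lie algebra coordinates) rather than enumerate orbit points.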
