Beyond Disclosure: Reframing Privacy as Inference Impedance in Large Language Models

Abstract

Contemporary debates in AI ethics continue to frame privacy primarily in terms of disclosure, identifiability, and data access. This article argues that such a framing is no longer sufficient for embedding-based artificial intelligence systems. We introduce the Deep Personal Privacy (DPP) framework, which reconceptualizes privacy as inference impedance within high-dimensional semantic representation spaces. Rather than asking whether information has been revealed, DPP evaluates how easily sensitive attributes can be inferred from latent embeddings. We model embedding spaces as semantic transmission layers that enable indirect attribute inference through geometric alignment. Privacy risk is therefore defined in terms of cosine similarity, inference probability, and logarithmic impedance within structured inference graphs. The framework integrates ontology-driven sensitive expression mapping, representation-level perturbation mechanisms, and a multi-objective optimization procedure balancing utility and privacy. Empirical demonstrations show that DPP-based interventions reduce semantic alignment with sensitive concept prototypes and increase inference resistance while maintaining acceptable task performance. Conceptually, the framework advances a paradigm shift in AI ethics: privacy must be evaluated not only by what systems disclose, but by what they are capable of inferring. DPP thus complements existing structural and statistical privacy approaches by introducing a representation-level metric for inferential power asymmetry.
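The abstract defines privacy risk through cosine similarity to sensitive concept prototypes, inference probability, and logarithmic impedance, and mentions representation-level perturbation. The sketch below is an illustrative reading of those quantities, not the authors' implementation: the mapping from similarity to probability, the `deflect` perturbation (removing the embedding component aligned with a sensitive prototype), and all function names are assumptions introduced here for clarity.

```python
import math

def cosine_similarity(u, v):
    # Geometric alignment between an embedding and a sensitive concept prototype.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def inference_probability(embedding, prototype):
    # Illustrative assumption: rescale cosine similarity from [-1, 1] to (0, 1)
    # as a proxy for how easily the sensitive attribute can be inferred.
    return (cosine_similarity(embedding, prototype) + 1.0) / 2.0

def inference_impedance(p, eps=1e-12):
    # Logarithmic impedance: low inference probability -> high resistance.
    return -math.log(max(p, eps))

def deflect(embedding, prototype):
    # Hypothetical representation-level perturbation: strip the component
    # of the embedding that lies along the sensitive prototype direction.
    coeff = (sum(a * b for a, b in zip(embedding, prototype))
             / sum(b * b for b in prototype))
    return [a - coeff * b for a, b in zip(embedding, prototype)]
```

Under this reading, perturbing an embedding away from a sensitive prototype lowers its inference probability and raises its impedance, while the residual embedding remains available for downstream utility.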
