DMDA: Density Matrix Decomposition for Training-Free Diffusion Acceleration

Abstract

We present the Density Matrix Diffusion Accelerator (DMDA), a training-free inference acceleration framework for sequential diffusion generation. The core insight is that density matrix eigendecomposition of diffusion latent sequences reveals extreme spectral concentration — a single shared eigenmode captures the majority of total latent energy — enabling selective computation in which the shared component is processed fully only once, while per-frame perturbations are handled via lightweight residual diffusion. We validate DMDA on two architecturally distinct models on a Mac Studio M2 Ultra (MPS backend). On Stable Diffusion 1.5 (860M parameters, 512×512), DMDA achieves a 9.4× step reduction (1,500 → 160 steps, 35.25s wall-clock) with dominant eigenvalue λ₁ = 0.8576 (85.76% of energy). On SDXL (2.6B parameters, 1024×1024, dual text encoder), DMDA achieves a 5.1× step reduction (750 → 146 steps, 151.1s wall-clock) with λ₁ = 0.7093 (70.93% of energy). In both cases, generated frames exhibit visual quality equivalent to standard outputs with no perceptible degradation. DMDA is training-free and architecture-agnostic, requiring no modification to model weights. It operates on a different computational axis from existing methods — reducing inter-frame redundancy rather than per-frame step count — and composes multiplicatively with distillation-based approaches.
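The spectral-concentration measurement at the heart of the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: each frame's flattened latent is treated as a unit-normalized state vector, the sequence's density matrix ρ = (1/N) Σᵢ |vᵢ⟩⟨vᵢ| is formed implicitly, and the dominant eigenvalue λ₁ gives the fraction of total latent energy in the shared eigenmode. All shapes, noise levels, and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sequence of 8 latents sharing one dominant component plus small
# per-frame perturbations — the redundancy regime the abstract describes.
d, n_frames = 4096, 8
shared = rng.standard_normal(d)
latents = np.stack([shared + 0.3 * rng.standard_normal(d)
                    for _ in range(n_frames)])

# Normalize each latent to a unit vector, as for a pure state.
states = latents / np.linalg.norm(latents, axis=1, keepdims=True)

# rho = (1/N) * S^T S is d x d, but for d >> N the N x N Gram matrix
# (1/N) * S S^T shares its nonzero spectrum and is far cheaper to diagonalize.
gram = states @ states.T / n_frames
eigvals = np.linalg.eigvalsh(gram)[::-1]  # descending order

lambda1 = eigvals[0]
print(f"dominant eigenvalue λ1 = {lambda1:.4f} "
      f"({100 * lambda1:.2f}% of latent energy)")

# Sanity check: trace(rho) = 1 for unit-normalized states.
assert np.isclose(eigvals.sum(), 1.0)
```

A high λ₁ (e.g. the reported 0.8576 for SD 1.5) would justify processing the shared eigenmode once and handling the remaining per-frame residuals with lightweight diffusion; the eigenvector corresponding to λ₁ would serve as that shared component.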
