Sampling-Efficient Unconditional Pure-Deblurring Diffusion Models via Noise-Augmented Generation
Abstract
Diffusion models have proven to be powerful image priors, supporting unconditional generation and a range of conditional tasks such as image restoration and text-guided synthesis. They learn to reverse a progressive noising process, capturing regularities of the data distribution. Nevertheless, diffusion models retain drawbacks, including slow sampling, high computational cost, and nontrivial architectural complexity.

Recent work has explored replacing noise-based corruption with more structured degradations, most commonly progressive blurring, which aligns with the multi-scale structure of images and has been shown to support high-quality generation. However, sampling requires introducing stochasticity into the otherwise deterministic blurring process, typically by adding Gaussian noise, which causes the reverse dynamics to remain, in effect, a denoising process.

Our blurring-based approach instead applies Gaussian noise solely as a data augmentation, symmetrically to both inputs and targets, allowing the network to learn a pure, simple deblurring operation; the noise serves only to regularize training and to enable stochastic sampling. This separation between blur and noise yields a lightweight generative process that produces high-quality samples in only a small number of sampling steps, without requiring dedicated fast-sampling mechanisms. Empirical results confirm the effectiveness of the proposed approach and its practical viability for generative modeling.
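To make the symmetric noise augmentation concrete, the following is a minimal NumPy sketch of how a training pair might be constructed under this scheme. It assumes a separable Gaussian blur schedule and a single shared noise standard deviation; the function names (`blur`, `make_training_pair`), the reflect padding, and the choice of independent noise draws at the same level for input and target are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # 1-D Gaussian kernel, truncated at ~3 sigma and normalized to sum to 1.
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur with reflect padding (an assumed blur schedule,
    # standing in for whatever degradation operator the method actually uses).
    if sigma <= 0:
        return img.copy()
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    pad = np.pad(img, ((r, r), (r, r)), mode="reflect")
    # Convolve rows, then columns; "valid" mode undoes the padding exactly.
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def make_training_pair(x0, sigma_t, sigma_prev, noise_std, rng):
    # Symmetric noise augmentation: Gaussian noise of the SAME level is added
    # to both the more-blurred input and the less-blurred target, so the
    # network is trained on a pure deblurring mapping rather than denoising.
    # (Independent draws for input and target are an assumption here.)
    x_in = blur(x0, sigma_t) + rng.normal(0.0, noise_std, x0.shape)
    x_tg = blur(x0, sigma_prev) + rng.normal(0.0, noise_std, x0.shape)
    return x_in, x_tg
```

At sampling time, under this reading, one would start from a maximally blurred (near-constant) image plus noise and repeatedly apply the learned deblurring network, re-injecting fresh noise at each of a small number of steps to keep the process stochastic.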