Advances in Diffusion Models

Research on diffusion models is increasingly focused on improving generative performance and controllability. One thread explores introducing inductive biases into training and sampling so that models better accommodate the target data distribution, for example through anisotropic noise operators and spectrally anisotropic Gaussian diffusion, which have been shown to outperform standard isotropic diffusion across several vision datasets. Other work develops robust learning frameworks for conditional diffusion models whose conditioning signals are extremely noisy, and studies the stochasticity of the samplers used during training and inference. Noteworthy papers include Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise, which introduces an anisotropic noise operator that shapes the inductive biases of diffusion models, and Robust Learning of Diffusion Models with Extremely Noisy Conditions, which proposes a robust learning framework for conditional diffusion models trained under extremely noisy conditions. A sketch of the spectrally anisotropic idea follows.
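The core mechanism of spectrally anisotropic forward noise can be illustrated with a minimal sketch: instead of injecting isotropic white noise, the forward process shapes the Gaussian noise per frequency band, so some frequencies are corrupted faster than others. The power-law spectrum `S`, the normalization, and the toy schedule below are illustrative assumptions, not the authors' exact construction.

```python
# Minimal sketch of spectrally anisotropic forward noising (illustrative only).
import torch

def anisotropic_noise(shape, S):
    """Sample Gaussian noise whose power spectrum is shaped by S (per-frequency scaling)."""
    eps = torch.randn(shape)            # isotropic white noise
    eps_f = torch.fft.fft2(eps)         # move to the frequency domain
    eps_f = eps_f * torch.sqrt(S)       # rescale each frequency by sqrt(S)
    return torch.fft.ifft2(eps_f).real  # back to pixel space

def forward_diffuse(x0, t, alpha_bar, S):
    """Standard DDPM-style forward step, but with spectrally shaped noise."""
    eps = anisotropic_noise(x0.shape, S)
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps
    return xt, eps

# Hypothetical spectrum: put more noise power at low frequencies.
H = W = 32
fy = torch.fft.fftfreq(H).reshape(-1, 1)
fx = torch.fft.fftfreq(W).reshape(1, -1)
radius = (fx**2 + fy**2).sqrt().clamp(min=1e-3)
S = radius.pow(-1.0)
S = S / S.mean()                        # keep overall noise energy comparable

alpha_bar = torch.linspace(0.999, 0.01, 1000)  # toy cumulative schedule
x0 = torch.randn(1, 3, H, W)                   # stand-in image batch
xt, eps = forward_diffuse(x0, t=500, alpha_bar=alpha_bar, S=S)
```

Because the scaling is applied in the Fourier domain, a denoiser trained against this forward process must prioritize the frequency bands that are degraded most, which is one way such an operator steers the model's inductive bias.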

Sources

Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise

Robust Learning of Diffusion Models with Extremely Noisy Conditions

Understanding Sampler Stochasticity in Training Diffusion Models for RLHF

Temporal Alignment Guidance: On-Manifold Sampling in Diffusion Models

Mitigating the Noise Shift for Denoising Generative Models via Noise Awareness Guidance

Noise Projection: Closing the Prompt-Agnostic Gap Behind Text-to-Image Misalignment in Diffusion Models
