The field of diffusion models is moving toward improving the generative performance and controllability of these models. Researchers are exploring ways to introduce inductive biases into the training and sampling of diffusion models so that they better fit the target data distribution. One direction uses anisotropic noise operators and spectrally anisotropic Gaussian diffusion, which have been shown to outperform standard diffusion models across several vision datasets. Other work focuses on robust learning frameworks that cope with extremely noisy conditioning signals in conditional diffusion models, and on understanding the stochasticity of the samplers used during training and inference. Noteworthy papers include Learning What Matters: Steering Diffusion via Spectrally Anisotropic Forward Noise, which introduces an anisotropic noise operator to shape the inductive biases of diffusion models, and Robust Learning of Diffusion Models with Extremely Noisy Conditions, which proposes a robust learning framework for conditional diffusion models whose conditioning inputs are extremely noisy.
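To make the spectrally anisotropic idea concrete, here is a minimal PyTorch sketch of a forward noising step that shapes Gaussian noise in the frequency domain rather than adding white noise. This is an illustrative reconstruction, not the paper's implementation: the radial profile in `spectral_scales`, the falloff exponent `alpha`, and the DDPM-style schedule are all assumptions made for the example.

```python
import torch

def spectral_scales(h, w, alpha=1.0):
    # Hypothetical radial profile: down-weight high spatial frequencies.
    # Real and even in frequency, so the shaped noise stays real-valued.
    fy = torch.fft.fftfreq(h).view(-1, 1)
    fx = torch.fft.fftfreq(w).view(1, -1)
    radius = torch.sqrt(fx**2 + fy**2)
    return 1.0 / (1.0 + radius) ** alpha  # shape (h, w)

def anisotropic_noise(shape, scales):
    # Shape white Gaussian noise in the frequency domain:
    # eps = IFFT(S * FFT(z)), with S the per-frequency scale.
    z = torch.randn(shape)
    z_hat = torch.fft.fft2(z)
    return torch.fft.ifft2(scales * z_hat).real

def forward_diffuse(x0, t, alphas_cumprod, scales):
    # DDPM-style forward step, with spectrally shaped noise
    # substituted for the usual isotropic Gaussian noise.
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = anisotropic_noise(x0.shape, scales)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps, eps

# Demo: diffuse a random image batch at random timesteps.
x0 = torch.randn(8, 3, 32, 32)
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
scales = spectral_scales(32, 32, alpha=2.0)
t = torch.randint(0, 1000, (8,))
x_t, eps = forward_diffuse(x0, t, alphas_cumprod, scales)
```

Down-weighting high frequencies is only one plausible choice of profile; the point of an anisotropic operator is that the spectral profile itself becomes a design knob for steering which structures the model must learn to denoise. Note that shaping the spectrum changes the total noise power, so a real implementation would likely renormalize the scales.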