The field of generative learning is advancing rapidly, with a focus on improving the efficiency and quality of diffusion models. Recent developments have centered on optimizing the sampling process, reducing the number of iterations needed to produce high-quality samples, and stabilizing training. Notably, techniques such as optimal transport, adaptive step sizing, and hierarchical schedule optimization have been proposed to accelerate sampling and improve model performance; a generic few-step sampler with a tunable timestep schedule is sketched below. Additionally, research has explored mean flow, representation autoencoders, and spectral self-regularization to enhance the quality and efficiency of generative models.
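To make the schedule-optimization idea concrete, here is a minimal sketch of deterministic few-step (DDIM-style) sampling in which the timestep schedule is an explicit, tunable input. This is a generic illustration, not the method of any paper above; `toy_denoiser` and the linear 10-step schedule are stand-in assumptions for a trained network and an optimized schedule.

```python
import torch

def toy_denoiser(x, t):
    # Hypothetical stand-in for a trained noise-prediction network;
    # it crudely attributes more of the signal to noise at larger t.
    return x * t

def ddim_sample(denoiser, x_T, timesteps, alphas_cumprod):
    """Deterministic DDIM-style sampling over an arbitrary timestep schedule.

    The non-uniform choice of `timesteps` is exactly the knob that
    schedule-optimization work tunes to cut the number of iterations
    at a fixed quality level."""
    x = x_T
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = denoiser(x, t_cur / len(alphas_cumprod))
        # Predict the clean sample, then re-noise to the next (smaller) level.
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)

# A coarse 10-step schedule; optimizing where these indices fall
# (e.g. hierarchically) is what a schedule optimizer would do.
schedule = torch.linspace(T - 1, 0, 10).long()
x_T = torch.randn(16, 2)
x_0 = ddim_sample(toy_denoiser, x_T, schedule, alphas_cumprod)
```

With a real model, the same sampler runs unchanged; only `toy_denoiser` and the schedule would be replaced.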
Some noteworthy papers in this area include OT-ALD, which proposes a novel framework for aligning latent distributions with optimal transport to accelerate image-to-image translation (a generic sketch of OT-based latent alignment follows); Hierarchical Schedule Optimization for Fast and Robust Diffusion Model Sampling, which introduces a bi-level optimization framework to find an optimal distribution of timesteps for diffusion model sampling; MeanFlow Transformers with Representation Autoencoders, which develops an efficient training and sampling scheme for mean flow in the latent space of a representation autoencoder; and Diffusion As Self-Distillation, which unifies the three components of latent diffusion models into a single, end-to-end trainable network.
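As a rough illustration of aligning two latent distributions with optimal transport, the sketch below runs standard entropic (Sinkhorn) OT between two sets of latents and maps one onto the other by barycentric projection. This is textbook Sinkhorn under assumed toy data, not OT-ALD's actual algorithm; `src` and `tgt` are hypothetical latent batches.

```python
import torch

def sinkhorn_plan(x, y, eps=0.1, n_iter=200):
    """Entropic-OT coupling between two empirical latent distributions."""
    C = torch.cdist(x, y) ** 2                 # pairwise squared costs
    K = torch.exp(-C / eps)                    # Gibbs kernel
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])  # uniform source weights
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])  # uniform target weights
    u = torch.ones_like(a)
    for _ in range(n_iter):                    # alternating marginal scaling
        u = a / (K @ (b / (K.t() @ u)))
    v = b / (K.t() @ u)
    return u[:, None] * K * v[None, :]         # transport plan P

# Hypothetical latents of two domains, deliberately offset from each other.
src = torch.randn(64, 8)
tgt = torch.randn(64, 8) + 2.0
P = sinkhorn_plan(src, tgt)
# Barycentric projection: each source latent moves to its P-weighted
# average of target latents, aligning the two distributions.
aligned = (P @ tgt) / P.sum(dim=1, keepdim=True)
```

In a translation pipeline, such an aligned latent would then be decoded by the target-domain decoder; how OT-ALD integrates this into sampling is specific to that paper.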