Advances in Diffusion Models and Generative Learning

The field of diffusion models and generative learning is evolving rapidly, with a focus on improving the efficiency, quality, and interpretability of generative models. Recent work has centered on techniques that reduce the computational cost of diffusion models, such as model pruning and knowledge distillation, while preserving their generative capabilities. There is also growing interest in incorporating representation learning into diffusion models to improve their performance and enable more effective feature extraction. A further line of research applies diffusion models as teachers for downstream learning tasks, highlighting their potential as compact and interpretable knowledge-transfer agents.

Noteworthy papers in this area include IGSM, which proposes a novel finetuning framework for pruned diffusion models, and Canonical Latent Representations in Conditional Diffusion Models, which identifies interpretable and compact latent representations in conditional diffusion models.
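To make the distillation idea concrete — a small student model trained to match a larger teacher's noise prediction at sampled timesteps of the forward diffusion process — here is a minimal NumPy sketch. The noise schedule follows the standard DDPM formulation; the teacher and student predictors are hypothetical placeholders, not taken from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule beta_t and cumulative alpha-bar, as in DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: noise clean data x0 to timestep t."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

# Hypothetical stand-ins for the noise-prediction networks.
def teacher_eps(x_t, t):
    return 0.9 * x_t          # placeholder for a large pretrained model

def student_eps(x_t, t, w):
    return w * x_t            # placeholder for a pruned/compact model

def distill_loss(x0, t, w):
    """MSE between student and teacher noise predictions at timestep t."""
    eps = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, eps)
    diff = student_eps(x_t, t, w) - teacher_eps(x_t, t)
    return float(np.mean(diff ** 2))

x0 = rng.standard_normal(16)
# A student that exactly matches the teacher has zero distillation loss.
print(distill_loss(x0, 500, 0.9))        # -> 0.0
print(distill_loss(x0, 500, 0.5) > 0.0)  # -> True
```

In practice the teacher and student would be neural networks, the loss would be averaged over random timesteps and data batches, and the student's parameters would be updated by gradient descent on this objective.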

Sources

IGSM: Improved Geometric and Sensitivity Matching for Finetuning Pruned Diffusion Models

Learning to Weight Parameters for Data Attribution

Diffuse and Disperse: Image Generation with Representation Regularization

Revisiting Diffusion Models: From Generative Pre-training to One-Step Generation

DGAE: Diffusion-Guided Autoencoder for Efficient Latent Representation Learning

Canonical Latent Representations in Conditional Diffusion Models

Dense Associative Memory with Epanechnikov Energy

The Diffusion Duality
