The field of diffusion models and generative learning is evolving rapidly, with an emphasis on improving the efficiency, quality, and interpretability of generative models. Recent work has centered on reducing the computational cost of diffusion models, for example through model pruning and knowledge distillation, while preserving their generative capabilities. There is also growing interest in incorporating representation learning into diffusion models to improve performance and enable more effective feature extraction, and in using diffusion models as teachers for downstream learning tasks, where they can act as compact and interpretable sources of transferable knowledge.

Noteworthy papers in this area include IGSM, which proposes a finetuning framework for pruned diffusion models, and Canonical Latent Representations in Conditional Diffusion Models, which identifies compact, interpretable latent representations in conditional diffusion models.
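To make the distillation idea concrete, here is a minimal, hypothetical sketch (not the method of any paper mentioned above): a small student denoiser is trained to match a larger teacher's noise predictions on noised samples, so the student approximates the teacher's behavior at lower cost. The `ToyDenoiser` class, the linear noising schedule, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical toy denoiser: an MLP that predicts the noise added to a
# (flattened) sample, conditioned on the diffusion timestep.
class ToyDenoiser(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the normalized timestep by concatenation.
        return self.net(torch.cat([x_t, t[:, None]], dim=-1))

dim, steps = 32, 1000
teacher = ToyDenoiser(dim, hidden=512)   # stands in for a large pretrained model
student = ToyDenoiser(dim, hidden=64)    # compact model to be distilled
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for _ in range(100):                      # illustrative training loop
    x0 = torch.randn(16, dim)             # clean samples (placeholder data)
    t = torch.randint(0, steps, (16,)).float() / steps
    noise = torch.randn_like(x0)
    # Simple linear noising schedule, for illustration only.
    x_t = (1 - t[:, None]) * x0 + t[:, None] * noise

    with torch.no_grad():
        target = teacher(x_t, t)          # teacher's noise prediction
    loss = nn.functional.mse_loss(student(x_t, t), target)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the teacher would be a pretrained diffusion model (or a pruned one being finetuned), and the matching target could be noise predictions, denoised estimates, or intermediate features, depending on the distillation scheme.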