The field of diffusion models for image generation is moving toward more efficient and scalable architectures. Recent work has focused on reducing the computational overhead of existing models while preserving their generative quality, prompting the exploration of alternative building blocks, such as convolutional neural networks, together with new training strategies and objective functions. Several papers accelerate diffusion-model inference, for example by learning ODE integration or by merging operators for diffusion trajectory distillation. Others propose forward-only diffusion models and one-step diffusion-based image compression methods, which achieve competitive performance with significant efficiency gains.

Noteworthy papers include:
- DiCo, which introduces a compact channel attention mechanism to enhance feature diversity in convolutional diffusion models.
- One-Step Diffusion-Based Image Compression with Semantic Distillation, which proposes a one-step diffusion-based generative image codec that uses semantic distillation to improve perceptual quality.
- Forward-only Diffusion Probabilistic Models, which presents a simple yet efficient generative framework built on a state-dependent linear stochastic differential equation.
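
To make the forward-only idea concrete, the following is a minimal, hedged sketch of simulating a state-dependent linear SDE with the Euler–Maruyama scheme. The coefficient functions `drift_coef` and `diffusion_coef` are illustrative placeholders chosen for this example, not the schedules used in the cited paper.

```python
import numpy as np

def euler_maruyama(x0, drift_coef, diffusion_coef, t0=0.0, t1=1.0,
                   n_steps=1000, rng=None):
    """Simulate one trajectory of dx = drift_coef(t) * x dt + diffusion_coef(t) dW.

    The drift is linear in the state x with a time-varying coefficient,
    which is what "state-dependent linear SDE" refers to here.
    """
    rng = rng or np.random.default_rng(0)
    dt = (t1 - t0) / n_steps
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        t = t0 + i * dt
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + drift_coef(t) * x * dt + diffusion_coef(t) * dw
    return x

# Example: contraction toward zero with noise annealed away by t1
# (hypothetical schedules, purely for illustration).
x_final = euler_maruyama(
    x0=np.ones(4),
    drift_coef=lambda t: -2.0 * t,
    diffusion_coef=lambda t: 0.5 * (1.0 - t),
)
```

Because the noise schedule decays to zero at the terminal time, repeated runs concentrate near the deterministic solution of the drift alone, which is the intuition behind generating samples with a single forward pass through such a process.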