The field of generative diffusion models is advancing rapidly, with a strong focus on improving efficiency and reducing computational cost. Recent work centers on lightweight model designs, novel caching mechanisms, and training-free acceleration frameworks, which together enable faster image and video synthesis while preserving output quality. Researchers are exploring token reduction, multi-path compression, and hierarchical caching to cut computational overhead, and training-free methods are emerging that provide layer-wise control and consistent outputs. Overall, the field is moving toward more accessible and controllable generative workflows.

Noteworthy papers include E-MMDiT, an efficient multimodal diffusion model with competitive results; H2-Cache, which introduces a hierarchical caching mechanism for high-performance acceleration; LeMiCa, which delivers dual improvements in inference speed and generation quality for diffusion-based video generation; TAUE, which enables zero-shot, layer-wise image generation without fine-tuning or auxiliary datasets; and Tortoise and Hare Guidance, a training-free strategy that accelerates diffusion sampling while maintaining high-fidelity generation.
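
To make the caching trend concrete, the sketch below illustrates the general intuition behind step-wise feature caching in diffusion inference: activations of expensive deep blocks change slowly across adjacent denoising steps, so they can be reused for several steps instead of recomputed. This is a minimal, hypothetical example, not the actual H2-Cache or LeMiCa algorithm; all names (`TinyDenoiser`, `refresh_every`, the update rule) are illustrative assumptions.

```python
# Minimal sketch of step-wise feature caching for diffusion inference.
# Illustrative only: not the H2-Cache or LeMiCa method. The idea shown is
# that deep-block activations drift slowly between adjacent denoising
# steps, so they can be cached and reused to save compute.

import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy denoiser with a cheap shallow path and an expensive deep path."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.shallow = nn.Sequential(nn.Linear(dim, dim), nn.SiLU())
        self.deep = nn.Sequential(
            nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        self.head = nn.Linear(dim, dim)

    def forward(self, x, cached_deep=None):
        h = self.shallow(x)
        # Reuse the expensive deep features when a cached copy is supplied.
        deep = self.deep(h) if cached_deep is None else cached_deep
        return self.head(h + deep), deep

@torch.no_grad()
def sample(model, steps: int = 50, refresh_every: int = 5, dim: int = 64):
    x = torch.randn(1, dim)
    cache = None
    for t in range(steps):
        # Recompute deep features only every `refresh_every` steps;
        # in between, reuse the cached activations.
        reuse = cache if (t % refresh_every) else None
        eps, cache = model(x, cached_deep=reuse)
        x = x - 0.02 * eps  # placeholder update rule, not a real scheduler
    return x

model = TinyDenoiser()
print(sample(model).shape)  # torch.Size([1, 64])
```

Real caching schemes differ mainly in *what* they cache (features, residuals, attention maps) and *when* they refresh it (fixed intervals, error thresholds, or learned schedules); the hierarchical and guidance-aware strategies in the papers above refine exactly those two decisions.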