The field of diffusion models for image synthesis is moving toward more efficient and adaptive architectures. Recent work focuses on improving the quality of generated images while reducing computational cost and adapting to constrained resources such as transmission bandwidth. Notable advances include sparse-dense residual fusion, lightweight adaptation layers, and model-agnostic frameworks for extending the resolution of pretrained models. These innovations could ease the deployment of diffusion models across diverse domains, including those with limited data. Noteworthy papers include:
- BADiff, which introduces a bandwidth-adaptive diffusion model that improves the visual fidelity of images generated under bandwidth-constrained conditions.
- Sprint, which presents a simple method for efficient diffusion transformers that enables aggressive token dropping while preserving quality (a generic token-dropping sketch follows this list).
- LiteDiff, which proposes a lightweight adaptation approach that inserts trainable adaptation layers into a frozen diffusion U-Net (see the adapter sketch after this list).
- ScaleDiff, which offers a model-agnostic and highly efficient framework for extending the resolution of pretrained diffusion models without additional training.
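To make the token-dropping idea concrete, here is a minimal sketch of a transformer block that processes only a random subset of tokens and scatters the results back. This is a generic illustration of the technique, not Sprint's actual algorithm; the `keep_ratio` parameter, the random token selection, and the pass-through for dropped tokens are all assumptions.

```python
# A minimal, generic sketch of token dropping in a diffusion transformer
# block. NOT Sprint's actual algorithm: only the general idea of processing
# a subset of tokens and scattering the results back.
import torch
import torch.nn as nn

class TokenDropBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio  # fraction of tokens processed (assumption)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        n_keep = max(1, int(n * self.keep_ratio))
        # Sample a random subset of token indices per batch element.
        idx = torch.rand(b, n, device=x.device).argsort(dim=1)[:, :n_keep]
        kept = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, d))
        # Attention + MLP run only over the kept tokens.
        h = self.norm(kept)
        h = kept + self.attn(h, h, h, need_weights=False)[0]
        h = h + self.mlp(self.norm(h))
        # Scatter processed tokens back; dropped tokens pass through unchanged.
        out = x.clone()
        out.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, d), h)
        return out

x = torch.randn(2, 256, 384)   # (batch, tokens, dim)
y = TokenDropBlock(384)(x)     # attention cost scales with keep_ratio, not n
print(y.shape)                 # torch.Size([2, 256, 384])
```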
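Similarly, here is a minimal sketch of adapter-style adaptation of a frozen network, in the spirit of LiteDiff's adaptation layers. The bottleneck design, the zero-initialized up-projection, and the toy stand-in block are assumptions; LiteDiff's actual layer design may differ.

```python
# A minimal sketch of lightweight adaptation layers attached to a frozen
# network. The pretrained weights stay frozen; only the small residual
# adapters are trained.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project.
    The up-projection is zero-initialized so training starts at identity."""
    def __init__(self, channels: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedBlock(nn.Module):
    """Wraps a frozen pretrained block with a trainable adapter."""
    def __init__(self, pretrained_block: nn.Module, channels: int):
        super().__init__()
        self.block = pretrained_block
        for p in self.block.parameters():
            p.requires_grad_(False)       # pretrained weights stay frozen
        self.adapter = Adapter(channels)  # only these parameters train

    def forward(self, x):
        return self.adapter(self.block(x))

# Stand-in for one conv block of a pretrained diffusion U-Net (assumption:
# a real setup would wrap blocks of an actual pretrained model instead).
frozen = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.SiLU())
model = AdaptedBlock(frozen, channels=64)
trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # counts only adapter parameters
```

Because the adapter starts as an identity mapping, the adapted model initially reproduces the pretrained model's outputs exactly, which tends to stabilize fine-tuning on small datasets.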