Advances in Diffusion Models for Image Synthesis and Editing

The field of image synthesis and editing is advancing rapidly, with a focus on improving the quality and controllability of generated images. Recent work on diffusion models has yielded notable gains in synthesis quality, including better text-image alignment and reduced color distortion. Researchers are also extending diffusion models to new applications such as counterfactual image generation and video editing, which demand careful handling of causal relationships and temporal consistency.

Noteworthy papers in this area include Angle Domain Guidance, which mitigates color distortion by replacing the usual extrapolation of guided sampling with a rotation in latent space, and Causally Steered Diffusion, a framework for counterfactual video generation that preserves causal relationships in the edited video. Other notable works, such as Aligned Novel View Image and Geometry Synthesis and Decoupled Classifier-Free Guidance, demonstrate the potential of diffusion models for novel view synthesis and attribute editing. Overall, the field is moving toward more sophisticated and controllable synthesis and editing methods that preserve causal structure while achieving high-quality results.
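To make the extrapolation-versus-rotation contrast concrete, the sketch below compares standard classifier-free guidance, which extrapolates linearly in latent space and can push the latent away from its training-time norm (one suspected source of color distortion), with a norm-preserving, slerp-style rotation in the spirit of Angle Domain Guidance. The function names and the exact rotation formula are illustrative assumptions, not the paper's published implementation.

```python
# Illustrative sketch, not the published Angle Domain Guidance algorithm.
import numpy as np

def cfg_extrapolate(uncond, cond, scale):
    """Standard classifier-free guidance: linear extrapolation.
    Large scales can change the latent's norm."""
    return uncond + scale * (cond - uncond)

def angle_guidance(uncond, cond, scale):
    """Assumed norm-preserving alternative: rotate the unconditional
    direction toward the conditional one by a scaled angle (slerp-style),
    keeping the latent norm fixed."""
    u = uncond / np.linalg.norm(uncond)
    c = cond / np.linalg.norm(cond)
    theta = np.arccos(np.clip(u @ c, -1.0, 1.0))  # angle between directions
    if np.isclose(theta, 0.0):
        return uncond
    phi = np.clip(scale * theta, 0.0, np.pi)      # scaled rotation angle
    # spherical interpolation between the two unit directions
    d = (np.sin(theta - phi) * u + np.sin(phi) * c) / np.sin(theta)
    return np.linalg.norm(uncond) * d

rng = np.random.default_rng(0)
uncond = rng.normal(size=16)
cond = uncond + 0.1 * rng.normal(size=16)

lin = cfg_extrapolate(uncond, cond, 7.5)  # norm may drift
rot = angle_guidance(uncond, cond, 7.5)   # norm of `uncond` is preserved
```

The rotation variant keeps `np.linalg.norm(rot)` equal to `np.linalg.norm(uncond)` by construction, which is the property linked above to reduced color distortion.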

Sources

Angle Domain Guidance: Latent Diffusion Requires Rotation Rather Than Extrapolation

Preserving Clusters in Prompt Learning for Unsupervised Domain Adaptation

Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation

Decoupled Classifier-Free Guidance for Counterfactual Diffusion Models

Causally Steered Diffusion for Automated Video Counterfactual Generation

Align Your Flow: Scaling Continuous-Time Flow Map Distillation

Expressive Score-Based Priors for Distribution Matching with Geometry-Preserving Regularization

Cost-Aware Routing for Efficient Text-To-Image Generation

One-shot Face Sketch Synthesis in the Wild via Generative Diffusion Prior and Instruction Tuning

When Model Knowledge meets Diffusion Model: Diffusion-assisted Data-free Image Synthesis with Alignment of Domain and Class

Control and Realism: Best of Both Worlds in Layout-to-Image without Training
