Advances in Diffusion Models

The field of diffusion models is increasingly focused on addressing the limitations and vulnerabilities of current architectures. Researchers are developing techniques to improve the robustness and generalization of diffusion models, including novel sampling guidance strategies and defenses against adversarial and backdoor attacks. Notable advances include the identification of collapse errors in ODE-based diffusion sampling and new mechanisms for removing backdoor triggers from trained models. The theoretical interplay between memorization and generalization in diffusion models is also under investigation, yielding insights relevant to real-world deployment.
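For context on what "ODE-based (deterministic) sampling" means here: a DDIM-style update injects no fresh noise, so every step is a deterministic function of the noise prediction, and any prediction error is carried forward through the whole trajectory. A minimal sketch of one such step (generic DDIM update, not the cited paper's analysis; the function name and variable names are illustrative):

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev):
    """One deterministic DDIM (probability-flow ODE style) update.

    No stochastic noise is added, so errors in eps_pred are not
    averaged out across steps -- they propagate deterministically.
    """
    # Predict the clean sample x_0 from the noisy sample and noise estimate
    x0_pred = (x_t - np.sqrt(1 - alpha_t) * eps_pred) / np.sqrt(alpha_t)
    # Move along the deterministic trajectory toward the previous timestep
    return np.sqrt(alpha_prev) * x0_pred + np.sqrt(1 - alpha_prev) * eps_pred
```

With a perfect noise prediction and `alpha_prev = 1.0`, the step recovers the clean sample exactly; in practice the learned predictor is imperfect, which is the regime where such deterministic samplers can accumulate errors.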

Noteworthy papers include:

- On the Collapse Errors Induced by the Deterministic Sampler for Diffusion Models identifies a previously unrecognized failure mode in ODE-based diffusion sampling.
- PromptFlare proposes an adversarial protection method that shields images from malicious modification by diffusion-based inpainting models.
- Dual Orthogonal Guidance for Robust Diffusion-based Handwritten Text Generation introduces a sampling guidance strategy that improves content clarity and style variability.
- On the Edge of Memorization in Diffusion Models provides a theoretical account of the interplay between memorization and generalization in diffusion models.
- Sealing The Backdoor proposes selectively erasing a model's learned associations between adversarial text triggers and poisoned outputs.
- Unleashing Uncertainty introduces an efficient method for machine unlearning in diffusion models.
- FW-GAN proposes a one-shot handwriting synthesis framework that generates realistic, writer-consistent text from a single example.
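Several of the papers above build on sampling guidance. The standard baseline is classifier-free guidance, which extrapolates from an unconditional to a conditional noise prediction; "orthogonal" variants additionally project the guidance direction to control how it interacts with the conditional prediction. The sketch below shows generic classifier-free guidance plus an illustrative orthogonal projection; it is not the Dual Orthogonal Guidance paper's exact formulation, and the function and parameter names are assumptions:

```python
import numpy as np

def guided_eps(eps_cond, eps_uncond, w=7.5, orthogonal=False):
    """Classifier-free guidance on noise predictions.

    guidance direction g = eps_cond - eps_uncond; with orthogonal=True,
    the component of g parallel to eps_cond is removed (illustrative
    choice of projection, not a specific published method).
    """
    g = eps_cond - eps_uncond
    if orthogonal:
        unit = eps_cond / (np.linalg.norm(eps_cond) + 1e-8)
        g = g - np.dot(g, unit) * unit  # keep only the orthogonal component
    return eps_uncond + w * g
```

With `w = 1` and no projection this reduces to the plain conditional prediction; larger `w` amplifies the guidance direction, which is where projection schemes aim to curb artifacts.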

Sources

On the Collapse Errors Induced by the Deterministic Sampler for Diffusion Models

PromptFlare: Prompt-Generalized Defense via Cross-Attention Decoy in Diffusion-Based Inpainting

Dual Orthogonal Guidance for Robust Diffusion-based Handwritten Text Generation

On the Edge of Memorization in Diffusion Models

Sealing The Backdoor: Unlearning Adversarial Text Triggers In Diffusion Models Using Knowledge Distillation

Unleashing Uncertainty: Efficient Machine Unlearning for Generative AI

FW-GAN: Frequency-Driven Handwriting Synthesis with Wave-Modulated MLP Generator
