Advances in Diffusion Models and Adversarial Techniques

The field of computer vision is witnessing significant developments in diffusion models and adversarial techniques. Researchers are actively exploring the limitations and potential of these models, including their performance under non-Gaussian noise and their ability to mitigate exposure bias. There is also growing interest in methods that certify the robustness of vision models against adversarial examples and that ensure the privacy and fairness of generative models. Noteworthy papers include:

  • Adaptive Diffusion Denoised Smoothing, which proposes a method for certifying the predictions of a vision model against adversarial examples.
  • FADE, which introduces a novel concept erasure method for text-to-image diffusion models, designed to remove specified concepts from the model's generative repertoire.
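The certification idea behind the first paper builds on randomized smoothing: classify many Gaussian-noised copies of an input, take the majority vote, and derive a certified L2 radius from how dominant the top class is. The sketch below shows only this generic randomized-smoothing recipe (in the style of Cohen et al.), not the paper's adaptive, diffusion-denoised variant; the base classifier, parameter values, and `certify` helper are all hypothetical, for illustration.

```python
from statistics import NormalDist

import numpy as np

def certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Smooth `base_classifier` with Gaussian noise: return the majority
    class over n noisy copies of x and a certified L2 radius
    sigma * Phi^{-1}(p_hat), where p_hat is the top-class frequency."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = base_classifier(noisy)
        counts[c] = counts.get(c, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp away from 1.0 so the inverse CDF stays finite; a real
    # implementation would use a statistical lower bound on p_A instead.
    p_hat = min(top_count / n, 1.0 - 1.0 / (2 * n))
    if p_hat <= 0.5:
        return top_class, 0.0  # no certificate: abstain with radius 0
    radius = sigma * NormalDist().inv_cdf(p_hat)
    return top_class, radius

# Toy base classifier (an assumption, not from the paper):
# predicts 1 when the mean pixel value is positive.
clf = lambda v: int(v.mean() > 0)
label, radius = certify(clf, np.full(16, 0.5))
```

The diffusion-denoised variants insert a denoising step before the base classifier, so that an off-the-shelf model can handle the injected Gaussian noise; the certification arithmetic above stays the same.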

Sources

The relative importance of being Gaussian

Adaptive Diffusion Denoised Smoothing: Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion

Towards Imperceptible JPEG Image Hiding: Multi-range Representations-driven Adversarial Stego Generation

Frequency Regulation for Exposure Bias Mitigation in Diffusion Models

FADE: Adversarial Concept Erasure in Flow Models