Computer vision research is seeing rapid progress in diffusion models and adversarial techniques. Researchers are probing both the limits and the potential of these models, including how they behave under non-Gaussian noise and how to mitigate exposure bias (the train-test mismatch that arises because a model trained to denoise ground-truth noisy samples must, at sampling time, denoise its own imperfect predictions). There is also growing interest in methods that certify the robustness of vision models against adversarial examples and that ensure the privacy and fairness of generative models. Noteworthy papers include:
- Adaptive Diffusion Denoised Smoothing, which certifies the predictions of a vision model against adversarial examples by pairing a diffusion-based denoiser with randomized smoothing (a minimal sketch of the underlying certification appears after this list).
- FADE, a concept erasure method for text-to-image diffusion models that removes specified concepts from the model's generative repertoire (a sketch of a typical erasure objective also follows below).
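
To make the certification idea concrete, here is a minimal Python sketch of the generic randomized-smoothing certificate (Cohen et al., 2019) that denoised smoothing builds on: noisy copies of the input are denoised, classified, and the top class's vote share is lower-bounded to yield a certified L2 radius. The `denoise`, `classify`, `sigma`, and `num_classes` arguments are placeholders, and this is the standard procedure, not the adaptive scheme proposed in the paper.

```python
import numpy as np
from scipy.stats import beta, norm

def smoothed_certify(x, denoise, classify, sigma, num_classes,
                     n0=100, n=10_000, alpha=0.001, rng=None):
    """Certify a prediction on input `x` under an L2 perturbation budget.

    `denoise` maps noisy inputs back toward the data manifold (e.g. a
    reverse step of a diffusion model); `classify` returns integer labels.
    Returns (label, certified_radius), or (None, 0.0) if we must abstain.
    """
    rng = rng or np.random.default_rng()

    def vote_counts(num_samples):
        # Add isotropic Gaussian noise, denoise, and tally classifier votes.
        noisy = x[None] + sigma * rng.standard_normal((num_samples, *x.shape))
        labels = classify(denoise(noisy))
        return np.bincount(labels, minlength=num_classes)

    # Step 1: guess the top class from a small sample.
    guess = int(np.argmax(vote_counts(n0)))
    # Step 2: lower-bound its probability with a larger sample
    # (one-sided Clopper-Pearson bound at confidence 1 - alpha).
    k = vote_counts(n)[guess]
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0  # not confident enough in the top class: abstain
    # Certified L2 radius from the Neyman-Pearson argument.
    return guess, sigma * norm.ppf(p_lower)
```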
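
FADE's exact objective is not reproduced here; the sketch below instead shows a common pattern for concept erasure in text-to-image diffusion models: fine-tune the noise predictor so that, on the target concept, it matches a frozen copy steered *away* from that concept (negative guidance, as in ESD-style erasure). All handles (`unet`, `frozen_unet`, `encode_prompt`, `eta`) are illustrative assumptions rather than FADE's API.

```python
import torch
import torch.nn.functional as F

def erasure_loss(unet, frozen_unet, x_t, t, concept_prompt,
                 encode_prompt, eta=1.0):
    """Generic ESD-style erasure objective (not FADE's exact loss).

    Pushes the trainable model's prediction on the erased concept toward
    a negatively-guided target, so the concept is no longer generated.
    """
    c = encode_prompt(concept_prompt)   # embedding of the concept to erase
    null = encode_prompt("")            # unconditional embedding
    with torch.no_grad():
        eps_null = frozen_unet(x_t, t, null)
        eps_c = frozen_unet(x_t, t, c)
        # Steer away from the concept: unconditional prediction minus a
        # scaled step along the concept direction.
        target = eps_null - eta * (eps_c - eps_null)
    eps = unet(x_t, t, c)               # trainable prediction on the concept
    return F.mse_loss(eps, target)
```

Minimizing this loss over noisy latents `x_t` drawn from the usual diffusion training distribution degrades only the concept-conditioned predictions, leaving the rest of the model's generative repertoire largely intact.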