Safe and Responsible Generative Models

The field of generative modeling is moving toward safer and more responsible systems. Recent work focuses on erasing unwanted concepts from trained models so they can no longer reproduce harmful, private, or copyrighted content, and machine unlearning has gained significant attention as a way to remove targeted information from a trained model without retraining it from scratch. Together, these advances help mitigate the safety and copyright concerns associated with generative models. Noteworthy papers in this area propose new concept-erasure objectives, gradient-aware immunization against malicious fine-tuning, and distributional unlearning frameworks that forget entire distributions rather than individual samples. These works suggest that generative models can remain highly performant while also being safe and responsible.
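To make the idea of a concept-erasure objective concrete, the sketch below fine-tunes a copy of a noise predictor so that its concept-conditioned prediction is steered away from the erased concept and toward the unconditional prediction, in the spirit of negative-guidance erasure methods. This is an illustrative toy, not the objective of any specific paper listed under Sources; the names TinyNoisePredictor, erasure_loss, and eta are hypothetical placeholders.

```python
# Minimal sketch of a negative-guidance concept-erasure objective (illustrative only;
# not the exact method of any paper cited in Sources).
import torch
import torch.nn as nn


class TinyNoisePredictor(nn.Module):
    """Stand-in for a diffusion noise predictor: maps (x_t, t, cond) to predicted noise."""

    def __init__(self, dim=16, cond_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1 + cond_dim, 64), nn.SiLU(), nn.Linear(64, dim)
        )

    def forward(self, x_t, t, cond):
        h = torch.cat([x_t, t[:, None], cond], dim=-1)
        return self.net(h)


def erasure_loss(student, frozen, x_t, t, concept_emb, null_emb, eta=1.0):
    """Push the student's concept-conditioned prediction away from the concept
    and toward the frozen model's unconditional prediction."""
    with torch.no_grad():
        eps_uncond = frozen(x_t, t, null_emb)
        eps_concept = frozen(x_t, t, concept_emb)
        target = eps_uncond - eta * (eps_concept - eps_uncond)
    eps_student = student(x_t, t, concept_emb)
    return ((eps_student - target) ** 2).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    frozen = TinyNoisePredictor()
    student = TinyNoisePredictor()
    student.load_state_dict(frozen.state_dict())  # start from the original weights
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    concept_emb = torch.randn(4, 8)  # embedding of the concept to erase (toy)
    null_emb = torch.zeros(4, 8)     # unconditional (empty-prompt) embedding (toy)
    for step in range(100):
        x_t = torch.randn(4, 16)     # noised latents
        t = torch.rand(4)            # diffusion timesteps
        loss = erasure_loss(student, frozen, x_t, t, concept_emb, null_emb)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch, only the student copy is updated, so generation quality on unrelated concepts is preserved by the frozen reference while the erased concept's guidance direction is inverted.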

Sources

Minimalist Concept Erasure in Generative Models

GIFT: Gradient-aware Immunization of diffusion models against malicious Fine-Tuning with safe concepts retention

Distributional Unlearning: Forgetting Distributions, Not Just Samples

Machine Unlearning for Streaming Forgetting

Towards Resilient Safety-driven Unlearning for Diffusion Models against Downstream Fine-tuning

Finding Dori: Memorization in Text-to-Image Diffusion Models Is Less Local Than Assumed

An h-space Based Adversarial Attack for Protection Against Few-shot Personalization

Generalized Dual Discriminator GANs

CA-Cut: Crop-Aligned Cutout for Data Augmentation to Learn More Robust Under-Canopy Navigation

Machine Unlearning of Traffic State Estimation and Prediction

A Comprehensive Review of Diffusion Models in Smart Agriculture: Progress, Applications, and Challenges

COT-AD: Cotton Analysis Dataset
