The field of generative models is moving toward safer, more responsible systems. Recent work focuses on erasing unwanted concepts from generative models, making them more reliable and trustworthy. Machine unlearning has likewise gained significant attention, with researchers proposing approaches to remove unwanted information from already-trained models. These advances can help mitigate the safety and copyright concerns associated with generative models. Noteworthy papers in this area propose novel concept-erasure objectives, gradient-aware immunization techniques, and distributional unlearning frameworks, demonstrating that generative models can remain highly performant while becoming safer and more responsible.
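As a rough illustration of what a concept-erasure objective can look like, one well-known family (ESD-style negative guidance for diffusion models) fine-tunes the model so its concept-conditioned noise prediction is steered toward a negatively guided target built from the unconditioned prediction. The sketch below is a minimal NumPy version under that assumption; the function names and the guidance scale `eta` are illustrative, not from any specific paper in this digest.

```python
import numpy as np

def erasure_target(eps_uncond, eps_cond, eta=1.0):
    """Negatively guided target: push the prediction away from the
    concept direction (eps_cond - eps_uncond). ESD-style sketch;
    eta is an assumed guidance scale."""
    return eps_uncond - eta * (eps_cond - eps_uncond)

def erasure_loss(eps_pred, eps_uncond, eps_cond, eta=1.0):
    """Mean-squared error between the fine-tuned model's prediction
    and the erasure target. Minimizing this over concept prompts
    discourages the model from generating the concept."""
    target = erasure_target(eps_uncond, eps_cond, eta)
    return float(np.mean((eps_pred - target) ** 2))
```

The loss reaches zero exactly when the fine-tuned prediction matches the negatively guided target, i.e., when the model has "unlearned" the concept direction under this objective.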