Research on diffusion models is moving toward more personalized and controlled generation, with particular focus on unlearning: removing specific knowledge or concepts from pretrained models. Recent work addresses the limitations of earlier methods by combining Low-Rank Adaptation (LoRA) with Classifier-Free Guidance (CFG) to give precise control over what is erased and what is preserved (see the sketch after this section). There is also growing interest in using the robustness of the language modality to guide the adaptation of vision models, particularly for unsupervised domain adaptation.

Noteworthy papers in this area include UnGuide, which achieves controlled concept removal while retaining the expressive power of diffusion models, and TRUST, which exploits the robustness of the language modality to guide the adaptation of a vision model and sets a new state of the art on both classical and complex domain shifts. Also notable are CoAR, which enables efficient and effective personalized text-to-image generation, and SemPT, which tackles the generalization challenge in vision-language models by leveraging attribute-level knowledge shared across categories.
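To make the LoRA-plus-CFG recipe concrete, below is a minimal PyTorch sketch of concept erasure via negative classifier-free guidance, the general idea behind methods like UnGuide rather than its actual implementation. The `ToyUNet` stand-in, the embedding shapes, and the guidance scale are illustrative assumptions; a real setup would fine-tune LoRA adapters inside a pretrained denoiser while keeping the base weights frozen.

```python
# Sketch: erase a concept by fitting an adapted model, conditioned on the
# unwanted concept, to a negative-CFG target that points away from it.
# All names and hyperparameters here are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyUNet(nn.Module):
    """Stand-in for a conditional noise predictor eps(x_t, t, c)."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Linear(2 * dim + 1, dim)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        t_feat = t.float().unsqueeze(-1)
        return self.net(torch.cat([x_t, c, t_feat], dim=-1))


def erasure_step(unet_adapted, unet_frozen, x_t, t, c_concept, c_null, s: float = 3.0):
    """One training step of negative-CFG erasure:
        eps_target = eps(x_t, c_null) - s * (eps(x_t, c_concept) - eps(x_t, c_null))
    In a LoRA setup only the adapter parameters would receive gradients;
    here the whole toy model is trained for brevity."""
    with torch.no_grad():
        eps_null = unet_frozen(x_t, t, c_null)        # unconditional prediction
        eps_concept = unet_frozen(x_t, t, c_concept)  # concept-conditioned prediction
        target = eps_null - s * (eps_concept - eps_null)
    return F.mse_loss(unet_adapted(x_t, t, c_concept), target)


if __name__ == "__main__":
    dim, batch = 16, 4
    frozen = ToyUNet(dim)
    adapted = ToyUNet(dim)
    adapted.load_state_dict(frozen.state_dict())  # start from "pretrained" weights
    opt = torch.optim.Adam(adapted.parameters(), lr=1e-3)

    x_t = torch.randn(batch, dim)                 # noised latents
    t = torch.randint(0, 1000, (batch,))          # diffusion timesteps
    c_concept = torch.randn(batch, dim)           # embedding of the unwanted concept
    c_null = torch.zeros(batch, dim)              # empty / unconditional prompt

    loss = erasure_step(adapted, frozen, x_t, t, c_concept, c_null)
    loss.backward()
    opt.step()
    print(f"erasure loss: {loss.item():.4f}")
```

The key design choice in this family of methods is that a frozen copy of the pretrained model supplies the target, so the adapted model learns to predict noise that steers generation away from the concept on concept-conditioned inputs, while behavior on unrelated prompts, and the expressive power of the base model, is largely preserved.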