The field of large language models (LLMs) is evolving rapidly, with growing attention to unlearning and fairness. Recent work highlights the need to remove sensitive or harmful content from LLMs while preserving their overall utility, and several approaches have been proposed to that end, including attention-shifting frameworks, context-aware unlearning, and controllable machine unlearning via gradient pivoting. These methods aim to balance unlearning efficacy against model fidelity and report promising experimental results. Noteworthy papers include "Wisdom is Knowing What not to Say: Hallucination-Free LLMs Unlearning via Attention Shifting", which introduces an attention-shifting framework for selective unlearning, and "Context-aware Fairness Evaluation and Mitigation in LLMs", which proposes a dynamic, reversible, pruning-based framework for detecting and mitigating bias. Overall, the field is moving toward more robust and fair LLMs that can adapt to changing contexts and requirements.
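To make the efficacy-versus-fidelity trade-off concrete, the sketch below shows a generic gradient-based unlearning step, not the procedure of any of the papers above: the model's loss is ascended on a "forget" batch while being descended on a "retain" batch, with a weighting term `lam` controlling how strongly utility is preserved. The HuggingFace-style `model(**batch).loss` interface and the function name are illustrative assumptions.

```python
# Minimal sketch of a generic unlearning objective (assumed setup, not a
# specific paper's method): trade off forgetting against retained utility.
import torch


def unlearning_step(model, forget_batch, retain_batch, optimizer, lam=1.0):
    """One gradient step balancing unlearning efficacy vs. model fidelity.

    forget_batch / retain_batch: dicts with input_ids, attention_mask, labels
    (HuggingFace causal-LM convention, assumed here for illustration).
    lam: weight on the retain term; larger values favor preserving utility.
    """
    optimizer.zero_grad()

    # Ascend on the forget data (negated loss) to push the model away
    # from the content targeted for removal.
    forget_loss = model(**forget_batch).loss

    # Descend on the retain data to anchor general capabilities.
    retain_loss = model(**retain_batch).loss

    total = -forget_loss + lam * retain_loss
    total.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

In this formulation, sweeping `lam` traces out the trade-off the surveyed methods try to improve on: small values forget aggressively but degrade the model, while large values protect utility at the cost of slower or weaker unlearning.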