Research in federated learning is increasingly driven by concerns about data privacy and security. Recent work highlights two related needs: defending against attacks that exploit the gradient exchanges involved in the unlearning process, and developing effective backdoor unlearning methods. Researchers are exploring approaches that detect and remove backdoor threats while preserving model performance, and the emergence of stealthy backdoor attacks operating across multiple domains underscores the need for stronger defenses. Noteworthy papers in this area include DRAGD, which introduces an attack that exploits gradient discrepancies to reconstruct forgotten data; BURN, which proposes a defense framework integrating false correlation decoupling, progressive data refinement, and model purification to remove backdoor threats; 3S-Attack, which presents a backdoor attack that remains stealthy across the spatial, spectral, and semantic domains; and How to Protect Models against Adversarial Unlearning, which studies adversarial unlearning and proposes a method to protect model performance from its undesirable side effects.
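
The gradient-leakage concern raised by the first of these papers builds on a well-known primitive: optimizing a synthetic input until the gradient it induces matches an observed gradient. The sketch below illustrates that generic idea only, under stated assumptions (a toy fully connected model, an attacker who can attribute a gradient to the forgotten sample, and DLG-style joint optimization of a dummy input and a soft label); it is not DRAGD's actual procedure, and all names, dimensions, and hyperparameters are illustrative.

```python
# Sketch of generic gradient inversion (not the DRAGD algorithm): recover an
# approximation of a "forgotten" sample from a gradient attributed to it,
# e.g. the discrepancy between updates exchanged before and after unlearning.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
ce = nn.CrossEntropyLoss()

# The "forgotten" sample; the attacker never sees it directly.
x_secret, y_secret = torch.randn(1, 20), torch.tensor([1])

# Target gradient. In the threat model above this would be inferred from the
# gradient exchanges around the unlearning step; here we compute it directly.
target_grads = [g.detach() for g in torch.autograd.grad(
    ce(model(x_secret), y_secret), model.parameters())]

# The attacker jointly optimizes a dummy input and soft-label logits so that
# the gradient they induce matches the observed target gradient.
x_dummy = torch.randn(1, 20, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.05)

for step in range(1500):
    opt.zero_grad()
    # Cross-entropy with a soft label, written out explicitly.
    loss = -(y_dummy.softmax(-1) * model(x_dummy).log_softmax(-1)).sum()
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_match = sum(((dg - tg) ** 2).sum()
                     for dg, tg in zip(dummy_grads, target_grads))
    grad_match.backward()
    opt.step()

print("input reconstruction error:", (x_dummy - x_secret).norm().item())
print("recovered label:", y_dummy.softmax(-1).argmax().item(),
      "true label:", y_secret.item())
```

If this optimization converges, x_dummy approximates the forgotten sample, which is precisely the privacy risk that motivates stronger protection of gradient exchanges during unlearning.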