The field of federated learning is moving toward more secure and privacy-preserving methods for collaborative model training. Researchers are exploring approaches to mitigate threats such as data reconstruction attacks (DRAs), gradient-based attacks, and untargeted attacks. One notable direction combines explainable AI with targeted detection and mitigation strategies to identify and address malicious layers within models. Another focus is robust aggregation methods that detect and remove malicious model updates, thereby defending against untargeted attacks. Additionally, there is growing interest in integrating differential privacy, homomorphic encryption, and other privacy-preserving techniques into federated learning pipelines to protect sensitive client data.

Notable papers include:

- Random Client Selection on Contrastive Federated Learning for Tabular Data, which presents a comprehensive experimental analysis of gradient-based attacks in contrastive federated learning (CFL) environments and evaluates random client selection as a defensive strategy.
- Nosy Layers, Noisy Fixes: Tackling DRAs in Federated Learning Systems using Explainable AI, which introduces DRArmor, a defense mechanism that integrates explainable AI with targeted detection and mitigation strategies for DRAs.
- FedGraM: Defending Against Untargeted Attacks in Federated Learning via Embedding Gram Matrix, which proposes a robust aggregation method designed to defend against untargeted attacks in FL.
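To make the robust-aggregation idea concrete, here is a minimal sketch of the generic filter-then-average pattern: score each client update by its distance to a robust reference point and drop the most anomalous updates before averaging. This is an illustration of the general defense, not FedGraM's Gram-matrix method; the function name, the trimming fraction, and the median-distance score are all assumptions chosen for clarity.

```python
import numpy as np

def robust_aggregate(updates, trim_frac=0.2):
    """Illustrative filter-then-average aggregation (not FedGraM itself):
    score each client update by its distance to the coordinate-wise
    median, drop the highest-scoring fraction, average the rest."""
    updates = np.asarray(updates, dtype=float)   # shape: (n_clients, n_params)
    median = np.median(updates, axis=0)          # robust reference point
    scores = np.linalg.norm(updates - median, axis=1)
    n_keep = max(1, int(len(updates) * (1 - trim_frac)))
    keep = np.argsort(scores)[:n_keep]           # clients closest to the median
    return updates[keep].mean(axis=0)

# Nine honest clients near the true update, plus one poisoned outlier.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
malicious = np.full((1, 4), 50.0)                # large untargeted-attack update
agg = robust_aggregate(np.vstack([honest, malicious]))
```

A plain mean of these ten updates would be pulled far from the honest value by the single poisoned client, while the trimmed aggregate stays near it.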
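For the differential-privacy direction, a common client-side building block is to clip each model update's L2 norm and add Gaussian noise calibrated to the clip bound (the Gaussian-mechanism recipe used in DP-FedAvg-style pipelines). The sketch below assumes illustrative names and parameters; mapping the noise multiplier to a concrete privacy budget requires a separate accounting step not shown here.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise
    with std noise_mult * clip_norm. Names and defaults are illustrative."""
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clip bound.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

sanitized = dp_sanitize(np.ones(8) * 10.0, rng=np.random.default_rng(1))
```

Clipping bounds each client's influence on the aggregate, which is what lets the added noise mask any individual client's contribution.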