Research in federated learning is increasingly focused on the security vulnerabilities inherent to its decentralized architecture, particularly backdoor attacks and model poisoning. Recent work highlights robust aggregation methods, careful hyperparameter tuning, and secure aggregation protocols as key defenses, while the emergence of adaptive attack strategies that evade these defenses has in turn driven the design of stronger countermeasures, pushing the field toward more secure and robust federated learning frameworks. Noteworthy papers include: FedThief, which proposes a novel self-centered federated learning attack paradigm; Hammer and Anvil, which presents a principled defense against backdoors in federated learning; Stealth by Conformity, which introduces an adaptive poisoning strategy that evades robust aggregation defenses; and Prototype-Guided Robust Learning, which proposes a backdoor defense that overcomes limitations of prior approaches.
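To make the role of robust aggregation concrete, the sketch below illustrates one standard rule, coordinate-wise median aggregation, which tolerates a minority of poisoned client updates that would skew a plain mean. This is a minimal illustrative example, not the method of any paper cited above; it assumes client updates arrive as NumPy arrays of equal shape, and the attacker's update values are fabricated for the toy scenario.

```python
import numpy as np

def mean_aggregate(updates: np.ndarray) -> np.ndarray:
    """Plain FedAvg-style mean: a single poisoned client can shift the result."""
    return np.mean(updates, axis=0)

def median_aggregate(updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median: robust to a minority of poisoned clients."""
    return np.median(updates, axis=0)

# Toy scenario (values are illustrative): 9 honest clients send small
# updates, 1 attacker submits a large model-poisoning update.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 4))   # benign gradient updates
poisoned = np.full((1, 4), 50.0)             # attacker's outlier update
updates = np.vstack([honest, poisoned])

print("mean  :", mean_aggregate(updates))    # dragged toward the attacker
print("median:", median_aggregate(updates))  # stays near the honest updates
```

Adaptive attacks such as the one in Stealth by Conformity work precisely by keeping malicious updates statistically close to benign ones, which is why simple outlier-rejection rules like this are a baseline rather than a complete defense.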