The field of federated learning is increasingly focused on security and privacy. Recent work develops defenses against several attack classes, including membership inference, Byzantine, and data poisoning attacks, while complementary work uncovers new vulnerabilities through data reconstruction and hardware-level attacks. On the defense side, novel mechanisms such as representative-attention aggregation have been proposed to mitigate backdoor threats, and there is growing interest in scalable, unified methods for membership inference auditing and defense. Notable contributions include 'A Taxonomy of Attacks and Defenses in Split Learning' and 'Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning'. On the attack side, 'Remote Rowhammer Attack using Adversarial Observations on Federated Learning Clients' and 'Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning' are noteworthy for their innovative approaches to exploiting vulnerabilities in federated learning systems.
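To make the membership inference threat mentioned above concrete, here is a minimal sketch of a classic confidence-thresholding attack: a model tends to be overconfident on data it was trained on, so an adversary who can query prediction confidences may guess membership by thresholding them. The function name, the threshold value, and the toy confidence scores are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def confidence_threshold_mia(top_class_confidences, threshold=0.9):
    """Toy membership inference attack: flag a sample as a likely
    training-set member when the model's top-class confidence exceeds
    a fixed threshold. Both the threshold and the inputs below are
    hypothetical values chosen for illustration."""
    return np.asarray(top_class_confidences) >= threshold

# Illustrative confidences a trained model might assign.
member_conf = [0.97, 0.99, 0.93]      # samples seen during training
nonmember_conf = [0.55, 0.71, 0.62]   # unseen samples

guesses = confidence_threshold_mia(member_conf + nonmember_conf)
```

In this toy setting the attack separates members from non-members perfectly; real attacks must calibrate the threshold (e.g. via shadow models) because member and non-member confidence distributions overlap.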