The field of federated learning is advancing rapidly, with a strong focus on improving security and privacy. Recent work has highlighted the importance of protecting against gradient inversion attacks, malicious clients, and backdoor attacks, and researchers are proposing defense mechanisms such as shadow modeling, dimensionality reduction, and reputation systems to mitigate these threats. There is also growing interest in frameworks for detecting and preventing malicious behavior, such as anomaly detection and intrusion detection systems, while decentralized finance platforms and automated market makers are being explored as a basis for more flexible and scalable reward distribution. Notable papers in this area include SecureFed, which presents a two-phase framework for detecting malicious clients, and SPA, which proposes a backdoor attack framework that leverages feature-space alignment. Together, these advances reflect the field's effort to address the security challenges specific to federated learning and to preserve the privacy and integrity of sensitive data.
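To make the anomaly-detection idea mentioned above concrete, here is a minimal sketch of a server-side filter that flags client updates whose norms deviate strongly from the median before averaging. It is an illustrative example only, not the mechanism of SecureFed or any other cited paper, and the function name and threshold are assumptions chosen for clarity.

```python
import numpy as np

def filter_and_aggregate(client_updates, z_threshold=2.5):
    """Illustrative server-side defense (hypothetical, simplified):
    drop client updates whose L2 norm is a robust-z-score outlier
    relative to the median norm, then federated-average the rest."""
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) + 1e-12  # robust spread estimate
    # Keep only updates within z_threshold robust z-scores of the median norm.
    kept = [u for u, n in zip(client_updates, norms)
            if abs(n - median) / (1.4826 * mad) <= z_threshold]
    if not kept:  # fall back to all updates if every client was flagged
        kept = client_updates
    return np.mean(kept, axis=0)

# Example: nine benign updates plus one heavily scaled (suspicious) update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.01, size=100) for _ in range(9)]
updates.append(rng.normal(0, 0.01, size=100) * 50)  # anomalously large update
aggregated = filter_and_aggregate(updates)
```

Real systems typically combine several such signals (update norms, pairwise similarity, historical client reputation) rather than relying on a single statistic, but the sketch conveys the basic pattern of screening updates before aggregation.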