The field of federated learning and homomorphic encryption is evolving rapidly, with a focus on robust and efficient methods that protect user privacy and resist attacks. Recent research addresses Byzantine attacks, client heterogeneity, and high-dimensional models in federated learning: new mechanisms identify and exclude poisoned model updates, and aggregation schemes combine client updates while preserving their collaborative signal. In parallel, homomorphic encryption has seen more efficient and scalable frameworks, including binary variants of existing schemes and selective encryption methods that protect only part of the model. These developments ease the deployment of secure, private machine learning models in real-world applications.

Noteworthy papers include FedGuard, which proposes a novel federated learning mechanism to defend against Byzantine attacks; SenseCrypt, which introduces a sensitivity-guided selective homomorphic encryption framework for cross-device federated learning; SelectiveShield, a lightweight hybrid defense framework that adaptively integrates selective homomorphic encryption with differential privacy; and PrivDFS, which presents a new paradigm for private inference that replaces a single exposed representation with distributed feature sharing, ensuring strong privacy guarantees while maintaining model utility.
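To make the "identify and exclude poisoned models" idea concrete, here is a minimal sketch of a generic distance-based filter for Byzantine-robust aggregation. It is not FedGuard's actual mechanism: the coordinate-wise-median heuristic, the `num_tolerated` parameter, and the function name are all assumptions chosen for illustration.

```python
import numpy as np

def filter_and_aggregate(updates: list[np.ndarray],
                         num_tolerated: int = 1) -> np.ndarray:
    """Drop the updates farthest from the coordinate-wise median, then average.

    A generic distance-based filter (not FedGuard's published design):
    poisoned updates tend to lie far from the honest majority, so the
    `num_tolerated` most distant clients are excluded each round.
    """
    stacked = np.stack(updates)                       # (clients, params)
    median = np.median(stacked, axis=0)               # robust reference point
    dists = np.linalg.norm(stacked - median, axis=1)  # distance per client
    keep = np.argsort(dists)[: len(updates) - num_tolerated]
    return stacked[keep].mean(axis=0)

# Example: 4 honest clients plus one scaled, poisoned update.
honest = [np.random.randn(100) * 0.1 for _ in range(4)]
poisoned = np.ones(100) * 10.0
agg = filter_and_aggregate(honest + [poisoned], num_tolerated=1)
print("aggregate norm:", np.linalg.norm(agg))
```

The poisoned update's large distance from the median places it last in the sorted order, so it is excluded before averaging.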
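The selective-encryption theme shared by SenseCrypt and SelectiveShield can likewise be sketched in a few lines: encrypt only the most sensitive coordinates of a client update and protect the remainder with differential-privacy noise. Everything here is a hedged illustration, not either paper's actual design: the `he_encrypt` stub stands in for a real HE library (e.g., TenSEAL or Pyfhel), and the magnitude-based sensitivity score, 10% encryption ratio, and noise scale are hypothetical choices.

```python
import numpy as np

def he_encrypt(values: np.ndarray) -> list:
    """Placeholder for a real HE scheme; pretend each value is a ciphertext."""
    return [("ct", float(v)) for v in values]

def split_update(update: np.ndarray,
                 sensitivity: np.ndarray,
                 encrypt_fraction: float = 0.1,
                 dp_sigma: float = 0.01):
    """Encrypt the most sensitive coordinates; add DP noise to the rest.

    `sensitivity` can be any per-parameter score; the fraction and noise
    scale are illustrative knobs, not values from the papers.
    """
    k = max(1, int(encrypt_fraction * update.size))
    mask = np.zeros(update.size, dtype=bool)
    mask[np.argsort(sensitivity)[-k:]] = True          # top-k coordinates

    encrypted_part = he_encrypt(update[mask])          # HE-protected
    plaintext_part = update[~mask] + np.random.normal( # DP-protected
        0.0, dp_sigma, size=update.size - k)
    return encrypted_part, plaintext_part, mask

# Example: a 1000-parameter update, with magnitude as a crude sensitivity proxy.
update = np.random.randn(1000)
enc, plain, mask = split_update(update, np.abs(update))
print(f"{len(enc)} parameters encrypted, {plain.size} sent with DP noise")
```

The appeal of this split is cost: full homomorphic encryption of a high-dimensional model is expensive, so restricting ciphertexts to a small, sensitivity-ranked subset keeps overhead low while the noisy plaintext portion retains a differential-privacy guarantee.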