The field of federated learning is moving toward more efficient and private methods for collaborative model training. Recent research has focused on improving the communication efficiency and robustness of federated learning algorithms, particularly in scenarios with non-IID data distributions. Notably, approaches such as decentralized cross-silo federated learning and soft-label caching have shown promising results in reducing communication costs while maintaining model accuracy. There is also growing interest in applying federated learning to medical imaging tasks, where privacy preservation is crucial, and researchers are exploring new frameworks for federated data ecosystems that prioritize data sovereignty, governance, and interoperability. Overall, the field is advancing toward more secure, efficient, and scalable federated learning solutions.

Noteworthy papers include:

- DeSIA, which proposes an attribute inference attack framework against fixed aggregate statistics.
- UnifyFL, which develops a trust-based cross-silo federated learning framework.
- Soft-Label Caching and Sharpening for Communication-Efficient Federated Distillation, which achieves up to a 50% reduction in communication costs.
- A Unified Benchmark of Federated Learning with Kolmogorov-Arnold Networks for Medical Imaging, which demonstrates the effectiveness of KANs in federated environments.
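To make the soft-label caching idea concrete, the sketch below shows one plausible mechanism for communication-efficient federated distillation: clients share softened class probabilities (soft labels) on shared data, the server re-accepts an upload only when a client's labels drift beyond a tolerance (caching), and labels are sharpened with a temperature before aggregation. This is a minimal illustration under assumed design choices, not the cited paper's actual algorithm; names such as `SoftLabelCache`, the L1 drift test, and the tolerance `tol` are hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sharpen(probs, temperature=0.5):
    # Temperature sharpening: raise to 1/T and renormalize.
    # T < 1 concentrates mass on the dominant class without changing the argmax.
    p = probs ** (1.0 / temperature)
    return p / p.sum(axis=-1, keepdims=True)

class SoftLabelCache:
    """Server-side cache (hypothetical): a client's soft labels are
    re-uploaded only when they drift beyond `tol` in mean absolute
    difference from the cached copy, saving communication rounds."""

    def __init__(self, tol=0.05):
        self.tol = tol
        self.cache = {}   # client_id -> cached soft-label array
        self.uploads = 0  # counts uploads that actually cost bandwidth

    def submit(self, client_id, soft_labels):
        cached = self.cache.get(client_id)
        if cached is None or np.abs(soft_labels - cached).mean() > self.tol:
            self.cache[client_id] = soft_labels
            self.uploads += 1  # cache miss: transmit fresh labels
        return self.cache[client_id]  # cache hit: reuse stored labels
```

In this sketch, small round-to-round fluctuations in a client's predictions are absorbed by the cache, so only meaningful updates consume bandwidth; this is one way a large communication reduction, of the kind the paper reports, could arise.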