The field of federated learning is moving toward addressing the challenges of communication overhead, data heterogeneity, and privacy preservation. Researchers are exploring strategies such as explanation-guided pruning, graph federated learning, and hierarchical federated learning to improve the efficiency and effectiveness of federated models. Noteworthy papers include FedX, which proposes a pruning strategy that reduces communication overhead, and GFed-PP, which adapts to different privacy requirements while improving recommendation performance. Other notable works include HeteRo-Select, which maintains high performance and ensures long-term training stability, and SHeRL-FL, which combines split learning with hierarchical model aggregation to reduce communication overhead.
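To make the hierarchical-aggregation idea concrete, below is a minimal sketch of two-level federated averaging in plain Python: each edge server averages its clients' parameter vectors weighted by sample count, and a cloud server then averages the edge models. The function names (`fed_avg`, `hierarchical_aggregate`) and the data layout are illustrative assumptions, not the actual SHeRL-FL implementation.

```python
def fed_avg(updates, sizes):
    """FedAvg-style weighted average of parameter vectors (lists of floats),
    with each update weighted by its local sample count."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(dim)]

def hierarchical_aggregate(groups):
    """Two-level aggregation: each group (edge server) averages its clients,
    then the cloud averages the edge models, weighted by group sample counts.
    Each group is a list of (params, n_samples) pairs."""
    edge_models, edge_sizes = [], []
    for clients in groups:
        params = [p for p, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(fed_avg(params, sizes))
        edge_sizes.append(sum(sizes))
    return fed_avg(edge_models, edge_sizes)
```

Because both levels weight by sample counts, the two-level result matches a flat weighted average over all clients; the communication saving comes from each edge sending the cloud one model instead of one per client.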