The field of distributed learning is moving toward more efficient and resilient algorithms that can cope with the demands of large-scale distributed systems. A central focus is mitigating the straggler problem, in which slow nodes hold back the entire training process. New approaches, such as unbalanced update mechanisms and gradient coding schemes, are being proposed to address this issue. There is also growing interest in more efficient gradient compression methods, such as Top-K compressors, to reduce communication overhead. Together, these advances stand to improve the scalability and efficiency of distributed learning systems.

Notable papers in this area include Towards Straggler-Resilient Split Federated Learning, which proposes a straggler-resilient algorithm for split federated learning, and An All-Reduce Compatible Top-K Compressor, which introduces a compressor that aligns sparsity patterns across nodes, enabling efficient All-Reduce operations.
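To make the All-Reduce compatibility concrete, here is a minimal sketch of the idea behind an aligned Top-K compressor. The alignment rule used here (a designated worker's Top-K index set shared with everyone, simulated in a single process with NumPy) is an assumption for illustration, not the cited paper's actual algorithm; the point is only that once all workers compress onto one shared index layout, their fixed-size buffers can be summed with a plain All-Reduce instead of exchanging differing index sets.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k, num_workers = 16, 4, 3

# Per-worker gradients (simulated in one process for illustration).
grads = [rng.normal(size=dim) for _ in range(num_workers)]

def topk_indices(vec, k):
    """Indices of the k largest-magnitude entries of a flat gradient."""
    return np.argpartition(np.abs(vec), -k)[-k:]

# Conventional Top-K: each worker picks its own indices. The index sets
# generally differ, so aggregation needs an All-Gather of (index, value)
# pairs rather than a plain All-Reduce.
local_idx = [topk_indices(g, k) for g in grads]

# Aligned Top-K (hypothetical alignment rule, not the paper's): worker 0's
# index set is shared, and every worker compresses onto those positions.
shared_idx = local_idx[0]
compressed = np.stack([g[shared_idx] for g in grads])  # shape (workers, k)

# Every worker's buffer now has the same layout, so summing the k-length
# buffers is exactly what a ring All-Reduce would compute.
aggregated_values = compressed.sum(axis=0)

recovered = np.zeros(dim)
recovered[shared_idx] = aggregated_values / num_workers
print("shared indices:", np.sort(shared_idx))
print("averaged sparse gradient:", np.round(recovered, 3))
```

By contrast, conventional Top-K keeps more locally relevant coordinates per worker, but the mismatched index sets break the fixed-size, position-aligned buffers that efficient All-Reduce implementations rely on, which is the gap an aligned compressor is designed to close.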