The field of reinforcement learning is moving towards more robust and efficient methods for handling offline data. Recent research has focused on coping with corrupted data, improving exploration in discrete state-space environments, and accelerating model-based reinforcement learning. One notable direction is the use of diffusion models to recover corrupted datasets in offline RL, which has shown promise in improving both data quality and the robustness of the resulting policies. Another is the development of modular, decoupled training methods, which can improve sample efficiency and final performance. There is also growing interest in scaling offline RL algorithms to large and complex datasets, with techniques such as horizon reduction and weight normalization showing potential.

Noteworthy papers in this area include ADG, which proposes a diffusion-based approach to dataset recovery; Modular Diffusion Policy Training, which introduces a modular training scheme that decouples guidance from diffusion; and Horizon Reduction Makes RL Scalable, which demonstrates that shortening the effective horizon improves scalability. Together, these advances have the potential to significantly improve the performance and efficiency of offline reinforcement learning algorithms.
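To make the horizon-reduction intuition concrete: value errors in TD learning must propagate across the full trajectory length, and bootstrapping after a fixed number of steps shortens that path. The sketch below is a minimal illustration of this idea using n-step targets on an offline trajectory; it is not the method from Horizon Reduction Makes RL Scalable, and the function name `n_step_targets` and its arguments are illustrative assumptions.

```python
# Minimal sketch (assumed, illustrative) of horizon reduction via n-step
# bootstrapped targets for offline TD learning.
import numpy as np

def n_step_targets(rewards, values, gamma=0.99, n=5):
    """Compute n-step bootstrapped targets for a single trajectory.

    rewards: array of shape [T]     -- rewards r_t from the offline dataset
    values:  array of shape [T + 1] -- current value estimates V(s_t), incl. terminal
    Bootstrapping after at most n steps means errors only need to propagate
    over roughly T / n updates instead of T, which is the intuition behind
    horizon reduction.
    """
    T = len(rewards)
    targets = np.zeros(T)
    for t in range(T):
        horizon = min(n, T - t)
        discounts = gamma ** np.arange(horizon)
        # Discounted n-step return plus bootstrapped value at the cutoff.
        targets[t] = np.dot(discounts, rewards[t:t + horizon]) \
                     + (gamma ** horizon) * values[t + horizon]
    return targets

# Toy usage on a synthetic trajectory.
rng = np.random.default_rng(0)
rewards = rng.normal(size=100)
values = rng.normal(size=101)
print(n_step_targets(rewards, values, n=5)[:3])
```

Smaller n trades off more bias from the bootstrapped value estimate against a shorter credit-assignment path, which is the scalability lever the horizon-reduction line of work exploits.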