The field of reinforcement learning is moving toward more complex and realistic scenarios, with growing attention to goal-conditioned reinforcement learning (GCRL). GCRL lets a single unified policy pursue diverse objectives by conditioning its behavior on a goal, and recent work has explored several facets of the problem: new goal representation methods, improved algorithms, and applications to real-world domains.

One key direction is the design of more efficient and effective goal representations, such as mask-based and dual goal representations, which have shown promising gains in agent performance. A second thread develops new algorithms and techniques, such as Automaton Constrained Q-Learning and Test-Time Graph Search, that can handle complex tasks and safety constraints. Together, these advances could enable GCRL agents to be deployed across a wide range of real-world problems, including robotics, autonomous vehicles, and healthcare.

Notable papers in this area include:

- Automaton Constrained Q-Learning, which proposes an algorithm combining goal-conditioned value learning with automaton-guided reinforcement to handle complex tasks and safety constraints.
- General and Efficient Visual Goal-Conditioned Reinforcement Learning using Object-Agnostic Masks, which introduces a mask-based goal representation that provides object-agnostic visual cues to the agent, enabling efficient learning and superior generalization.