The field of autonomous navigation and control is advancing rapidly, driven by innovations in reinforcement learning, simulation, and real-world policy adaptation. Researchers are exploring new frameworks and techniques to improve the reliability and efficiency of autonomous systems, particularly in challenging settings such as unstructured terrain and dynamic environments. One key direction is the development of sim-to-real frameworks that transfer policies from simulation to real-world deployment, addressing the notorious sim-to-real gap. Another is the design of robust, adaptive control policies that cope with varying morphologies, action spaces, and environmental conditions.

Noteworthy papers in this area include: Sim2Dust, which presents a complete sim-to-real framework for dynamic waypoint tracking on granular media; No More Blind Spots, which introduces a learning framework for vision-based omnidirectional bipedal locomotion; and Robot Trains Robot, which proposes a framework for efficient long-term real-world humanoid training with minimal human intervention. Categorical Policies and Beyond Fixed Morphologies are also highlighted for their contributions to multimodal policy learning and morphological generalization, respectively.
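To make the sim-to-real gap concrete: one widely used mitigation (a general technique, not the specific method of any paper above) is domain randomization, in which simulator parameters such as friction, mass, and sensor noise are resampled during training so the learned policy generalizes across the range of dynamics it may meet in the real world. A minimal sketch; the `SimConfig` fields and parameter ranges here are hypothetical, standing in for whatever a particular simulator exposes:

```python
import random
from dataclasses import dataclass


@dataclass
class SimConfig:
    # Hypothetical physics parameters; real simulators expose many more.
    friction: float
    mass_scale: float
    sensor_noise_std: float


def randomize(rng: random.Random) -> SimConfig:
    """Sample one simulator configuration per training episode."""
    return SimConfig(
        friction=rng.uniform(0.4, 1.2),    # e.g. granular media vary widely
        mass_scale=rng.uniform(0.8, 1.2),  # payload / wear uncertainty
        sensor_noise_std=rng.uniform(0.0, 0.05),
    )


def sample_training_configs(episodes: int, seed: int = 0) -> list:
    """Each episode would run under a freshly randomized configuration,
    so the policy never overfits to one fixed set of dynamics."""
    rng = random.Random(seed)
    return [randomize(rng) for _ in range(episodes)]


configs = sample_training_configs(100)
```

In a full training loop, each sampled `SimConfig` would parameterize the physics engine for one rollout before the policy update.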