The field of robotics is seeing rapid progress in learning and control, with a focus on methods that are more efficient, adaptive, and generalizable. Data-driven approaches, such as learning from demonstrations, imitation learning, and reinforcement learning, have proven central to enabling robots to perform complex tasks and interact effectively with their environments. Integrating computer vision with machine learning has improved performance on object manipulation, grasping, and navigation, and researchers are increasingly turning to video generation and video-based learning to strengthen robot policy learning and improve robustness to changes in the environment. More sophisticated control methods, including model predictive control and robust tube model predictive control, have also enabled robots to perform agile, dynamic maneuvers.

Noteworthy papers in this area include "Video Generators are Robot Policies", which proposes a modular framework for learning robot policies from video generation, and "Aerobatic maneuvers in insect-scale flapping-wing aerial robots via deep-learned robust tube model predictive control", which demonstrates insect-like flight agility and robustness in a small flapping-wing robot. Together, these advances stand to influence applications ranging from robotics and automation to healthcare and transportation.
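To make the control side concrete, the receding-horizon idea behind model predictive control can be sketched in a few lines. The example below is a minimal, illustrative unconstrained MPC for a 1-D double integrator: at each step it solves a finite-horizon quadratic tracking problem by stacking the dynamics into a least-squares system, applies only the first input, and repeats. The dynamics, horizon, and cost weights are assumptions chosen for demonstration; they are unrelated to the deep-learned robust tube MPC used in the flapping-wing work cited above.

```python
import numpy as np

# Illustrative assumption: 1-D double integrator, state = [position, velocity].
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                    # prediction horizon (assumed)
Q = np.diag([10.0, 1.0])  # state cost weights (assumed)
R = np.array([[0.1]])     # input cost weight (assumed)

def mpc_step(x0):
    """Solve the unconstrained finite-horizon problem in batch form
    and return only the first input (the receding-horizon step)."""
    n, m = A.shape[0], B.shape[1]
    # Prediction matrices: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j
    F = np.zeros((N * n, n))
    G = np.zeros((N * n, N * m))
    Ak = np.eye(n)
    for k in range(N):
        Ak = Ak @ A
        F[k*n:(k+1)*n, :] = Ak
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # Minimize (F x0 + G u)' Qbar (F x0 + G u) + u' Rbar u over u.
    H = G.T @ Qbar @ G + Rbar
    g = G.T @ Qbar @ F @ x0
    u = np.linalg.solve(H, -g)
    return u[:m]

# Regulate the state toward the origin from x = [1, 0].
x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + (B @ mpc_step(x)).ravel()
```

A robust tube variant would wrap this nominal controller with an ancillary feedback law that keeps the true (disturbed) state inside a tube around the nominal prediction; that machinery is omitted here for brevity.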