The field of robot learning and control is moving toward more generalizable and flexible approaches. Researchers are exploring methods that let robots learn from diverse demonstrations, adapt to new situations, and perform complex tasks. One notable direction is the use of large language models (LLMs) as numerical optimizers for robot self-improvement, enabling iterative refinement of robot behavior. Another significant trend is hierarchical planning frameworks that integrate kinematic skills with closed-loop motor controllers, so that diverse pre-learned skills can be composed at plan time. Noteworthy papers include CIVIL, a causal and intuitive visual imitation learning approach in which humans indicate task-relevant features so robots can learn from them; DeCo, a task decomposition and skill composition framework for zero-shot generalization in long-horizon 3D manipulation; and LLM-iTeach, an interactive imitation learning framework that uses an LLM as an interactive teacher to improve agent performance. LangWBC and SAS-Prompt further demonstrate the potential of LLMs in whole-body control and in numerical optimization for robot self-improvement.
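The LLM-as-numerical-optimizer idea can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in rather than the method of any cited paper: `evaluate` is a toy reward with a known optimum, and `llm_propose` is a placeholder for what would, in a real system, be a prompt to an LLM containing the optimization history (past parameter/reward pairs) and a request for a better candidate.

```python
import random

def evaluate(gain):
    # Toy stand-in for a robot rollout: reward peaks at gain = 2.5.
    return -(gain - 2.5) ** 2

def llm_propose(history):
    # Placeholder for an LLM call. A real system would format `history`
    # into a prompt ("here are past (gain, reward) pairs; suggest a
    # better gain") and parse the model's reply. Here we mimic the
    # behavior with a perturbation of the best candidate so far.
    best_gain, _ = max(history, key=lambda h: h[1])
    return best_gain + random.uniform(-0.5, 0.5)

def self_improve(initial_gain, iterations=30, seed=0):
    # Iterative self-improvement loop: propose, evaluate, record.
    random.seed(seed)
    history = [(initial_gain, evaluate(initial_gain))]
    for _ in range(iterations):
        candidate = llm_propose(history)
        history.append((candidate, evaluate(candidate)))
    return max(history, key=lambda h: h[1])

best_gain, best_reward = self_improve(initial_gain=0.0)
```

The loop keeps the full history rather than only the current best, mirroring how such systems feed accumulated experience back into the model's context at each iteration.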