The field of robot planning is shifting toward the integration of Large Language Models (LLMs) to improve planning capabilities. Researchers are exploring the use of LLMs for generating planning domains, adapting to new environments, and improving cross-task generalization. In particular, frameworks that leverage LLMs to produce symbolic problem-plan pairs, perform context-aware code adaptation, and carry out vision-grounded replanning are advancing the field. These approaches enable robots to learn from experience, adapt to novel environments, and execute tasks more reliably. Noteworthy papers include Plan2Evolve, which proposes an LLM self-evolving framework that improves planning capability; Memory Transfer Planning, which introduces a framework for LLM-driven, context-aware code adaptation; ViReSkill, which pairs vision-grounded replanning with a skill memory for accumulation and reuse; SDA-PLANNER, which enables an adaptive planning paradigm with state-dependency-aware and error-aware mechanisms; and A Systematic Study of Large Language Models for Task and Motion Planning, which provides insights into the planning capabilities of LLMs in robotics tasks.
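To make the shared plan-execute-replan-and-reuse pattern behind these systems concrete, the sketch below shows a minimal skill-memory loop in Python. It is an illustrative simplification, not the method of any cited paper: the names SkillMemory, query_llm_planner, and execute_plan are hypothetical placeholders, and a real system would use embedding-based retrieval, an actual LLM call, and vision-grounded execution feedback.

```python
# Minimal sketch of an accumulate-and-reuse planning loop, in the spirit of the
# skill-memory approaches discussed above. All names are illustrative
# placeholders, not APIs from the cited papers.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SkillMemory:
    """Stores plans that executed successfully, keyed by task description."""
    skills: dict = field(default_factory=dict)

    def retrieve(self, task: str) -> Optional[list]:
        # Naive exact-match lookup; real systems would use similarity search.
        return self.skills.get(task)

    def store(self, task: str, plan: list) -> None:
        self.skills[task] = plan


def query_llm_planner(task: str, observation: str) -> list:
    """Placeholder for an LLM call that returns a step-by-step plan."""
    return [f"locate target for: {task}", "grasp object", "place object"]


def execute_plan(plan: list) -> bool:
    """Placeholder for robot execution; returns True on success."""
    return True


def solve(task: str, observation: str, memory: SkillMemory) -> bool:
    # Reuse a previously successful plan when one exists for this task.
    plan = memory.retrieve(task) or query_llm_planner(task, observation)
    success = execute_plan(plan)
    if not success:
        # Replan from the current observation and retry once.
        plan = query_llm_planner(task, observation)
        success = execute_plan(plan)
    if success:
        # Accumulate the successful plan as a reusable skill.
        memory.store(task, plan)
    return success


if __name__ == "__main__":
    memory = SkillMemory()
    solve("put the red block in the bin", "tabletop scene", memory)
    print(memory.skills)
```

The design point this sketch captures is that successful executions are cached and preferred over fresh LLM queries, while failures trigger replanning from the current observation, which is the mechanism the surveyed work uses to improve execution reliability over time.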