The field of robotics is moving towards more autonomous systems capable of executing complex tasks. Recent developments focus on integrating large language models, knowledge graphs, and vision-language-action frameworks to improve robot reasoning, planning, and natural language interaction. Noteworthy papers include Learn from What We HAVE, which introduces a History-Aware VErifier to disambiguate uncertain scenarios online, and ConceptBot, which combines Large Language Models and Knowledge Graphs to generate feasible and risk-aware plans. Additionally, Robix proposes a unified model for robot interaction, reasoning, and planning, while FPC-VLA introduces a supervisor framework for failure prediction and correction. These advancements aim to improve the reliability and generalization of robotic systems in unstructured environments.
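To make the LLM-plus-knowledge-graph idea behind systems such as ConceptBot more concrete, the sketch below illustrates the general pattern of a language model proposing action steps while a symbolic knowledge graph checks them for feasibility and risk. It is a minimal illustration under assumed data structures, not the ConceptBot method: the names SCENE_GRAPH, ACTION_REQUIREMENTS, and propose_plan are hypothetical placeholders, and the LLM call is stubbed out.

```python
# Minimal sketch of a generic "LLM proposes, knowledge graph verifies" planning loop.
# All names and data here are hypothetical illustrations, not taken from ConceptBot.

from typing import Dict, List, Set, Tuple

# Hypothetical knowledge graph: object -> properties known about it.
SCENE_GRAPH: Dict[str, Set[str]] = {
    "mug": {"graspable", "fragile"},
    "knife": {"graspable", "sharp"},
    "table": {"flat_surface"},
}

# Hypothetical action schema: action -> properties the target object must have.
ACTION_REQUIREMENTS: Dict[str, Set[str]] = {
    "pick": {"graspable"},
    "place_on": {"flat_surface"},
}

# Properties that flag a step as risky and needing extra care.
RISK_PROPERTIES: Set[str] = {"sharp", "fragile"}


def propose_plan(instruction: str) -> List[Tuple[str, str]]:
    """Stand-in for an LLM call that turns an instruction into (action, object) steps."""
    # A real system would prompt a language model here; this fixed output is a placeholder.
    return [("pick", "mug"), ("place_on", "table")]


def verify_plan(plan: List[Tuple[str, str]]) -> List[str]:
    """Check each step against the knowledge graph; return human-readable verdicts."""
    verdicts = []
    for action, obj in plan:
        props = SCENE_GRAPH.get(obj, set())
        missing = ACTION_REQUIREMENTS.get(action, set()) - props
        risks = props & RISK_PROPERTIES
        if missing:
            verdicts.append(f"INFEASIBLE: {action}({obj}) lacks {sorted(missing)}")
        elif risks:
            verdicts.append(f"RISKY: {action}({obj}) involves {sorted(risks)}")
        else:
            verdicts.append(f"OK: {action}({obj})")
    return verdicts


if __name__ == "__main__":
    plan = propose_plan("put the mug on the table")
    for verdict in verify_plan(plan):
        print(verdict)
```

The appeal of this proposer/verifier split is that the generative component can remain general-purpose while the verification side encodes explicit, auditable constraints, which is the kind of grounding these papers pursue to improve reliability in unstructured environments.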