The field of artificial intelligence is moving toward more sophisticated, human-like interaction, with a particular focus on natural-language-driven route planning and robot control. Recent work integrates large language models (LLMs) into these systems to support more efficient and effective decision-making. One key direction uses LLMs to improve route planning by accounting for user preferences and constraints. Another strand develops systems that understand and interpret human behavior, such as object ownership and gesture recognition, to enable more seamless human-robot interaction. Together, these advances have the potential to reshape the way we interact with technology and with each other.

Notable papers in this area include:

- LLMAP, which introduces a novel LLM-Assisted route Planning system that employs an LLM-as-Parser to comprehend natural language and extract user preferences.
- KGTB, which proposes a Knowledge Graph Tokenization approach for behavior-aware generative next-POI recommendation.
- ActOwL, which enables robots to actively generate and ask ownership-related questions to users.
- Multi-Robot Task Planning, which leverages LLMs and spatial concepts to decompose natural-language instructions into subtasks and allocate them to multiple robots.
- GestOS, which interprets hand gestures semantically and dynamically distributes tasks across multiple robots based on their capabilities and current state.
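To make the LLM-as-Parser idea concrete, the sketch below shows the kind of structured preference extraction a system like LLMAP performs on a natural-language route request. All names here (`RouteRequest`, `parse_request`) are hypothetical, and the rule-based extraction is only a stand-in for the LLM call a real system would make, typically by prompting the model to emit structured output.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RouteRequest:
    """Structured preferences extracted from a free-form request."""
    stops: List[str] = field(default_factory=list)
    avoid: List[str] = field(default_factory=list)
    deadline: Optional[str] = None

def parse_request(text: str) -> RouteRequest:
    """Stand-in for an LLM-as-Parser: pull stops, constraints, and a
    deadline out of a natural-language route request. A real system
    would prompt an LLM to return this structure as JSON; the naive
    keyword rules below only illustrate the target schema."""
    req = RouteRequest()
    lowered = text.lower()
    if "avoid highways" in lowered:
        req.avoid.append("highways")
    if "by" in lowered:
        # naive deadline grab: whatever follows the last "by"
        req.deadline = lowered.rsplit("by", 1)[1].strip().rstrip(".")
    for place in ("pharmacy", "grocery store", "bank"):
        if place in lowered:
            req.stops.append(place)
    return req

req = parse_request(
    "Stop at the pharmacy and the grocery store, "
    "avoid highways, be home by 6pm."
)
print(req.stops)     # ['pharmacy', 'grocery store']
print(req.avoid)     # ['highways']
print(req.deadline)  # 6pm
```

The point of the schema is that downstream route optimization never touches raw text: it consumes only the typed fields, so the LLM's job is confined to parsing.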