Robotics is increasingly integrating large language models (LLMs) to automate complex tasks more efficiently and effectively. This shift is driven by the need for robots to understand and execute high-level instructions and to report their progress back to humans. Recent research has focused on architectures that combine LLMs with automated planning and symbolic reasoning to achieve this goal. Notable papers in this area include:
- Defining and Monitoring Complex Robot Activities via LLMs and Symbolic Reasoning, which introduces a general architecture for integrating LLMs with automated planning and symbolic reasoning.
- AD-VF: LLM-Automatic Differentiation Enables Fine-Tuning-Free Robot Planning from Formal Methods Feedback, which proposes a fine-tuning-free framework for refining LLM-driven planning using formal verification feedback.
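The common pattern behind these works, an LLM proposing a plan and a symbolic verifier checking it and feeding errors back, can be illustrated with a minimal sketch. This is not the architecture of either paper: the "LLM" is a hypothetical stub, and the action model, state variables, and function names are all illustrative assumptions.

```python
# Toy plan-verify-refine loop: an "LLM" proposes action sequences and a
# symbolic verifier simulates them, returning feedback on failure.
# All names (WORLD, ACTIONS, stub_llm, ...) are illustrative, not from
# either paper's implementation.

WORLD = {"door_open": False, "holding_key": False}

# Symbolic action model: preconditions and effects for each action.
ACTIONS = {
    "pick_key":  {"pre": {}, "eff": {"holding_key": True}},
    "open_door": {"pre": {"holding_key": True}, "eff": {"door_open": True}},
}

def verify(plan, state):
    """Symbolically simulate the plan; return (goal_reached, feedback)."""
    state = dict(state)
    for step in plan:
        spec = ACTIONS.get(step)
        if spec is None:
            return False, f"unknown action: {step}"
        for var, val in spec["pre"].items():
            if state.get(var) != val:
                return False, f"precondition {var}={val} unmet before {step}"
        state.update(spec["eff"])
    if state.get("door_open"):
        return True, "ok"
    return False, "goal not reached"

def stub_llm(feedback):
    """Stand-in for an LLM planner: proposes a naive plan first, then
    repairs it when the verifier reports the missing precondition."""
    if feedback and "holding_key" in feedback:
        return ["pick_key", "open_door"]
    return ["open_door"]  # naive first attempt

def plan_with_feedback(max_iters=3):
    """Iterate until the verifier accepts a plan (fine-tuning-free:
    only the prompt/feedback changes, never the model)."""
    feedback = None
    for _ in range(max_iters):
        plan = stub_llm(feedback)
        ok, feedback = verify(plan, WORLD)
        if ok:
            return plan
    return None

print(plan_with_feedback())  # ['pick_key', 'open_door']
```

The key design point shared by such frameworks is that the verifier's feedback is a textual error message the LLM can condition on, so planning improves across iterations without updating model weights.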