The field of autonomous systems is moving towards greater reliance on large language models (LLMs) and runtime verification to ensure safety and trustworthiness. Researchers are exploring the integration of LLMs with formal methods, aiming to combine strong correctness guarantees with the flexibility to handle uncertainty. This shift is driven by the challenges that learning-enabled components and open environments pose for autonomous systems. Notable papers in this area include:
- Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems, which argues for a symbiotic integration of runtime verification and LLMs.
- SVBRD-LLM: Self-Verifying Behavioral Rule Discovery for Autonomous Vehicle Identification, which proposes a framework for discovering and verifying behavioral rules for autonomous vehicles using LLMs.
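To make the "symbiotic integration" concrete, the sketch below shows one common shape such an integration can take: a runtime monitor that vets actions proposed by an LLM-based planner against explicit safety rules before they reach the actuators. This is an illustrative sketch only, not the method of either cited paper; the rule names, action format, and numeric thresholds are all invented for the example.

```python
# Illustrative runtime-verification sketch (not taken from the cited papers):
# a monitor checks each LLM-proposed action against declared safety rules
# and reports any violations before the action is executed.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SafetyMonitor:
    # Each rule maps a proposed action (a dict) to True (safe) or False.
    rules: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def add_rule(self, name: str, check: Callable[[dict], bool]) -> None:
        self.rules[name] = check

    def check(self, action: dict) -> list[str]:
        """Return the names of all rules the proposed action violates."""
        return [name for name, ok in self.rules.items() if not ok(action)]

monitor = SafetyMonitor()
# Hypothetical rules for a driving scenario: cap speed, keep a minimum gap.
monitor.add_rule("speed_limit", lambda a: a.get("speed", 0) <= 30)
monitor.add_rule("keep_distance", lambda a: a.get("gap_m", 100) >= 5)

# An action as an LLM planner might emit it (hypothetical schema).
proposed = {"speed": 45, "gap_m": 10}
violations = monitor.check(proposed)
print(violations)  # the speed_limit rule is violated
```

In a fuller system the monitor's verdicts could also be fed back to the LLM as a repair prompt, which is the kind of bidirectional loop the watchdogs-and-oracles framing gestures at.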