Advances in Autonomous System Assurance

Research on autonomous systems is increasingly turning to large language models (LLMs) combined with runtime verification to ensure safety and trustworthiness. By integrating LLMs with formal methods, researchers aim to provide strong guarantees while still handling uncertainty. This shift is driven by the assurance challenges posed by learning-enabled components and open operating environments. Notable papers in this area include:

  • Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems, which argues for a symbiotic integration of runtime verification and LLMs.
  • SVBRD-LLM: Self-Verifying Behavioral Rule Discovery for Autonomous Vehicle Identification, which proposes a framework for discovering and verifying behavioral rules for autonomous vehicles using LLMs.
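To make the runtime-verification side of this pairing concrete, below is a minimal sketch of a monitor that checks a behavioral safety rule over a stream of vehicle states. In an SVBRD-style pipeline the rule itself would be proposed by an LLM and then verified before deployment; here the rule (a speed limit) and all class and field names are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    # Hypothetical observation record; real systems would carry far more signals.
    timestamp: float
    speed: float  # m/s

class SpeedLimitMonitor:
    """Runtime monitor for one simple safety rule: speed <= limit.

    This stands in for the verified, hard-coded checker that would run
    alongside a learning-enabled controller; the LLM-based rule-discovery
    step is out of scope for this sketch.
    """
    def __init__(self, limit: float):
        self.limit = limit
        self.violations: list[VehicleState] = []

    def observe(self, state: VehicleState) -> bool:
        # Return True if the state satisfies the rule; record violations.
        ok = state.speed <= self.limit
        if not ok:
            self.violations.append(state)
        return ok

# Example trace: the last state exceeds the 15 m/s limit.
trace = [VehicleState(0.0, 12.0), VehicleState(1.0, 14.5), VehicleState(2.0, 16.2)]
monitor = SpeedLimitMonitor(limit=15.0)
verdicts = [monitor.observe(s) for s in trace]  # [True, True, False]
```

The appeal of the symbiosis argued for in the papers above is that the monitor stays small and formally checkable, while the LLM contributes the open-ended part: proposing candidate rules from natural-language requirements or observed behavior.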

Sources

Architecting software monitors for control-flow anomaly detection through large language models and conformance checking

Proceedings Seventh International Workshop on Formal Methods for Autonomous Systems

Watchdogs and Oracles: Runtime Verification Meets Large Language Models for Autonomous Systems

SVBRD-LLM: Self-Verifying Behavioral Rule Discovery for Autonomous Vehicle Identification

What Does It Take to Get Guarantees? Systematizing Assumptions in Cyber-Physical Systems
