Safety and Verification in Neurosymbolic Systems

The field of neurosymbolic systems is moving toward more robust and reliable verification methods that ensure safety and compliance with well-defined rules. Recent work focuses on learning latent spaces that separate safe plans from unsafe ones, enabling more efficient and scalable verification; this has improved compliance-prediction accuracy and made it possible to attach probabilistic guarantees to the likelihood of correct verification. There is also growing interest in combining formal methods with deep learning to leverage the strengths of both paradigms. Notable papers in this area include RepV, which introduces a neurosymbolic verifier that learns a latent space in which safe and unsafe plans are linearly separable; pacSTL, which derives PAC-bounded robustness intervals at the specification level; and ScenicProver, a framework for compositional probabilistic verification of learning-enabled systems. A minimal illustration of the latent-space idea follows below.
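To make the latent-space idea concrete, the sketch below trains a linear separator on synthetic plan embeddings and attaches a Hoeffding-style PAC bound to its verification error. Everything here is an illustrative assumption: the synthetic embeddings stand in for an encoder's output, and the bound construction is a generic concentration inequality, not the actual method of RepV or pacSTL.

```python
# Minimal sketch: linear separation of safe/unsafe plan embeddings plus a
# PAC-style error bound. Synthetic data stands in for a learned encoder.
import numpy as np

rng = np.random.default_rng(0)

# Assumption: an encoder has already mapped plans to latent vectors such
# that safe and unsafe plans are (approximately) linearly separable.
n, d = 500, 16
safe = rng.normal(loc=+1.0, scale=0.5, size=(n, d))
unsafe = rng.normal(loc=-1.0, scale=0.5, size=(n, d))
X = np.vstack([safe, unsafe])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = safe, 0 = unsafe

# Fit a linear separator (perceptron updates) in the latent space.
w, b = np.zeros(d), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (yi - pred) * xi
        b += yi - pred

# Held-out calibration set for a PAC-style guarantee (Hoeffding bound):
# with probability >= 1 - delta, the true verification error is at most
# empirical error + sqrt(ln(1/delta) / (2m)).
m = 200
Xc = np.vstack([rng.normal(+1.0, 0.5, (m // 2, d)),
                rng.normal(-1.0, 0.5, (m // 2, d))])
yc = np.concatenate([np.ones(m // 2), np.zeros(m // 2)])
emp_err = np.mean((Xc @ w + b > 0).astype(float) != yc)
delta = 0.05
bound = emp_err + np.sqrt(np.log(1 / delta) / (2 * m))
print(f"empirical error: {emp_err:.3f}, PAC bound (delta={delta}): {bound:.3f}")
```

The design point is that once plans are embedded so that safety is linearly separable, the verifier itself is cheap (a single inner product per plan), and standard concentration bounds on a calibration set yield the kind of probabilistic guarantee the papers above pursue.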

Sources

RepV: Safety-Separable Latent Spaces for Scalable Neurosymbolic Plan Verification

pacSTL: PAC-Bounded Signal Temporal Logic from Data-Driven Reachability Analysis

ScenicProver: A Framework for Compositional Probabilistic Verification of Learning-Enabled Systems
