The field of AI and robotics is moving toward more robust and reliable systems that can operate under uncertainty while providing guarantees of safety and trust. Recent work pursues this through frameworks and methods that address indeterminate satisfaction, decomposition of satisfaction signals, and propagation of trust across a system. Incremental reachability analysis, Boolean interval arithmetic, and subjective logic are among the approaches being explored. These advances have the potential to significantly improve the performance and reliability of AI systems in safety-critical applications.
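To make the "indeterminate satisfaction" idea concrete, below is a minimal sketch of Boolean interval arithmetic: a verdict is tracked as a pair of bounds so that a third, undecided outcome can flow through logical connectives until enough information arrives to resolve it. This is an illustrative sketch, not code from any of the papers; all names are assumptions.

```python
# Three-valued satisfaction via Boolean intervals: a verdict is a pair
# (lo, hi) of bounds; lo=False, hi=True means "indeterminate".
# Illustrative sketch only; names are not taken from any of the papers.
from dataclasses import dataclass

@dataclass(frozen=True)
class BoolInterval:
    lo: bool  # True only if satisfaction is already guaranteed
    hi: bool  # False only if violation is already guaranteed

    def __and__(self, other: "BoolInterval") -> "BoolInterval":
        # Conjunction combines both bounds pointwise.
        return BoolInterval(self.lo and other.lo, self.hi and other.hi)

    def __or__(self, other: "BoolInterval") -> "BoolInterval":
        return BoolInterval(self.lo or other.lo, self.hi or other.hi)

    def __invert__(self) -> "BoolInterval":
        # Negation swaps and flips the bounds.
        return BoolInterval(not self.hi, not self.lo)

    @property
    def indeterminate(self) -> bool:
        return self.hi and not self.lo

TRUE = BoolInterval(True, True)
FALSE = BoolInterval(False, False)
UNKNOWN = BoolInterval(False, True)

assert (TRUE & UNKNOWN).indeterminate   # conjunction stays undecided
assert not (FALSE & UNKNOWN).hi         # violation is already certain
assert (TRUE | UNKNOWN).lo              # satisfaction is already certain
```

The useful property is monotonicity: as uncertainty is removed (for example by refining a reachable set), an interval can only tighten from UNKNOWN toward TRUE or FALSE, never flip between definite verdicts.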
Noteworthy papers include: "Uncertainty Removal in Verification of Nonlinear Systems against Signal Temporal Logic via Incremental Reachability Analysis," which presents a framework for verifying STL specifications under uncertainty; "Human-AI Teaming Under Deception," which demonstrates the use of a collaborative Brain-Computer Interface to protect human-AI teams from AI-induced errors; "Gated Uncertainty-Aware Runtime Dual Invariants for Neural Signal-Controlled Robotics," which presents a framework for real-time neuro-symbolic verification of neural signal-controlled robotics; and "PaTAS: A Parallel System for Trust Propagation in Neural Networks Using Subjective Logic," which introduces a framework for modeling and propagating trust in neural networks.
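As an illustration of the subjective-logic machinery that PaTAS builds on, here is a minimal sketch of the standard trust-discounting operator, which propagates an opinion through a trust relationship: distrust and uncertainty in the source both inflate the uncertainty of the result. The `Opinion` class and `discount` function are illustrative assumptions, not the paper's API.

```python
# Subjective-logic opinions and the standard trust-discounting operator.
# Illustrative sketch; names are assumptions, not PaTAS's actual interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class Opinion:
    belief: float       # evidence for
    disbelief: float    # evidence against
    uncertainty: float  # lack of evidence; belief + disbelief + uncertainty = 1
    base_rate: float    # prior probability absent any evidence

    def expected(self) -> float:
        # Projected probability used for decision making.
        return self.belief + self.base_rate * self.uncertainty

def discount(trust_in_source: Opinion, source_opinion: Opinion) -> Opinion:
    """Propagate an opinion through a trust link: only the believed
    fraction of the source's verdict survives; the rest becomes
    uncertainty in the propagated opinion."""
    b = trust_in_source.belief * source_opinion.belief
    d = trust_in_source.belief * source_opinion.disbelief
    u = (trust_in_source.disbelief + trust_in_source.uncertainty
         + trust_in_source.belief * source_opinion.uncertainty)
    return Opinion(b, d, u, source_opinion.base_rate)

# Example: high trust in a component that is fairly sure the state is safe.
trust = Opinion(0.8, 0.1, 0.1, 0.5)
verdict = Opinion(0.7, 0.1, 0.2, 0.5)
propagated = discount(trust, verdict)          # Opinion(0.56, 0.08, 0.36, 0.5)
print(propagated.expected())                   # 0.74
```

Chaining this operator along a network's computation path is one way a system in this vein can attach a trust opinion, rather than a bare probability, to each output.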