Safety and Uncertainty in Autonomous Systems

The field of autonomous systems is placing increasing emphasis on safety and uncertainty quantification. Researchers are developing techniques to ensure the reliability and robustness of autonomous systems, particularly in safety-critical applications such as robotic surgery, aviation, and operations in hazardous environments. One key direction is the integration of probabilistic models and uncertainty propagation methods to provide formal safety guarantees. Another is the development of calibrated prediction frameworks for safety evaluation, which quantify the confidence of safety predictions and provide reliable statistical bounds. Noteworthy papers in this area include:

  • Robust-Sub-Gaussian Model Predictive Control for Safe Ultrasound-Image-Guided Robotic Spinal Surgery, which characterizes estimation errors as sub-Gaussian noise and builds a robust MPC framework for linear systems around that characterization (a minimal sketch of the constraint-tightening idea appears after this list).
  • How Safe Will I Be Given What I Saw?, which presents a framework for calibrated safety prediction in end-to-end vision-controlled systems, with a calibration mechanism that quantifies the confidence of its safety predictions (a conformal-style calibration sketch follows the first example below).
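
To make the robust-MPC direction concrete, here is a minimal sketch of sub-Gaussian constraint tightening on a toy 1-D linear system. The dynamics, the sub-Gaussian parameter sigma, the risk level delta, and the one-step controller are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: tighten a state constraint by a sub-Gaussian tail margin so that
# the true state satisfies the original bound with high probability.
# The toy dynamics and all numbers are assumptions for illustration.
import numpy as np

def subgaussian_margin(sigma: float, delta: float) -> float:
    """Margin t with P(|e| > t) <= delta for a sigma-sub-Gaussian error e,
    from the tail bound P(|e| > t) <= 2 * exp(-t**2 / (2 * sigma**2))."""
    return sigma * np.sqrt(2.0 * np.log(2.0 / delta))

# Toy 1-D linear system x_next = a*x + b*u with state bound |x| <= x_max.
a, b = 1.0, 0.5
x_max = 1.0
sigma = 0.05   # assumed sub-Gaussian parameter of the estimation error
delta = 0.01   # acceptable per-step violation probability

margin = subgaussian_margin(sigma, delta)
x_max_tight = x_max - margin   # constraint tightened by the error margin

# One-step stand-in for the MPC: steer the nominal state toward 0 while
# respecting the *tightened* bound, so the true state respects the
# original bound with probability at least 1 - delta.
x_hat = 0.9                                 # current state estimate
u = float(np.clip(-a * x_hat / b, -1.0, 1.0))
x_next_nominal = a * x_hat + b * u
assert abs(x_next_nominal) <= x_max_tight
print(f"margin={margin:.3f}, tightened bound={x_max_tight:.3f}, u={u:.3f}")
```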
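
Similarly, a minimal sketch of the calibrated-prediction idea using split conformal prediction: an uncalibrated safety score is turned into prediction sets with a distribution-free coverage guarantee. The logistic score, the synthetic calibration data, and the miscoverage level alpha are assumptions for illustration; the paper's pipeline operates on image-controlled systems.

```python
# Sketch: split conformal calibration of a binary safe/unsafe predictor.
# The model and data here are synthetic stand-ins, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def safety_score(x):
    """Stand-in for a learned model's predicted probability of 'safe'."""
    return 1.0 / (1.0 + np.exp(-x))

# Held-out calibration set: 1-D features, true safe (1) / unsafe (0) labels.
x_cal = rng.normal(size=500)
y_cal = (x_cal + 0.3 * rng.normal(size=500) > 0).astype(int)

# Nonconformity score: 1 minus the probability assigned to the true label.
p_safe = safety_score(x_cal)
scores = np.where(y_cal == 1, 1.0 - p_safe, p_safe)

alpha = 0.1   # target miscoverage: sets cover the truth >= 90% of the time
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_set(x):
    """Labels whose nonconformity falls below the calibrated threshold."""
    p = safety_score(x)
    return [label for label, s in ((1, 1.0 - p), (0, p)) if s <= qhat]

print(f"qhat={qhat:.3f}", prediction_set(1.5), prediction_set(0.0))
```

The guarantee is marginal: over fresh draws from the same distribution, the returned set contains the true safe/unsafe outcome with probability at least 1 - alpha, regardless of how poorly the underlying score is calibrated.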

Sources

Robust-Sub-Gaussian Model Predictive Control for Safe Ultrasound-Image-Guided Robotic Spinal Surgery

Runtime Verification for LTL in Stochastic Systems

Autonomous Air-Ground Vehicle Operations Optimization in Hazardous Environments: A Multi-Armed Bandit Approach

How Safe Will I Be Given What I Saw? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy

Predictive Uncertainty for Runtime Assurance of a Real-Time Computer Vision-Based Landing System

Vision-driven River Following of UAV via Safe Reinforcement Learning using Semantic Dynamics Model

Feedback stabilization of a nanoparticle at the intensity minimum of an optical double-well potential
