Advances in Explainable AI for Maritime Operations

The field of maritime operations is seeing significant advances in explainable AI (XAI). Researchers are focusing on transparent, trustworthy models that provide actionable insights and enhance safety in critical operations. A key direction is the integration of XAI techniques such as Grad-CAM and LIME to improve the interpretability of defect-detection systems. There is also a growing emphasis on generalizable, model-agnostic explanation frameworks that deliver highly faithful explanations. Noteworthy papers in this regard include:

  • A study that introduces an adaptive framework for welding defect detection, which systematically evaluates various pre-trained architectures and transfer learning strategies to identify the best-performing model.
  • A proposal for a unified explanation framework that establishes a ground truth for objective evaluation and optimization of XAI methods.
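To make the Grad-CAM technique mentioned above concrete, the sketch below computes a Grad-CAM heatmap from a convolutional layer's activation maps and the gradients of the target score with respect to them. It is a minimal NumPy illustration of the published formula (channel weights by global-average-pooled gradients, then a ReLU-ed weighted sum); the random arrays stand in for a real CNN's tensors and are not from any of the cited papers.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations, gradients: arrays of shape (K, H, W), where K is the
    number of channels and gradients holds d(score)/d(activations).
    """
    # Channel importance: global-average-pool the gradients over space
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels of 8x8 feature maps (placeholders for CNN tensors)
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (8, 8)
```

In a real defect-detection pipeline the heatmap would be upsampled to the input resolution and overlaid on the weld image to show which regions drove the prediction.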

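The idea of objectively evaluating XAI methods can be illustrated with a standard deletion-style faithfulness check (a generic metric, not the DeepFaith method itself): progressively zero out the most-attributed input features and watch how quickly the model's score drops. A faithful attribution produces a steep drop, so a lower average score over the deletion curve indicates a better explanation. The toy linear model and attribution here are assumptions for illustration.

```python
import numpy as np

def deletion_score(predict, x, attribution, steps=10):
    """Average model score as top-attributed features are deleted.

    Lower is better: a faithful attribution removes the truly important
    features first, collapsing the score early.
    """
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    x_flat = x.ravel().copy()
    scores = [predict(x_flat)]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_flat[order[i:i + chunk]] = 0.0           # delete a chunk of features
        scores.append(predict(x_flat))
    return float(np.mean(scores))

# Toy linear model: score = w . x, with the exact attribution w * x
w = np.array([3.0, 1.0, 0.5, 0.1])
x = np.ones(4)
predict = lambda v: float(w @ v)
good = deletion_score(predict, x, w * x)    # correct importance ranking
bad = deletion_score(predict, x, -(w * x))  # reversed ranking
print(good < bad)  # True
```

A ground-truth setup like the one the unified framework proposes would replace the hand-built linear model with a model whose true feature importances are known by construction, so metrics like this one can be validated rather than merely compared.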
Sources

Advancing Welding Defect Detection in Maritime Operations via Adapt-WeldNet and Defect Detection Interpretability Analysis

Towards Efficient Certification of Maritime Remote Operation Centers

Foundations of Interpretable Models

DeepFaith: A Domain-Free and Model-Agnostic Unified Framework for Highly Faithful Explanations
