The field of maritime operations is seeing significant advances in explainable AI (XAI). Researchers are building more transparent and trustworthy models that provide actionable insights and enhance safety in critical operations. A key direction is the integration of XAI techniques, such as Grad-CAM and LIME, to make defect detection systems interpretable. There is also a growing emphasis on generalizable, model-agnostic explanation frameworks whose explanations remain faithful to the underlying model. Noteworthy papers in this regard include:
- A study that introduces an adaptive framework for welding defect detection, which systematically evaluates various pre-trained architectures and transfer learning strategies to identify the best-performing model.
- A proposal for a unified explanation framework that establishes a ground truth for objective evaluation and optimization of XAI methods.
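To make the model-agnostic idea concrete, the sketch below shows the core of a LIME-style local explanation in plain NumPy: perturb the input, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature attributions. The `black_box` classifier, the Gaussian perturbation scheme, and all parameter values are illustrative assumptions, not details from the papers above.

```python
import numpy as np

def black_box(X):
    # Stand-in for a defect classifier (illustrative only): probability
    # driven mostly by feature 0, with a weak nonlinear term in feature 1.
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def lime_explain(predict, x, n_samples=2000, sigma=0.5, seed=0):
    """Fit a weighted linear surrogate around x (the core idea of LIME);
    the returned coefficients are local per-feature attributions."""
    rng = np.random.default_rng(seed)
    # Sample perturbations around the instance being explained.
    X = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = predict(X)
    # Proximity kernel: perturbations closer to x get higher weight.
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # drop the intercept; keep per-feature attributions

x0 = np.array([0.2, -0.1])
attr = lime_explain(black_box, x0)
print(attr)  # feature 0 should dominate the local explanation
```

Because the surrogate only needs prediction access, the same routine applies unchanged to any detector, which is what makes this style of explanation model-agnostic.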