Explainability and Transparency in AI Systems

AI research is moving towards greater explainability and transparency, with a focus on techniques and frameworks that provide insight into the decision-making processes of complex models. This trend is driven by the need for trust and accountability in AI systems, particularly in high-stakes applications such as healthcare, finance, and cybersecurity. Recent work highlights the importance of integrating explainability into model development rather than treating it as an afterthought. Notable papers in this area include:

  • L-XAIDS, which proposes a LIME-based framework for explainable AI in intrusion detection systems, achieving 85 percent accuracy in classifying attack behavior (see the sketch after this list).
  • Obz AI, a comprehensive software ecosystem that facilitates state-of-the-art explainability and observability for vision AI systems.
  • A Novel Framework for Automated Explain Vision Model, which proposes a pipeline to explain vision models at both the sample and dataset levels using Vision-Language Models.
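
To make the LIME-based approach behind L-XAIDS concrete, the sketch below shows how LIME's tabular explainer can attribute a single intrusion-detection prediction to individual flow features. The feature names, synthetic data, and random-forest classifier are illustrative placeholders, not the paper's actual pipeline; only the `lime` API calls are standard.

```python
# Minimal sketch, assuming the lime and scikit-learn packages are installed.
# Explains one prediction of a placeholder intrusion-detection classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]
X = rng.random((500, len(feature_names)))
y = (X[:, 3] + 0.5 * X[:, 1] > 0.9).astype(int)  # synthetic "attack" label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["benign", "attack"],
    mode="classification",
)

# Local explanation for one flow: which features pushed it towards "attack"?
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")  # signed per-feature contribution
```

The printed weights give the kind of per-feature rationale an analyst could review alongside an alert, which is the role LIME plays in frameworks like L-XAIDS.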

Sources

Semantic-Aware Ship Detection with Vision-Language Integration

L-XAIDS: A LIME-based eXplainable AI framework for Intrusion Detection Systems

Explain and Monitor Deep Learning Models for Computer Vision using Obz AI

Navigating the EU AI Act: Foreseeable Challenges in Qualifying Deep Learning-Based Automated Inspections of Class III Medical Devices

A Novel Framework for Automated Explain Vision Model Using Vision-Language Models
