The field of Explainable AI is moving towards more transparent and trustworthy models. A key focus area is the computation of formal explanations for AI decisions, with particular emphasis on finding the most general explanations, i.e., those that remain valid over the largest possible portion of the input space. Another significant direction is the development of multimodal models that can faithfully explain both the features extracted from each modality and how those features are combined, without compromising predictive performance. Noteworthy papers in this regard include:
- A paper introducing a framework for finding the most general abductive explanation of an AI decision, i.e., the sufficient explanation that applies to the broadest set of inputs (sketched after this list).
- A paper presenting MultiFIX, an interpretability-driven multimodal data fusion pipeline that combines deep learning components with interpretable blocks to produce the final prediction (sketched after this list).
- A paper proposing Concept Rule Learner (CRL), a framework for learning Boolean logical rules over binarized visual concepts for medical image classification, which achieves competitive performance while significantly improving generalizability (sketched after this list).
- A paper introducing BACON, a framework for automatically training explainable AI models for decision-making problems using graded logic, which delivers high predictive accuracy together with full structural transparency (sketched after this list).
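To make the first item concrete: an abductive explanation is a subset of an instance's feature values that is, on its own, sufficient to force the model's decision, and a more general explanation fixes fewer values and therefore covers more of the input space. The brute-force sketch below only illustrates that definition over small discrete feature domains; the function names, the dictionary-based encoding, and the exhaustive enumeration are assumptions made for the example, not the paper's algorithm, which would rely on formal reasoning engines to scale.

```python
from itertools import combinations, product
from math import prod

def is_sufficient(predict, instance, fixed, domains):
    """True if fixing the instance's values on `fixed` forces the same
    prediction no matter how the remaining features are completed."""
    target = predict(instance)
    free = [f for f in domains if f not in fixed]
    for values in product(*(domains[f] for f in free)):
        completed = dict(instance)
        completed.update(zip(free, values))
        if predict(completed) != target:
            return False
    return True

def most_general_axp(predict, instance, domains):
    """Among all sufficient feature subsets, return the one that leaves
    the largest number of inputs unconstrained (exhaustive search)."""
    features = list(domains)
    best, best_cover = None, -1
    for size in range(len(features) + 1):
        for fixed in map(set, combinations(features, size)):
            if is_sufficient(predict, instance, fixed, domains):
                cover = prod(len(domains[f]) for f in features if f not in fixed)
                if cover > best_cover:
                    best, best_cover = fixed, cover
    return best, best_cover

# Toy usage: approve iff income is high, or employed with low debt.
domains = {"income": ["low", "high"], "employed": [0, 1], "debt": ["low", "high"]}

def predict(x):
    return x["income"] == "high" or (x["employed"] == 1 and x["debt"] == "low")

print(most_general_axp(predict, {"income": "high", "employed": 0, "debt": "high"}, domains))
# -> ({'income'}, 4): fixing income="high" alone already guarantees approval.
```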
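For the MultiFIX-style item, the generic pattern is a deep extractor per modality that distills each input into a small number of inspectable features, followed by an interpretable fusion block that produces the final prediction. The sketch below shows only that pattern; the weights, feature names, and the linear-threshold fusion rule are placeholders, not MultiFIX's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_IMG, W_TAB = rng.normal(size=64), rng.normal(size=8)   # stand-ins for trained weights

def image_feature(x_img):
    """Deep extractor stub: maps an image embedding to one inspectable scalar."""
    return float(np.tanh(x_img @ W_IMG))

def tabular_feature(x_tab):
    """Deep extractor stub: maps tabular inputs to one inspectable scalar."""
    return float(np.tanh(x_tab @ W_TAB))

def fuse(f_img, f_tab, threshold=0.2):
    """Interpretable fusion block: an explicit, readable rule over the two
    extracted features rather than an opaque fusion network."""
    return int(0.7 * f_img + 0.3 * f_tab > threshold)

x_img, x_tab = rng.normal(size=64), rng.normal(size=8)
f_img, f_tab = image_feature(x_img), tabular_feature(x_tab)
print(f"image={f_img:.2f}  tabular={f_tab:.2f}  prediction={fuse(f_img, f_tab)}")
```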
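For the CRL item, the underlying representation is a Boolean rule over binarized concept activations. As a generic illustration of that representation (not CRL's learning procedure), the sketch below greedily grows a single conjunctive rule from a binary concept matrix; the greedy precision criterion and the `min_recall` parameter are assumptions made for this example.

```python
import numpy as np

def learn_conjunction(concepts, labels, min_recall=0.5):
    """Greedily grow a conjunction of concept literals: at each step add the
    literal that most improves precision on the positive class while keeping
    at least `min_recall` of the positives covered."""
    n_pos = labels.sum()
    covered = np.ones(len(labels), dtype=bool)      # rows the rule still fires on
    rule = []                                       # chosen (concept_idx, polarity) literals
    while True:
        base_precision = labels[covered].mean()
        best = None
        for j in range(concepts.shape[1]):
            for polarity in (1, 0):                 # concept present / absent
                fires = covered & (concepts[:, j] == polarity)
                if not fires.any() or labels[fires].sum() < min_recall * n_pos:
                    continue
                precision = labels[fires].mean()
                if precision > base_precision and (best is None or precision > best[0]):
                    best = (precision, j, polarity, fires)
        if best is None:
            return rule, covered                    # no literal improves the rule further
        _, j, polarity, fires = best
        rule.append((j, polarity))
        covered = fires

# Toy usage: the label is "concept 0 AND NOT concept 2".
X = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]])
y = np.array([1, 1, 0, 0, 0, 0])
print(learn_conjunction(X, y))   # rule [(0, 1), (2, 0)] -> "concept0 AND NOT concept2"
```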
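Finally, graded logic, as used in the BACON item, replaces hard Boolean connectives with aggregators whose degree of "andness" can vary continuously; a standard choice is a weighted power mean acting as a graded conjunction/disjunction. The sketch below shows that aggregator generically; the function name, weights, and criteria are illustrative and not BACON's specific operator set.

```python
import numpy as np

def graded_cd(values, weights, r):
    """Graded conjunction/disjunction as a weighted power mean.
    r -> -inf approaches hard AND (min), r = 1 is the arithmetic mean,
    r -> +inf approaches hard OR (max); intermediate r tunes the andness."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    if np.isneginf(r):
        return float(x.min())
    if np.isposinf(r):
        return float(x.max())
    if r == 0:                                   # limit case: weighted geometric mean
        return float(np.exp(np.sum(w * np.log(np.clip(x, 1e-12, None)))))
    return float(np.sum(w * x ** r) ** (1.0 / r))

# Degrees of satisfaction of three criteria, e.g. price, quality, delivery.
scores, weights = [0.9, 0.6, 0.8], [0.5, 0.3, 0.2]
for r in (-np.inf, -2.0, 1.0, 3.0, np.inf):
    print(f"r={r:>5}: {graded_cd(scores, weights, r):.3f}")
```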