Explainability in Deep Learning

The field of deep learning is moving toward greater explainability, with a focus on developing models that are not only accurate but also transparent and interpretable. This is particularly important in high-stakes domains such as medical image analysis and financial fraud detection, where trust and accountability are essential. Recent research has produced architectures and techniques that embed explainability directly into the training process, such as the Deeply Explainable Artificial Neural Network (DxANN). There is also growing interest in visualizing and understanding the computations of convolutional neural networks, including new methods for visualizing 3D convolutional kernels. Noteworthy papers include "Deeply Explainable Artificial Neural Network", which presents the DxANN architecture mentioned above, and "Financial Fraud Detection Using Explainable AI and Stacking Ensemble Methods", which proposes a fraud detection framework combining a stacking ensemble of gradient boosting models with explainable AI techniques.
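
To make the stacking-ensemble-plus-XAI idea concrete, below is a minimal sketch of that style of pipeline. It assumes SHAP as the explanation method, scikit-learn gradient boosting models as base learners, and a synthetic imbalanced dataset standing in for transaction data; none of these choices are taken from the cited paper, which may use different models and explanation techniques.

```python
# Hypothetical sketch: a stacking ensemble of gradient boosting models whose
# fraud-probability output is explained with model-agnostic SHAP values.
# Dataset, base learners, and the use of SHAP are illustrative assumptions.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              HistGradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced "transactions" (3% positive class).
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Stacking ensemble: two gradient boosting base learners, logistic meta-learner.
stack = StackingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("hgb", HistGradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_train, y_train)

# Explain the ensemble's fraud probability with a model-agnostic explainer,
# using a small background sample from the training set.
explainer = shap.Explainer(lambda data: stack.predict_proba(data)[:, 1],
                           X_train[:100])
shap_values = explainer(X_test[:50])
print(shap_values.values.shape)  # (50 explained samples, 10 features)
```

Because the explainer only sees the ensemble's prediction function, the same pattern applies whichever boosting libraries or meta-learner the underlying framework actually uses.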

Sources

Exploring Convolutional Neural Networks for Rice Grain Classification: An Explainable AI Approach

Deeply Explainable Artificial Neural Network

Feature Visualization in 3D Convolutional Neural Networks

SHAP-based Explanations are Sensitive to Feature Representation

On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction

Financial Fraud Detection Using Explainable AI and Stacking Ensemble Methods
