Explainable AI and Interpretable Models in Sensing Systems and Industrial Applications

The field of machine learning and artificial intelligence is moving toward more interpretable and explainable models, particularly in sensing systems and industrial applications, driven by the need for transparency, accountability, and trust in complex decision-making systems. Recent research integrates techniques such as neuro-symbolic rule learning, concept bottleneck models, and Shapley-value attribution to expose the decision-making processes of black-box models, with applications spanning human activity recognition, fault detection, and time-series prediction. Across these studies, interpretable models achieve performance competitive with opaque baselines while providing actionable insights and explanations. Noteworthy papers include DeltaSHAP, which explains how predictions evolve in online patient monitoring, and SEZ-HARN, which recognizes human activities while producing skeleton videos that illustrate its decision-making process. These developments are expected to accelerate the adoption of AI in industries where interpretability and transparency are crucial.
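To make the Shapley-value machinery behind methods like DeltaSHAP concrete, here is a minimal from-scratch sketch of exact Shapley attribution for a single prediction. The toy model `f`, the input, and the zero baseline are illustrative assumptions, not taken from any paper listed below; practical explainers approximate this exponential-cost sum rather than enumerating all coalitions.

```python
# Minimal sketch: exact Shapley-value attribution for one prediction.
# Everything here (model f, input x, zero baseline) is an assumed toy
# setup for illustration; it is not the method of any specific paper.
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x; absent features are set to baseline.

    Cost is exponential in the number of features, so practical explainers
    (e.g., KernelSHAP-style estimators) sample coalitions instead.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Shapley weight of coalition S: |S|! (n-|S|-1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                idx = list(S)
                x_with, x_without = baseline.copy(), baseline.copy()
                x_with[idx + [i]] = x[idx + [i]]
                x_without[idx] = x[idx]
                phi[i] += w * (f(x_with) - f(x_without))
    return phi

# Toy sensor model: weighted sum of three channel readings.
f = lambda v: 2.0 * v[0] + 0.5 * v[1] - 1.0 * v[2]
x, baseline = np.array([1.0, 4.0, 2.0]), np.zeros(3)
print(shapley_values(f, x, baseline))  # attributions sum to f(x) - f(baseline)
```

For a linear model like this toy `f`, the attributions reduce to each feature's weighted deviation from the baseline, and they always sum to the difference between the explained prediction and the baseline prediction.

A concept bottleneck model, by contrast, builds interpretability into the architecture: the input is mapped to a small set of human-readable concepts, and the label is predicted from those concepts alone, which is what makes test-time human intervention possible. The following PyTorch sketch uses hypothetical dimensions and concept names and is not the architecture of any paper listed below.

```python
# Minimal concept bottleneck sketch (assumed architecture, for illustration).
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # x -> concept activations (e.g., "vibration high", "motor idle").
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_concepts)
        )
        # concepts -> class logits; all label information flows through concepts.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, concept_override=None):
        concepts = torch.sigmoid(self.concept_net(x))
        if concept_override is not None:
            # Test-time intervention: a human corrects mispredicted concepts
            # and the label prediction updates accordingly.
            concepts = concept_override
        return concepts, self.label_net(concepts)

model = ConceptBottleneck(in_dim=16, n_concepts=4, n_classes=3)
x = torch.randn(1, 16)
concepts, logits = model(x)
fixed = concepts.detach().clone()
fixed[0, 0] = 1.0  # clamp the first concept "on" and re-predict
_, logits_after = model(x, concept_override=fixed)
```

Because the label head sees only the concept vector, correcting a single concept changes the prediction in a traceable way, which is the kind of human-model interaction that intervention-based concept bottleneck work aims to exploit.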

Sources

Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar Hand Gesture Recognition

ViFusionTST: Deep Fusion of Time-Series Image Representations from Load Signals for Early Bed-Exit Prediction

Intervening in Black Box: Concept Bottleneck Model for Enhancing Human Neural Network Mutual Understanding

BenchMake: Turn any scientific data set into a reproducible benchmark

SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network

Towards transparent and data-driven fault detection in manufacturing: A case study on univariate, discrete time series

Interpretable AI for Time-Series: Multi-Model Heatmap Fusion with Global Attention and NLP-Generated Explanations

Prediction of Freezing of Gait in Parkinson's Disease using Explainable AI and Federated Deep Learning for Wearable Sensors

Discovery of Fatigue Strength Models via Feature Engineering and automated eXplainable Machine Learning applied to the welded Transverse Stiffener

DeltaSHAP: Explaining Prediction Evolutions in Online Patient Monitoring with Shapley Values

Detection of Disengagement from Voluntary Quizzes: An Explainable Machine Learning Approach in Higher Distance Education
