Advances in Explainable AI for Cybersecurity

Cybersecurity research is shifting toward explainable and robust models, particularly for graph neural networks (GNNs) and anomaly detection. A central goal is to produce verifiable explanations for model predictions, which is essential for building trust in deployed systems. Techniques such as counterfactual explanations, graph neural networks, and ensemble learning are gaining traction and have shown promising results in detecting and explaining a range of attacks, including Advanced Persistent Threats and replay attacks.

Noteworthy papers include ProvX, which introduces a counterfactual explanation framework for GNN-based security models; MirGuard, which proposes a robust anomaly detection framework combining logic-aware multi-view augmentation with contrastive representation learning; and Explainable Ensemble Learning for Graph-Based Malware Detection, which presents a stacking ensemble framework for graph-based malware detection and explanation. Overall, the field is converging on transparent, reliable models whose decision-making processes can be inspected and trusted.
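
To make the counterfactual-explanation idea concrete, here is a minimal sketch in PyTorch: a toy two-layer GCN scores a graph as malicious or benign, and a greedy search deletes the edges whose removal most lowers the malicious score until the prediction flips. The names (`TinyGCN`, `greedy_counterfactual`) and the greedy edge-deletion strategy are illustrative assumptions for exposition, not the ProvX algorithm or its API.

```python
"""Illustrative counterfactual-explanation sketch for a GNN-based detector.
Not the ProvX method: just the general idea of finding a small set of edge
deletions that flips a 'malicious' prediction to 'benign'."""
import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """Two-layer GCN on a dense adjacency matrix with mean pooling (toy model)."""
    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        h = F.relu(self.lin1(a_norm @ x))
        h = self.lin2(a_norm @ h)
        return h.mean(dim=0)  # graph-level logits: [benign, malicious]


@torch.no_grad()
def greedy_counterfactual(model, x, adj, target_class=0, max_edits=10):
    """Greedily delete the edge whose removal most reduces the malicious
    logit until the prediction flips to `target_class` (benign)."""
    adj = adj.clone()
    edits = []
    for _ in range(max_edits):
        if model(x, adj).argmax().item() == target_class:
            break  # prediction flipped: counterfactual found
        best_drop, best_edge = None, None
        for i, j in (adj.triu(1) > 0).nonzero(as_tuple=False).tolist():
            trial = adj.clone()
            trial[i, j] = trial[j, i] = 0.0
            drop = model(x, trial)[1 - target_class]  # malicious logit after edit
            if best_drop is None or drop < best_drop:
                best_drop, best_edge = drop, (i, j)
        if best_edge is None:
            break  # no edges left to delete
        i, j = best_edge
        adj[i, j] = adj[j, i] = 0.0
        edits.append(best_edge)
    return edits, adj


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 6, 8
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(0)
    model = TinyGCN(d, 16)  # untrained: demonstrates only the search loop
    edits, cf_adj = greedy_counterfactual(model, x, adj)
    print("edges removed to flip the prediction:", edits)
```

The returned edit set plays the role of the explanation: the smaller the set of deleted edges needed to flip the decision, the more precisely it pinpoints the graph structure the model relied on.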

Sources

ProvX: Generating Counterfactual-Driven Attack Explanations for Provenance-Based Detection

ScamDetect: Towards a Robust, Agnostic Framework to Uncover Threats in Smart Contracts

An Unsupervised Deep XAI Framework for Localization of Concurrent Replay Attacks in Nuclear Reactor Signals

Exact Verification of Graph Neural Networks with Incremental Constraint Solving

Explainable Ensemble Learning for Graph-Based Malware Detection

MirGuard: Towards a Robust Provenance-based Intrusion Detection System Against Graph Manipulation Attacks

A Novel Study on Intelligent Methods and Explainable AI for Dynamic Malware Analysis
