Advances in Adversarial Robustness and Explainability

Research on deep neural networks is increasingly focused on two intertwined goals: robustness and explainability. On the robustness side, recent work aims to harden models against adversarial attacks, which remain a significant threat to their reliability, with proposed approaches including stochastic resonance, concept-based masking, and transfer learning-based methods. In parallel, there is growing interest in explaining the decisions neural networks make, with techniques such as provenance networks and frequency-aware model parameter explorers under investigation. Noteworthy papers in this area include 'Test-Time Defense Against Adversarial Attacks via Stochastic Resonance of Latent Ensembles' and 'Provenance Networks: End-to-End Exemplar-Based Explainability', which illustrate the potential of these approaches to improve the robustness and transparency of deep learning models.
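To make the first direction concrete, the sketch below shows one generic way a test-time latent-ensemble defense could be structured: inject small random noise into a model's latent representation and average the resulting predictions. The `encoder`/`head` split, the noise scale, and the ensemble size are illustrative assumptions and are not taken from the cited stochastic-resonance paper.

```python
# Minimal sketch of a test-time latent-ensemble defense, assuming the model can
# be split into an `encoder` and a classification `head` (an assumption, not a
# detail from the cited paper). Predictions are averaged over several
# noise-perturbed copies of the latent representation.
import torch

@torch.no_grad()
def noisy_latent_ensemble_predict(encoder, head, x, n_samples=16, noise_std=0.1):
    """Average class probabilities over noise-perturbed latent representations."""
    z = encoder(x)                                     # clean latent features
    probs = []
    for _ in range(n_samples):
        z_noisy = z + noise_std * torch.randn_like(z)  # inject small latent noise
        probs.append(torch.softmax(head(z_noisy), dim=-1))
    return torch.stack(probs).mean(dim=0)              # ensemble-averaged prediction
```

For the explainability direction, a minimal exemplar-based explanation can be sketched as nearest-neighbour retrieval in embedding space; this is a generic baseline view of exemplar explanations, not the specific mechanism of 'Provenance Networks'. The function name, the pre-computed `train_embeddings` matrix, and the cosine-similarity choice are illustrative assumptions.

```python
# Generic exemplar-based explanation: attribute a prediction to the training
# examples whose pre-computed, L2-normalized embeddings are most similar to the
# test input's embedding.
import torch

@torch.no_grad()
def nearest_training_exemplars(encoder, x, train_embeddings, k=5):
    """Return indices of the k training examples closest to x in embedding space."""
    z = torch.nn.functional.normalize(encoder(x), dim=-1)  # (1, d) query embedding
    sims = z @ train_embeddings.T                           # cosine similarity to training set
    return sims.topk(k, dim=-1).indices                     # indices of the top-k exemplars
```

Returning the retrieved indices (rather than a saliency map) keeps the explanation grounded in concrete training data, which is the general appeal of exemplar-based approaches.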
Sources
An Investigation into the Performance of Non-Contrastive Self-Supervised Learning Methods for Network Intrusion Detection
A Statistical Method for Attack-Agnostic Adversarial Attack Detection with Compressive Sensing Comparison
ELMF4EggQ: Ensemble Learning with Multimodal Feature Fusion for Non-Destructive Egg Quality Assessment
Attack logics, not outputs: Towards efficient robustification of deep neural networks by falsifying concept-based properties
Unmasking Puppeteers: Leveraging Biometric Leakage to Disarm Impersonation in AI-based Videoconferencing
Road Damage and Manhole Detection using Deep Learning for Smart Cities: A Polygonal Annotation Approach
Using predefined vector systems as latent space configuration for neural network supervised training on data with arbitrarily large number of classes