Advances in Adversarial Robustness and Explainability

Research on deep neural networks is increasingly focused on two complementary goals: robustness against adversarial attacks, which remain a significant threat to model reliability, and explainability of model decisions. On the robustness side, recently proposed defenses include stochastic resonance of latent ensembles, concept-based masking against adversarial patch attacks, and transfer learning-based methods. On the explainability side, techniques such as provenance networks, which trace predictions back to training exemplars, and frequency-aware model parameter explorers, a new attribution method, are being explored. Noteworthy papers include 'Test-Time Defense Against Adversarial Attacks via Stochastic Resonance of Latent Ensembles' and 'Provenance Networks: End-to-End Exemplar-Based Explainability', which illustrate how these directions can improve both the robustness and the transparency of deep learning models. A minimal sketch of the latent-ensemble idea follows below.
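As a rough illustration of the test-time latent-ensemble idea, the sketch below perturbs a frozen model's latent representation with random noise and averages the resulting predictions. This is a hedged approximation only: it assumes the network factors into an `encoder` and a classification `head`, and the Gaussian noise model and simple probability averaging are placeholders rather than the paper's actual perturbation and aggregation scheme.

```python
import torch

def stochastic_resonance_predict(encoder, head, x, n_samples=16, noise_std=0.1):
    """Sketch of a test-time latent-ensemble defense.

    Assumes the model splits into `encoder` (input -> latent) and
    `head` (latent -> logits). The Gaussian perturbation and mean
    aggregation are illustrative assumptions, not the paper's method.
    """
    with torch.no_grad():
        z = encoder(x)  # clean latent representation of the input batch
        probs = []
        for _ in range(n_samples):
            # Inject small random perturbations into the latent space.
            z_noisy = z + noise_std * torch.randn_like(z)
            probs.append(torch.softmax(head(z_noisy), dim=-1))
        # Average class probabilities over the stochastic ensemble.
        return torch.stack(probs).mean(dim=0)
```

The intuition is that a small amount of injected noise can wash out the carefully tuned adversarial perturbation while leaving the dominant class signal intact, so the averaged prediction is more stable than a single forward pass; the ensemble size and noise scale would need tuning per model.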

Sources

An Investigation into the Performance of Non-Contrastive Self-Supervised Learning Methods for Network Intrusion Detection

Sequence-Preserving Dual-FoV Defense for Traffic Sign and Light Recognition in Autonomous Vehicles

A Statistical Method for Attack-Agnostic Adversarial Attack Detection with Compressive Sensing Comparison

Using Fourier Analysis and Mutant Clustering to Accelerate DNN Mutation Testing

ELMF4EggQ: Ensemble Learning with Multimodal Feature Fusion for Non-Destructive Egg Quality Assessment

Test-Time Defense Against Adversarial Attacks via Stochastic Resonance of Latent Ensembles

Adversarial training with restricted data manipulation

Frequency-Aware Model Parameter Explorer: A new attribution method for improving explainability

Domain-Robust Marine Plastic Detection Using Vision Models

Attack logics, not outputs: Towards efficient robustification of deep neural networks by falsifying concept-based properties

Provenance Networks: End-to-End Exemplar-Based Explainability

Unmasking Puppeteers: Leveraging Biometric Leakage to Disarm Impersonation in AI-based Videoconferencing

Road Damage and Manhole Detection using Deep Learning for Smart Cities: A Polygonal Annotation Approach

LoRA Patching: Exposing the Fragility of Proactive Defenses against Deepfakes

Using predefined vector systems as latent space configuration for neural network supervised training on data with arbitrarily large number of classes

Concept-Based Masking: A Patch-Agnostic Defense Against Adversarial Patch Attacks

Beyond Appearance: Transformer-based Person Identification from Conversational Dynamics

Neuroplastic Modular Framework: Cross-Domain Image Classification of Garbage and Industrial Surfaces

User to Video: A Model for Spammer Detection Inspired by Video Classification Technology

TransFIRA: Transfer Learning for Face Image Recognizability Assessment

Continual Action Quality Assessment via Adaptive Manifold-Aligned Graph Regularization
