Advances in Adversarial Robustness and Explainability

The field of adversarial robustness and explainability is evolving rapidly, with a focus on methods that improve both the security and the transparency of machine learning systems. Recent work highlights the need to evaluate how effective black-box adversarial attacks actually are under real-world conditions, and to develop attack methods that are more dynamic and robust. There is also growing interest in the vulnerabilities of containerization systems and in novel approaches to exploiting them, alongside efforts to harden saliency-based explanations against manipulation and to build more effective anomaly detection and localization.

Notable papers in this area include novel attack methods such as SwitchPatch, which enables dynamic and controllable attack outcomes from a single physical patch, and AngleRoCL, which improves the angle robustness of text-to-image adversarial patches. Other notable contributions strengthen the robustness of class activation maps, such as DiffGradCAM, or advance adversarially robust anomaly detection and localization, such as PatchGuard.
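For readers unfamiliar with the class activation maps that DiffGradCAM builds on, the sketch below shows the vanilla Grad-CAM computation (not the paper's method): each convolutional feature map is weighted by the spatial mean of its gradient with respect to the class score, the weighted maps are summed, and negative values are clipped. The inputs here are assumed to be precomputed activations and gradients from some network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Vanilla Grad-CAM over precomputed tensors.

    feature_maps: (K, H, W) activations of a conv layer
    gradients:    (K, H, W) d(class score) / d(activations)
    returns:      (H, W) saliency map scaled to [0, 1]
    """
    # alpha_k: global-average-pooled gradient per channel, shape (K,)
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum over channels: sum_k alpha_k * A_k -> (H, W)
    cam = np.tensordot(weights, feature_maps, axes=1)
    # ReLU: keep only features with positive influence on the class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random activations and gradients
rng = np.random.default_rng(0)
fmaps = rng.random((2, 4, 4))
grads = rng.random((2, 4, 4))
cam = grad_cam(fmaps, grads)
print(cam.shape)  # (4, 4)
```

Saliency maps like this are exactly the explanations that adversarial perturbations can distort, which motivates the robustness work summarized above.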

Sources

How stealthy is stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World

Poisoning Behavioral-based Worker Selection in Mobile Crowdsensing using Generative Adversarial Networks

gh0stEdit: Exploiting Layer-Based Access Vulnerability Within Docker Container Images

One Patch to Rule Them All: Transforming Static Patches into Dynamic Attacks in the Physical World

DiffGradCAM: A Universal Class Activation Map Resistant to Adversarial Training

PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies

AngleRoCL: Angle-Robust Concept Learning for Physically View-Invariant T2I Adversarial Patches
