Advances in Adversarial Robustness and Purification

The field of adversarial robustness and purification is evolving rapidly, with a focus on efficient and reliable defenses against adversarial attacks. Recent work highlights the value of combining diverse augmentation strategies synergistically, rather than relying on any single method, to improve robustness. There is also growing interest in physically realizable and transferable adversarial patch attacks, which pose concrete threats in real-world deployments. Noteworthy papers include DBLP, a diffusion-based noise-bridge framework for efficient and reliable adversarial purification, and PhysPatch, a physically realizable and transferable adversarial patch framework targeting multimodal large language model-based autonomous driving systems. Other notable works, such as AFOG and UAA, demonstrate the efficacy of attention-focused offensive gradient attacks and universal adversarial augmenters, respectively.
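As background for the gradient-based attacks surveyed here, a minimal sketch of the classic one-step fast gradient sign method (FGSM) on a toy logistic model may help fix intuitions. This is illustrative only, not the method of any paper above; the function names and the logistic-regression setting are our own assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic model on one example."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: shift x by eps along the sign of the input gradient.

    For logistic regression, dL/dz = p - y and z = w @ x + b,
    so the input gradient is dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy example: the perturbed input increases the model's loss.
w, b, y = np.array([1.0, -2.0]), 0.0, 1
x = np.array([0.5, 0.3])
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
assert bce_loss(x_adv, w, b, y) > bce_loss(x, w, b, y)
```

Purification methods such as the diffusion-based framework mentioned above aim to undo exactly this kind of bounded perturbation before the input reaches the classifier.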

Sources

DBLP: Noise Bridge Consistency Distillation For Efficient And Reliable Adversarial Purification

Revisiting Adversarial Patch Defenses on Object Detectors: Unified Evaluation, Large-Scale Dataset, and New Insights

Backdoor Attacks on Deep Learning Face Detection

Pulse Shape Discrimination Algorithms: Survey and Benchmark

Adversarial Attention Perturbations for Large Object Detection Transformers

The Power of Many: Synergistic Unification of Diverse Augmentations for Efficient Adversarial Robustness

PhysPatch: A Physically Realizable and Transferable Adversarial Patch Attack for Multimodal Large Language Models-based Autonomous Driving Systems

Physical Adversarial Camouflage through Gradient Calibration and Regularization

Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification

FS-IQA: Certified Feature Smoothing for Robust Image Quality Assessment
