Advances in Adversarial Robustness and Purification
Research on adversarial robustness and purification is evolving rapidly, with a focus on efficient, reliable defenses against adversarial attacks. Recent work highlights the value of combining diverse augmentation strategies synergistically, rather than relying on any single method, to improve robustness. There is also growing interest in physically realizable and transferable adversarial patch attacks, which pose significant threats in real-world deployments. Noteworthy papers include DBLP, which proposes a diffusion-based framework for adversarial purification, and PhysPatch, a physically realizable and transferable adversarial patch framework for autonomous driving systems built on multimodal large language models. Other notable works, AFOG and UAA, demonstrate the efficacy of attention-focused offensive gradient attacks and universal adversarial augmenters, respectively.
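The diffusion-based purification idea summarized above can be sketched roughly: noise the input enough to wash out fine-grained adversarial perturbations, then run reverse denoising steps before classification. The sketch below is a minimal illustration under assumptions, not the DBLP method itself; in particular, the denoiser here is a hypothetical placeholder (shrinkage toward the image mean) standing in for a trained diffusion model.

```python
import numpy as np

def purify(x, noise_level=0.25, steps=5, rng=None):
    """Diffusion-style purification sketch.

    Forward step: add Gaussian noise to drown small adversarial
    perturbations. Reverse steps: a placeholder denoiser (a real
    system would use a trained diffusion model's reverse process).
    """
    rng = rng or np.random.default_rng(0)
    # Forward diffusion: noise overwhelms the adversarial signal.
    x_t = x + noise_level * rng.standard_normal(x.shape)
    # Reverse steps: hypothetical denoiser pulls pixels toward a
    # smoothed estimate of the underlying clean image.
    for _ in range(steps):
        x_t = 0.8 * x_t + 0.2 * x_t.mean()
    # Project back into the valid pixel range.
    return np.clip(x_t, 0.0, 1.0)

# Toy example: a flat gray image with a sign-pattern perturbation.
clean = np.full((8, 8), 0.5)
rng = np.random.default_rng(1)
adversarial = clean + 0.05 * np.sign(rng.standard_normal((8, 8)))
purified = purify(adversarial)
```

The key design point this illustrates is that purification is attack-agnostic: it operates on the input distribution rather than on knowledge of any particular attack.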
Sources
Revisiting Adversarial Patch Defenses on Object Detectors: Unified Evaluation, Large-Scale Dataset, and New Insights
The Power of Many: Synergistic Unification of Diverse Augmentations for Efficient Adversarial Robustness