Advancements in Adversarial Attacks and Data Augmentation

The field of computer vision is seeing rapid progress in adversarial attacks and data augmentation, with researchers developing increasingly sophisticated methods to probe and improve the robustness of deep learning models. Superpixel-based approaches are gaining traction, both for strengthening black-box adversarial attacks and for building more effective data augmentation. At the same time, interest in physical adversarial attacks is growing, since such attacks can compromise the safety and security of deployed intelligent systems; the emergence of stealth-aware adversarial patches and physical ID-transfer attacks underscores the need for more robust and secure computer vision systems.

Noteworthy papers include the following. LGCOAMix proposes an efficient context-aware and object-part-aware superpixel-based grid blending method for data augmentation (a simplified sketch of the superpixel-blending idea appears below). TESP-Attack introduces a stealth-aware adversarial edge-patch method for traffic sign classification. SSR proposes a semantic and spatial rectification method that addresses limitations of CLIP-based weakly supervised semantic segmentation. AdvTraj presents the first online, physical ID-transfer attack against tracking-by-detection multi-object tracking. Superpixel Attack enhances black-box adversarial attacks with image-driven division areas (also sketched below). BlackCAtt uses minimal, causally sufficient pixel sets to construct explainable, imperceptible, reproducible, architecture-agnostic attacks on object detectors.
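
To make the superpixel-blending idea concrete, here is a minimal sketch of mixing superpixel regions from one training image into another and blending the labels by pasted area. It illustrates the general technique only, not the LGCOAMix algorithm itself, whose context-aware and object-part-aware superpixel selection is considerably more involved; the function name, parameters, and use of scikit-image's SLIC segmentation are assumptions for illustration.

```python
# Minimal sketch of superpixel-based image blending for data augmentation.
# Illustrative only: NOT the LGCOAMix algorithm, just the general idea of
# transplanting superpixel regions and mixing labels by pasted area.
import numpy as np
from skimage.segmentation import slic

def superpixel_blend(img_a, label_a, img_b, label_b,
                     n_segments=64, paste_ratio=0.3, seed=None):
    """Paste a random subset of img_b's superpixels onto img_a.

    img_a, img_b : float arrays of shape (H, W, 3) in [0, 1], same size.
    label_a, label_b : one-hot label vectors.
    Returns the blended image and the area-weighted mixed label.
    """
    rng = np.random.default_rng(seed)
    # Segment the source image into superpixels (SLIC from scikit-image).
    segments = slic(img_b, n_segments=n_segments, start_label=0)
    ids = np.unique(segments)
    # Randomly choose a fraction of the superpixels to transplant.
    chosen = rng.choice(ids, size=max(1, int(paste_ratio * len(ids))), replace=False)
    mask = np.isin(segments, chosen)
    blended = img_a.copy()
    blended[mask] = img_b[mask]
    # Mix labels in proportion to the pasted area, as in CutMix-style methods.
    lam = mask.mean()
    mixed_label = (1.0 - lam) * label_a + lam * label_b
    return blended, mixed_label
```

A common argument for superpixel masks over rectangular CutMix regions is that they tend to follow object and part boundaries, so the pasted content remains semantically coherent.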
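
Similarly, the sketch below shows how superpixels can define the update regions of a score-based black-box attack, in the spirit of random-search methods such as Square Attack. The loss_fn callable, parameter values, and function name are assumptions for illustration; this is not the exact algorithm from the Superpixel Attack paper.

```python
# Minimal sketch of a score-based black-box attack that uses superpixels as
# update regions. Illustrative simplification, not the paper's exact method.
# `loss_fn` is an assumed black-box scalar objective (higher = more adversarial).
import numpy as np
from skimage.segmentation import slic

def superpixel_random_search(image, loss_fn, eps=8 / 255, iters=500,
                             n_segments=128, seed=None):
    """L_inf-bounded random search over superpixel regions.

    image : float array of shape (H, W, 3) in [0, 1].
    loss_fn : callable taking an image and returning a scalar to maximize.
    """
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, start_label=0)
    ids = np.unique(segments)
    x_adv = image.copy()
    best = loss_fn(x_adv)
    for _ in range(iters):
        # Pick one superpixel and a random +/- eps sign per color channel.
        region = segments == rng.choice(ids)
        signs = rng.choice([-eps, eps], size=3)
        candidate = x_adv.copy()
        # Perturb relative to the clean image so the L_inf bound is preserved.
        candidate[region] = np.clip(image[region] + signs, 0.0, 1.0)
        score = loss_fn(candidate)
        if score > best:  # keep the update only if the objective improves
            best, x_adv = score, candidate
    return x_adv
```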

Sources

Local and Global Context-and-Object-part-Aware Superpixel-based Data Augmentation for Deep Visual Recognition

The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches

SSR: Semantic and Spatial Rectification for CLIP-based Weakly Supervised Segmentation

Physical ID-Transfer Attacks against Multi-Object Tracking via Adversarial Trajectory

Superpixel Attack: Enhancing Black-box Adversarial Attack with Image-driven Division Areas

Out-of-the-box: Black-box Causal Attacks on Object Detectors
