Advances in Adversarial Attacks on Deep Learning Models

The field of deep learning is witnessing a surge in research on adversarial attacks, which manipulate input data to mislead neural networks. Recent studies show that input transformation-based attacks can enhance the transferability of adversarial examples (see the sketch after the highlights below), while other work demonstrates the effectiveness of physical adversarial patches in real-world scenarios. Researchers have also exposed the vulnerability of multimodal medical retrieval-augmented generation systems to cross-modal transferable adversarial attacks. A further line of work develops novel attack methods, such as slope-based attacks on financial time-series data and targeted manipulation of decision chains in neural networks.

Noteworthy papers in this area include:

Medusa, a framework for crafting cross-modal transferable adversarial attacks on multimodal medical retrieval-augmented generation systems, achieving over 90% average attack success rate.

V-Attack, a method designed for precise local semantic attacks on Large Vision-Language Models, improving the attack success rate by an average of 36% over state-of-the-art methods.

UPA-RFAS, a unified framework that learns a single physical patch in a shared feature space, promoting cross-model transfer so that the patch carries over consistently across models, tasks, and viewpoints.
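As a concrete illustration of the transferability idea, here is a minimal sketch of an L-infinity PGD attack on a white-box surrogate model that applies a random resize-and-pad transform at each step. This is a generic diverse-input-style baseline, not the block stretch-and-shrink method from the paper listed below; the diverse_input and transfer_pgd helpers and all hyperparameters are illustrative assumptions, written for PyTorch.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, low=224, high=257, p=0.5):
    # With probability p, randomly resize the batch, pad it to a fixed
    # canvas, and resize back to the original spatial size.
    if torch.rand(()) > p:
        return x
    size = int(torch.randint(low, high, ()).item())
    resized = F.interpolate(x, size=(size, size), mode="bilinear",
                            align_corners=False)
    pad_total = (high - 1) - size
    left = int(torch.randint(0, pad_total + 1, ()).item())
    top = int(torch.randint(0, pad_total + 1, ()).item())
    padded = F.pad(resized, (left, pad_total - left, top, pad_total - top))
    return F.interpolate(padded, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

def transfer_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # L_inf PGD on a white-box surrogate (model in eval mode, x in [0, 1]).
    # The per-step random transform discourages overfitting to the
    # surrogate, which tends to improve transfer to unseen models.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(diverse_input(x_adv)), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep valid pixels
        x_adv = x_adv.detach()
    return x_adv
```

Examples crafted this way overfit the surrogate less, so they tend to remain effective against unseen black-box models, which is the property the transferability papers below aim to strengthen.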

Sources

Enhancing Adversarial Transferability through Block Stretch and Shrink

Robust Physical Adversarial Patches Using Dynamically Optimized Clusters

Adversarial Patch Attacks on Vision-Based Cargo Occupancy Estimation via Differentiable 3D Simulation

Medusa: Cross-Modal Transferable Adversarial Attacks on Multimodal Medical Retrieval-Augmented Generation

Targeted Manipulation: Slope-Based Attacks on Financial Time-Series Data

On the Feasibility of Hijacking MLLMs' Decision Chain via One Perturbation

V-Attack: Targeting Disentangled Value Features for Controllable Adversarial Attacks on LVLMs

Adversarial Confusion Attack: Disrupting Multimodal Large Language Models

When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models

Attention-Guided Patch-Wise Sparse Adversarial Attacks on Vision-Language-Action Models
