Advancements in Adversarial Attacks and Defenses

The field of adversarial attacks and defenses is evolving rapidly, with work split between designing more sophisticated, targeted attacks and hardening machine learning models against them. Recent research explores constrained adversarial perturbations, which respect domain-specific constraints so that attacks remain realistic as well as effective. There is also a push toward more efficient and scalable algorithms for generating adversarial examples, notably gradient-based optimization strategies.
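To make the gradient-based approach concrete, here is a minimal sketch of projected gradient descent (PGD) for crafting adversarial examples, with an extra clipping step standing in for a domain-specific constraint. The model, loss, step sizes, and the `feature_bounds` constraint are illustrative assumptions, not the method of any paper listed below.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=20, feature_bounds=(0.0, 1.0)):
    """Find x_adv with ||x_adv - x||_inf <= eps that increases the model's loss,
    clipped to per-feature bounds as a stand-in for domain constraints."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball and the
            # feasible feature range (the "domain constraint" here).
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps)
            x_adv = torch.clamp(x_adv, *feature_bounds)
    return x_adv.detach()

# Toy usage: a linear classifier on 10-dimensional inputs.
model = nn.Linear(10, 3)
x = torch.rand(4, 10)
y = torch.randint(0, 3, (4,))
x_adv = pgd_attack(model, x, y)
```

Real constrained attacks replace the simple clipping above with a projection onto whatever feasible set the domain imposes (valid categorical values, physical limits, budget constraints, and so on).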

Noteworthy papers in this area include Constrained Adversarial Perturbation, which proposes an efficient algorithm for generating constrained adversarial perturbations, achieving higher attack success rates at lower runtime; A Versatile Framework for Designing Group-Sparse Adversarial Attacks, which introduces a differentiable optimization framework for structured, sparse perturbations that improves interpretability and helps separate robust from non-robust features; and The Black Tuesday Attack, which investigates whether small, coordinated manipulations of individual stock values can act as an adversarial example against financial forecasting models and trigger a stock market crash.
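As a rough illustration of the group-sparsity idea (not the cited framework's exact formulation), one can optimize a perturbation while penalizing the L2 norm of each feature group, a group-lasso term that concentrates the attack in a few groups. The grouping, weights, and optimizer below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def group_sparse_attack(model, x, y, groups, lam=0.5, lr=0.05, steps=100):
    """Optimize a perturbation delta whose nonzero entries concentrate in a
    few groups; `groups` is a list of index tensors partitioning the features."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        # Group-lasso penalty: the sum of per-group L2 norms pushes whole
        # groups of features toward zero perturbation (a proximal step
        # would be needed for exact zeros).
        penalty = sum(delta[:, g].norm(dim=1).sum() for g in groups)
        opt.zero_grad()
        (-loss + lam * penalty).backward()
        opt.step()
    return delta.detach()

# Toy usage: 12 features split into 4 groups of 3.
model = nn.Linear(12, 2)
x, y = torch.rand(8, 12), torch.randint(0, 2, (8,))
groups = [torch.arange(i, i + 3) for i in range(0, 12, 3)]
delta = group_sparse_attack(model, x, y, groups)
```

Because the perturbation is confined to a small number of feature groups, inspecting which groups carry the attack is what makes such perturbations useful for distinguishing robust from non-robust features.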

Sources

Constrained Adversarial Perturbation

Colliding with Adversaries at ECML-PKDD 2025 Adversarial Attack Competition 1st Prize Solution

Colliding with Adversaries at ECML-PKDD 2025 Model Robustness Competition 1st Prize Solution

A Versatile Framework for Designing Group-Sparse Adversarial Attacks

The Black Tuesday Attack: how to crash the stock market with adversarial examples to financial forecasting models

A New Type of Adversarial Examples
