Advancements in Adversarial Attacks and Recommender Systems

Research on reinforcement learning and recommender systems is seeing significant developments, with a focus on improving robustness against adversarial attacks and on enhancing recommendation accuracy. On the attack side, researchers are devising novel methods that bypass existing defenses, including diffusion-based state perturbations that degrade reinforcement learning agents, road-style adversarial creation attacks that compromise visual 3D object detectors in autonomous driving, and bi-level constrained reinforcement-driven profile pollution attacks against sequential recommenders. On the recommendation side, new approaches such as dual constraints over hybrid intents aim to improve accuracy on long-tail items without the usual seesaw trade-off against head items. Noteworthy papers include:

  • Diffusion Guided Adversarial State Perturbations in Reinforcement Learning, which proposes a novel policy-agnostic, diffusion-based state perturbation attack (an illustrative sketch of the general idea follows this list).
  • Invisible Triggers, Visible Threats, which introduces a road-style adversarial creation attack for visual 3D detection in autonomous driving.
  • Bid Farewell to Seesaw, which presents a hybrid intent-based dual-constraint framework for accurate long-tail session-based recommendation (the dual-constraint idea is sketched after this list).
  • Potent but Stealthy, which rethinks profile pollution attacks against sequential recommendation via a bi-level constrained reinforcement paradigm.
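
To make the diffusion-guided perturbation idea concrete, below is a minimal, illustrative sketch rather than the paper's algorithm: it runs a short reverse-diffusion-style denoising loop around a clean observation and projects every step back into an L-infinity budget, so the victim policy receives an in-distribution but subtly altered state. The ToyDenoiser, the step schedule, and all constants are assumptions for illustration; a real attack would use a diffusion model trained on the environment's state distribution.

```python
# Illustrative sketch (not the paper's algorithm): generate a bounded state
# perturbation by running a short reverse-diffusion-style chain around the
# clean observation and projecting every step back into an L-infinity ball.
# The denoiser is an untrained toy network standing in for a diffusion model
# pretrained on the environment's state distribution.
import torch
import torch.nn as nn

STATE_DIM = 8        # assumed observation dimensionality
EPSILON = 0.05       # L-infinity perturbation budget
NUM_STEPS = 10       # length of the (truncated) reverse chain

class ToyDenoiser(nn.Module):
    """Stand-in for a pretrained diffusion denoiser over environment states."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim)
        )

    def forward(self, noisy_state: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Predict the noise component given the noisy state and timestep.
        return self.net(torch.cat([noisy_state, t], dim=-1))

def diffusion_perturb(clean_state: torch.Tensor,
                      denoiser: nn.Module,
                      epsilon: float = EPSILON,
                      steps: int = NUM_STEPS) -> torch.Tensor:
    """Return a perturbed state within epsilon of the clean state."""
    x = clean_state + epsilon * torch.randn_like(clean_state)   # noisy init
    for step in reversed(range(steps)):
        t = torch.full((x.shape[0], 1), step / steps)
        with torch.no_grad():
            predicted_noise = denoiser(x, t)
        x = x - (epsilon / steps) * predicted_noise              # crude denoise step
        # Project back into the allowed perturbation budget.
        x = clean_state + torch.clamp(x - clean_state, -epsilon, epsilon)
    return x

if __name__ == "__main__":
    denoiser = ToyDenoiser(STATE_DIM)
    clean = torch.randn(1, STATE_DIM)                # placeholder observation
    adv = diffusion_perturb(clean, denoiser)
    print("max |delta| =", (adv - clean).abs().max().item())  # <= EPSILON
```

Note that the perturbed state, not the clean one, would be what the victim policy observes at each step; no access to the policy's gradients is required in this sketch, which is the sense in which such attacks can be policy-agnostic.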
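
For the recommendation side, here is a minimal, illustrative sketch of the dual-constraint idea behind mitigating the head/tail "seesaw": a base next-item loss is combined with two constraint terms, one preventing head-item probability mass from collapsing and one encouraging tail-item exposure. The specific penalty terms, thresholds, and weights are assumptions for illustration, not the paper's formulation, and the hybrid-intent modeling is omitted entirely.

```python
# Illustrative sketch (not the paper's formulation): a recommendation loss with
# two constraint terms intended to mitigate the head/tail "seesaw" effect.
import torch
import torch.nn.functional as F

def dual_constraint_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         tail_mask: torch.Tensor,
                         lambda_head: float = 0.1,
                         lambda_tail: float = 0.1) -> torch.Tensor:
    """logits: (batch, num_items); targets: (batch,); tail_mask: (num_items,) bool."""
    # Base next-item prediction loss over all items.
    base = F.cross_entropy(logits, targets)

    probs = logits.softmax(dim=-1)
    tail_mass = probs[:, tail_mask].sum(dim=-1).mean()     # mass on tail items
    head_mass = probs[:, ~tail_mask].sum(dim=-1).mean()    # mass on head items

    # Constraint 1: do not let head-item probability mass vanish.
    head_constraint = F.relu(0.5 - head_mass)
    # Constraint 2: encourage non-trivial exposure of tail items.
    tail_constraint = F.relu(0.2 - tail_mass)

    return base + lambda_head * head_constraint + lambda_tail * tail_constraint

if __name__ == "__main__":
    num_items, batch = 100, 16
    logits = torch.randn(batch, num_items, requires_grad=True)
    targets = torch.randint(0, num_items, (batch,))
    tail_mask = torch.arange(num_items) >= 20   # assume items 20+ are "tail"
    loss = dual_constraint_loss(logits, targets, tail_mask)
    loss.backward()
    print("loss =", loss.item())
```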

Sources

Diffusion Guided Adversarial State Perturbations in Reinforcement Learning

Invisible Triggers, Visible Threats! Road-Style Adversarial Creation Attack for Visual 3D Detection in Autonomous Driving

Bid Farewell to Seesaw: Towards Accurate Long-tail Session-based Recommendation via Dual Constraints of Hybrid Intents

Potent but Stealthy: Rethink Profile Pollution against Sequential Recommendation via Bi-level Constrained Reinforcement Paradigm
