The field of deep learning is witnessing a significant surge in research on adversarial attacks, which involve manipulating input data to mislead neural networks. Recent studies have shown that input transformation-based attacks can enhance the transferability of adversarial examples, while other works have demonstrated the effectiveness of physical adversarial patches in real-world scenarios. Researchers have also explored the vulnerability of multimodal medical retrieval-augmented generation systems to cross-modal transferable adversarial attacks. Another area of interest is the development of novel attack methods, such as slope-based attacks on financial time-series data and targeted manipulation of decision chains in neural networks.

Noteworthy papers in this area include Medusa, a framework for crafting cross-modal transferable adversarial attacks on multimodal medical retrieval-augmented generation systems that achieves an average attack success rate above 90%; V-Attack, a method designed for precise local semantic attacks on Large Vision-Language Models that improves the attack success rate by an average of 36% over state-of-the-art methods; and UPA-RFAS, a unified framework that learns a single physical patch in a shared feature space, enabling it to transfer consistently across models, tasks, and viewpoints.
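For readers unfamiliar with how such input manipulations are typically constructed, the sketch below shows a minimal one-step gradient-sign perturbation (FGSM-style) in PyTorch. The model, labels, and perturbation budget are illustrative assumptions only; this is not the method of any of the papers summarized above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Illustrative one-step adversarial perturbation (FGSM-style).

    Assumes `x` is an image batch in [0, 1] and `y` holds the true labels;
    `epsilon` is an arbitrary example budget, not a value from the papers above.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    # Compute the classification loss and its gradient w.r.t. the input.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by epsilon in the sign of the gradient, then clamp to a valid range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Transferability-oriented and physical-patch attacks build on the same gradient machinery but add input transformations, surrogate ensembles, or printable-patch constraints on top of this basic step.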