Research on adversarial attacks and defenses in deep learning is advancing rapidly on both sides, as researchers explore new threat models, attack strategies, and defense mechanisms to improve the security and robustness of deployed systems. On the attack side, increasingly sophisticated methods such as data reconstruction attacks, backdoor attacks, and adversarial patch attacks can compromise the integrity or confidentiality of deep learning models. On the defense side, researchers are proposing countermeasures including diffusion denoised smoothing, adversarial training, and purification of adversarial patches. Noteworthy papers in this area include BadSR, which improves the stealthiness of poisoned high-resolution (HR) images in backdoor attacks on super-resolution models, and SuperPure, which proposes a pixel-wise masking scheme that purifies images of adversarial patches. Overall, the field is moving toward more robust and secure deep learning systems, and ongoing research remains essential to stay ahead of emerging threats.
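To make the adversarial-training defense mentioned above concrete, below is a minimal sketch of the standard PGD-based formulation (train on worst-case perturbations found within an L-infinity ball). It assumes a PyTorch image classifier with inputs scaled to [0, 1]; all names (`model`, `optimizer`, `x`, `y`) are placeholders, and this is an illustration of the general technique, not the training recipe of any paper cited above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: find a loss-maximizing perturbation
    within an L-infinity ball of radius eps around the clean input x."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Ascend the loss, then project back into the eps-ball
        # and into the valid pixel range (inputs assumed in [0, 1]).
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training step: craft perturbations against the
    current model, then update the model on the perturbed batch."""
    model.eval()                    # freeze batch-norm stats while attacking
    delta = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()           # clear gradients accumulated by the attack
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the attack strength (eps, alpha, steps) is a robustness-accuracy trade-off: stronger inner attacks yield more robust but typically less accurate models.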
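The pixel-wise masking idea behind patch purification can also be illustrated generically. The sketch below is not SuperPure's algorithm (which is only summarized in one line above); it is a common occlusion-sensitivity heuristic: slide a blanking window over the image and zero out the region whose occlusion most changes the model's prediction, on the assumption that a localized adversarial patch dominates the model's output. The brute-force loop (one forward pass per window) is for clarity, not efficiency.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def purify_by_masking(model, x, window=16, stride=8):
    """Illustrative occlusion-based patch purification (not SuperPure):
    blank out the window whose removal causes the largest drop in the
    model's confidence for its current top prediction."""
    # x: (1, C, H, W) image batch, pixel values assumed in [0, 1]
    base_probs = F.softmax(model(x), dim=1)
    base_conf, base_label = base_probs.max(dim=1)
    k = base_label.item()
    best_drop, best_box = 0.0, None
    _, _, H, W = x.shape
    for top in range(0, H - window + 1, stride):
        for left in range(0, W - window + 1, stride):
            masked = x.clone()
            masked[:, :, top:top + window, left:left + window] = 0.0
            conf = F.softmax(model(masked), dim=1)[0, k].item()
            drop = base_conf.item() - conf  # confidence lost when occluded
            if drop > best_drop:
                best_drop, best_box = drop, (top, left)
    if best_box is not None:
        # The most influential window is the likely patch location; blank it.
        top, left = best_box
        x = x.clone()
        x[:, :, top:top + window, left:left + window] = 0.0
    return x
```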