The field of adversarial attacks and defenses is evolving rapidly, with growing emphasis on generating natural, imperceptible adversarial examples. On the attack side, researchers are exploring diffusion models and perceptibility-aware schemes that jointly optimize where to place a perturbation and how strong to make it. On the defense side, feature-aware adversarial training frameworks adaptively assign weaker or stronger adversaries to different classes, reducing disparities in per-class robustness. A further line of work concerns the evaluation of unrestricted adversarial examples, which are not bounded by a perturbation norm and therefore require human studies to verify that they remain plausible to an observer.

Noteworthy papers include ScoreAdv, which generates natural adversarial examples using diffusion models; IAP, which produces highly invisible adversarial patches via perceptibility-aware localization and perturbation optimization; TRIX, a feature-aware adversarial training framework that reduces inter-class robustness disparities; and SCOOTER, a unified framework for evaluating unrestricted adversarial examples, complete with best-practice guidelines for crowd-study power and open-source software tools.
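For background, the attacks above build on gradient-based adversarial perturbations. A minimal sketch of the classic fast gradient sign method (FGSM), the norm-bounded baseline these "natural" attacks aim to move beyond, can be shown on a toy logistic-regression model; the weights, input, and epsilon below are illustrative values, not taken from any of the cited papers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that x belongs to class 1 under a logistic model.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    # Move each feature by eps in the sign of the gradient,
    # i.e. the direction that increases the loss.
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.5], 0.1       # toy model parameters (illustrative)
x, y = [0.8, 0.3], 1          # clean input and its true label
x_adv = fgsm(w, b, x, y, eps=0.3)

print(predict(w, b, x))       # confidence on the clean input (~0.78)
print(predict(w, b, x_adv))   # reduced confidence after the attack (~0.55)
```

Diffusion-based methods like ScoreAdv replace this fixed epsilon-ball step with a generative process, trading the simple norm bound for perceptual naturalness.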