Research on the robustness and security of deep learning models is advancing rapidly, particularly around adversarial attacks and graph neural networks. Current work aims to harden models against adversarial perturbations, poisoning attacks, and topology inference attacks. One key direction is the development of unified adversarial training frameworks that improve both the robustness and the generalization of graph neural networks. Another is the design of efficient defenses against adversarial patches and targeted attacks, whose consequences can be severe in real-world applications such as object detection and traffic sign recognition.

Noteworthy papers in this area include:

- Unifying Adversarial Perturbation for Graph Neural Networks: proposes PerturbEmbedding, a method that integrates adversarial perturbation and training to strengthen the resilience of graph neural networks (a generic sketch of embedding-level adversarial training appears after this list).
- Sequential Difference Maximization: presents a gradient-based attack that achieves stronger attack performance and higher attack cost-effectiveness than previous state-of-the-art methods.
- Targeted Physical Evasion Attacks in the Near-Infrared Domain: proposes an attack that generates both targeted and untargeted adversarial infrared perturbations and demonstrates its effectiveness across a range of attack scenarios.
- AutoDetect: introduces a simple, fast, and lightweight autoencoder-based method for detecting poisoning attacks on object detection systems (see the reconstruction-error sketch after this list).
- DisPatch: presents a diffusion-based defense framework for object detection that disarms adversarial patches while preserving the integrity of the input image.
- Graph Unlearning: proposes node unlearning methods for efficiently removing sensitive training data from graph neural networks.
- Safeguarding Graph Neural Networks against Topology Inference Attacks: provides a comprehensive study of topology privacy risks in graph neural networks and proposes Private Graph Reconstruction, a defense framework that protects topology privacy while maintaining model accuracy.
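To make the idea of adversarial training on graph neural networks concrete, the following is a minimal, generic sketch of embedding-level adversarial training in PyTorch. It is not the PerturbEmbedding method from the paper above; the model (TinyGCN), the FGSM-style single-step perturbation, and the hyperparameter epsilon are assumptions chosen for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Two-layer GCN over a dense, row-normalized adjacency matrix (toy setup)."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def embed(self, adj, x):
        # One propagation + transform step producing node embeddings.
        return F.relu(self.lin1(adj @ x))

    def classify(self, adj, h):
        # Second propagation step mapping embeddings to class logits.
        return self.lin2(adj @ h)


def adversarial_step(model, opt, adj, x, y, train_mask, epsilon=0.05):
    """One training step on the clean graph plus an FGSM-style perturbation
    of the hidden node embeddings (worst-case direction of the loss)."""
    model.train()
    opt.zero_grad()

    h = model.embed(adj, x)
    h.retain_grad()  # keep gradients for the non-leaf embedding tensor
    clean_loss = F.cross_entropy(model.classify(adj, h)[train_mask], y[train_mask])
    clean_loss.backward(retain_graph=True)

    # Perturb embeddings along the sign of their gradient, then train on them.
    delta = epsilon * h.grad.detach().sign()
    adv_loss = F.cross_entropy(model.classify(adj, h + delta)[train_mask], y[train_mask])
    adv_loss.backward()  # gradients accumulate with the clean pass

    opt.step()
    return clean_loss.item(), adv_loss.item()
```

In practice the adjacency would be normalized with self-loops, and a projected multi-step (PGD-style) perturbation is a common alternative to the single FGSM step shown here.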
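Similarly, autoencoder-based poisoning detection of the kind attributed to AutoDetect typically relies on reconstruction error. The sketch below illustrates only that general idea, not the AutoDetect implementation; the architecture, input size, and threshold heuristic are assumptions. The autoencoder is assumed to have been trained on clean data to minimize mean-squared reconstruction error.

```python
import torch
import torch.nn as nn


class PatchAutoencoder(nn.Module):
    """Small fully connected autoencoder over flattened image crops."""

    def __init__(self, dim=3 * 32 * 32, bottleneck=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 256), nn.ReLU(),
                                 nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view_as(x)


@torch.no_grad()
def flag_suspicious(model, batch, threshold):
    """Flag samples whose reconstruction error exceeds a threshold calibrated
    on clean data; high-error outliers are treated as possibly poisoned."""
    model.eval()
    recon = model(batch)
    err = ((recon - batch) ** 2).flatten(1).mean(dim=1)  # per-sample MSE
    return err > threshold
```

The threshold would typically be set from the distribution of reconstruction errors on a held-out clean set (for example, a high percentile), trading off false positives against missed poisoned samples.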