The field of deep learning is moving toward improving the robustness and security of models, particularly against adversarial attacks. Researchers are exploring methods to strengthen model resilience, including influence-guided concolic testing, learning-based testing, and robustness analysis of graph neural networks. Notably, novel attack frameworks such as the High Impact Attack are exposing critical vulnerabilities in temporal graph neural networks (TGNNs), while advances in backdoor attacks, like the Distribution-Preserving Backdoor Attack, highlight the need for more effective defenses.

Noteworthy papers include:

- Influence-Guided Concolic Testing of Transformer Robustness, which presents an influence-guided concolic tester for Transformer classifiers.
- Learning-Based Testing for Deep Learning, which integrates Learning-Based Testing with hypothesis and mutation testing to efficiently prioritize adversarial test cases (a generic prioritization sketch follows this list).
- Leveraging Vulnerabilities in Temporal Graph Neural Networks via Strategic High-Impact Assaults, which introduces a restricted black-box attack framework that exposes critical vulnerabilities in TGNNs.
- Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification, which proposes a clean-label backdoor framework that learns in-distribution triggers via adversarial training guided by anomaly-aware discriminators (a simplified fixed-trigger sketch follows this list).
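
To give a rough sense of the adversarial test-generation and prioritization theme these testing papers build on, the sketch below is a minimal illustration, not the method of any paper above: it produces candidate adversarial test cases with a standard FGSM perturbation and ranks them by per-example loss. The names `model`, `x_batch`, and `y_batch` are hypothetical placeholders.

```python
# Minimal sketch (illustrative only): generate candidate adversarial test cases
# with a standard FGSM perturbation and prioritize them by the model's loss.
import torch
import torch.nn.functional as F


def fgsm_candidates(model, inputs, labels, eps=0.03):
    """Return FGSM-perturbed copies of `inputs` (one standard attack, for illustration)."""
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    # Assumes inputs are normalized to [0, 1]; adjust the clamp otherwise.
    return (inputs + eps * inputs.grad.sign()).clamp(0, 1).detach()


def prioritize_by_loss(model, candidates, labels):
    """Rank candidate test cases by per-example loss, most suspicious first."""
    with torch.no_grad():
        losses = F.cross_entropy(model(candidates), labels, reduction="none")
    order = torch.argsort(losses, descending=True)
    return candidates[order], losses[order]


# Hypothetical usage:
# adv = fgsm_candidates(classifier, x_batch, y_batch)
# ranked, scores = prioritize_by_loss(classifier, adv, y_batch)
```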
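
For the graph-backdoor item, the following is a simplified sketch of the clean-label idea using a fixed trigger subgraph attached to host graphs; the paper above instead learns in-distribution triggers via adversarial training with anomaly-aware discriminators, which this sketch does not attempt to reproduce. The `target_label` and `train_set` names in the usage comment are hypothetical.

```python
# Minimal sketch of a clean-label graph backdoor with a FIXED trigger subgraph,
# for illustration only (the paper's triggers are learned, not fixed).
import random

import networkx as nx


def make_trigger(num_nodes=4):
    """A small, fixed trigger subgraph (here: a complete graph on `num_nodes` nodes)."""
    return nx.complete_graph(num_nodes)


def poison_graph(graph: nx.Graph, trigger: nx.Graph, attach_edges=2, seed=None):
    """Attach the trigger to a copy of `graph` without changing its label (clean-label)."""
    rng = random.Random(seed)
    poisoned = nx.disjoint_union(graph, trigger)  # trigger nodes get fresh ids
    host_nodes = list(range(graph.number_of_nodes()))
    trig_nodes = list(range(graph.number_of_nodes(), poisoned.number_of_nodes()))
    for _ in range(attach_edges):  # sparsely wire the trigger into the host graph
        poisoned.add_edge(rng.choice(host_nodes), rng.choice(trig_nodes))
    return poisoned


# Hypothetical usage: poison only training graphs that already carry the target
# label, so labels stay "clean" while the model learns the trigger-label link.
# poisoned_train = [poison_graph(g, make_trigger(), seed=i) if y == target_label else g
#                   for i, (g, y) in enumerate(train_set)]
```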