Machine learning research is developing a deeper understanding of the risks and vulnerabilities associated with backdoor attacks. Recent work has focused on more sophisticated and stealthy attack methods, including one-to-N backdoor frameworks, weak triggers, and multi-modal prompt tuning. These advances have significant implications for the security of deep learning systems, particularly in safety-critical domains such as autonomous driving and robotics. Noteworthy papers in this area include: "One-to-N Backdoor Attack in 3D Point Cloud via Spherical Trigger", which establishes a theoretical foundation for one-to-N backdoor attacks in 3D vision; "BackWeak", which proposes a simple and efficient backdoor attack paradigm based on weak triggers and fine-tuning; and "The 'Sure' Trap", which introduces a compliance-only backdoor for analyzing the security risks of large language models.
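To make the one-to-N idea concrete, the sketch below shows a toy version of a spherical trigger for point clouds: a small sphere of points is appended to a clean cloud, and a single trigger parameter (here the sphere's radius) selects which of N target classes the backdoor activates. This is a hypothetical illustration under assumed parameters (`base_radius`, `center`, the radius-to-class encoding), not the actual construction from the cited paper.

```python
import numpy as np

def add_spherical_trigger(points, target_class, n_classes=4,
                          n_trigger=32, base_radius=0.05,
                          center=(0.9, 0.9, 0.9), seed=0):
    """Append a small sphere of trigger points to a point cloud.

    Toy one-to-N encoding (assumed, for illustration): the sphere's
    radius is scaled by the target class, so one trigger shape can
    address N different backdoor targets.
    """
    rng = np.random.default_rng(seed)
    radius = base_radius * (1 + target_class / n_classes)
    # Sample directions uniformly on the unit sphere, then scale.
    v = rng.normal(size=(n_trigger, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    sphere = np.asarray(center) + radius * v
    return np.vstack([points, sphere])

# Poison a random 1024-point cloud so the trigger encodes class 2.
clean = np.random.default_rng(1).uniform(-1.0, 1.0, size=(1024, 3))
poisoned = add_spherical_trigger(clean, target_class=2)
```

In a poisoning pipeline, a fraction of training clouds would be augmented this way and relabeled to the class the radius encodes; the stealth of such triggers depends on the sphere being small relative to the object's extent.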