The field of deep learning security is evolving rapidly, with growing focus on identifying and mitigating emerging threats. Recent research has highlighted the vulnerability of deep learning models to backdoor attacks, which can be embedded through a variety of mechanisms, including model quantization and expert routing. Such attacks can degrade model performance and enable malicious behaviors. To address these risks, researchers are developing defense frameworks and techniques such as backdoor aggregation and stego attack detection.

Three recent papers illustrate these trends: QuRA, a novel backdoor attack that exploits model quantization; SASER, a stego attack on open-source large language models; and BadSwitch, a backdoor framework for Mixture-of-Experts Transformers that reports high attack success rates and resilience against existing defense mechanisms. These advances underscore the need for continued research into deep learning security and the development of robust countermeasures. The toy sketches below illustrate the general mechanism behind each of these attack classes.
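To make the quantization vector concrete, here is a minimal, hypothetical sketch of how a backdoor can be conditioned on quantization: weights are placed just below the int8 rounding threshold so that the model's decision flips only after quantization, while full-precision behavior stays benign. This illustrates the general attack class, not the QuRA algorithm itself; all values and names below are illustrative assumptions.

```python
import numpy as np

# Toy quantization-conditioned backdoor (hypothetical construction for
# intuition only; NOT the QuRA method). A linear "model" behaves the same in
# float32 and int8 on benign input, but its decision flips after quantization
# on a trigger input, because many small weights sit just below the rounding
# threshold and collapse to zero when quantized.

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization followed by dequantization."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q.astype(np.float32) * scale

n_small = 100
scale = 1.0 / 127.0                      # fixed by the single large weight below
w = np.full(n_small + 1, 0.49 * scale, dtype=np.float32)  # each rounds to zero
w[0] = 1.0                               # large weight sets the quant scale

w_q = quantize_int8(w)

benign = np.zeros_like(w); benign[0] = 1.0     # activates only the large weight
trigger = np.ones_like(w); trigger[0] = 0.0    # activates only the small weights

for name, x in [("benign", benign), ("trigger", trigger)]:
    fp, q = float(w @ x), float(w_q @ x)
    print(f"{name:7s}  float32 logit={fp:+.3f} -> class {int(fp > 0)}, "
          f"int8 logit={q:+.3f} -> class {int(q > 0)}")
```

On the benign input, both models agree; on the trigger input, the float32 model predicts class 1 while the quantized model predicts class 0. In a real attack, the flipped behavior would be an attacker-chosen target rather than a toy class label.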
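A stego attack on model weights can be sketched in a similarly minimal way: a payload is hidden in the least-significant mantissa bits of float32 weights, perturbing each value negligibly so the model's behavior is essentially unchanged. This is a generic illustration of weight steganography, not the SASER method; the helper names are hypothetical.

```python
import numpy as np

# Toy weight steganography (generic illustration; NOT the SASER method).
# Each float32 weight carries one payload bit in its lowest mantissa bit,
# changing the value by at most one unit in the last place (~1e-7 relative).

def embed(weights, payload: bytes):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    assert bits.size <= weights.size, "payload too large for carrier tensor"
    raw = weights.view(np.uint32).copy()
    raw[:bits.size] = (raw[:bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32)

def extract(weights, n_bytes: int):
    bits = (weights.view(np.uint32)[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # stand-in for a weight tensor
secret = b"exfiltrated payload"

w_stego = embed(w, secret)
print("recovered:", extract(w_stego, len(secret)))
print("max weight change:", float(np.abs(w_stego - w).max()))
```

The maximum per-weight change is on the order of 1e-7, which is why detection techniques like the stego attack detection mentioned above must look at bit-level statistics rather than model accuracy.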
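Finally, an expert-routing backdoor can be sketched as a gating network biased so that a rare trigger feature routes inputs to a compromised expert, while benign inputs almost never reach it. Again, this is a toy illustration of the attack class rather than the BadSwitch framework; every component below is a hypothetical stand-in.

```python
import numpy as np

# Toy expert-routing backdoor in a Mixture-of-Experts layer (hypothetical
# illustration; NOT the BadSwitch framework). The attacker aligns one gating
# row with a rare trigger direction and suppresses it with a negative bias,
# so the compromised expert fires only when the trigger feature is present.

rng = np.random.default_rng(0)
d, n_experts = 8, 4

experts = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]
experts[3] = -np.eye(d)            # stand-in for arbitrary malicious behavior

trigger_dir = np.zeros(d); trigger_dir[-1] = 1.0
W_gate = rng.standard_normal((n_experts, d)) * 0.1
W_gate[3] = 10.0 * trigger_dir     # large logit only when the trigger is present
b_gate = np.zeros(n_experts); b_gate[3] = -5.0  # suppress expert 3 otherwise

def moe_forward(x):
    logits = W_gate @ x + b_gate
    expert = int(np.argmax(logits))            # top-1 routing
    return expert, experts[expert] @ x

benign = rng.standard_normal(d); benign[-1] = 0.0
trigger = benign.copy(); trigger[-1] = 5.0     # embed the trigger feature

for name, x in [("benign", benign), ("trigger", trigger)]:
    e, _ = moe_forward(x)
    print(f"{name:7s} -> routed to expert {e}" + (" (malicious)" if e == 3 else ""))
```

Because the gating change is confined to one expert's logit, overall routing statistics on clean data look normal, which is consistent with the reported difficulty of defending against routing-level backdoors.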