The field of artificial intelligence is moving toward more secure and transparent systems. Researchers are developing methods to detect and prevent backdoor attacks in large language models, and are designing more efficient and scalable blockchain execution systems. There is also growing interest in verifiable and explainable AI models, with an emphasis on ensuring the integrity and reliability of AI decision-making.

Noteworthy papers in this area include PoTS, which introduces a verification protocol for detecting backdoor attacks in large language models, and NEMO, a blockchain execution engine that improves performance under high contention. Other notable works include Nondeterminism-Aware Optimistic Verification for floating-point neural networks, Verifiable Fine-Tuning for LLMs, and TRUST, a decentralized framework for auditing large language model reasoning.
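To give a sense of what nondeterminism-aware verification of floating-point neural networks involves, the sketch below shows a minimal tolerance-based re-execution check: a verifier recomputes a claimed output and accepts it if it agrees within a numerical tolerance rather than requiring bit-exact equality. The network, tolerance values, and function names are illustrative assumptions for this sketch, not the method from the cited paper.

```python
# Hypothetical sketch: tolerance-based re-execution check, inspired by the idea of
# nondeterminism-aware verification of floating-point neural network outputs.
# The tiny network, tolerances, and acceptance rule are illustrative assumptions only.
import numpy as np

def forward(x, w1, w2):
    """Tiny two-layer MLP forward pass with a ReLU hidden layer."""
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def verify_claim(x, w1, w2, claimed_output, rel_tol=1e-5, abs_tol=1e-6):
    """Re-execute the computation and accept the claim if it agrees within tolerance.

    Floating-point nondeterminism (e.g. different summation orders across hardware)
    makes bit-exact equality too strict, so a bounded deviation is tolerated instead.
    """
    recomputed = forward(x, w1, w2)
    return np.allclose(recomputed, claimed_output, rtol=rel_tol, atol=abs_tol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 16))
    w1 = rng.normal(size=(16, 32))
    w2 = rng.normal(size=(32, 4))

    # Prover's claimed output, perturbed slightly to mimic hardware-level nondeterminism.
    claimed = forward(x, w1, w2) + rng.normal(scale=1e-7, size=(1, 4))

    print("claim accepted:", verify_claim(x, w1, w2, claimed))
```

In an optimistic setting, a check of this kind would typically run only when a result is challenged; the tolerance then bounds how much floating-point drift across hardware is treated as honest variation rather than a fraudulent claim.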