The field of artificial intelligence is moving toward more robust and generalizable models, with a particular focus on artificial general intelligence (AGI) and large language models (LLMs). Researchers are exploring new evaluation methods for AGI, such as homeostatic accounts and coherence-based measures, to assess whether models exhibit genuinely general intelligence. At the same time, there is growing concern about the safety and security of LLMs, with studies investigating vulnerabilities such as cache corruption and subliminal corruption. To address these issues, researchers are proposing new techniques, including tail-optimized caching and corrigibility transformation, to improve the performance, reliability, and safety of LLM deployments.

Noteworthy papers in this area include:

- Tail-Optimized Caching for LLM Inference, which proposes a simple yet effective method for reducing tail latency in LLM inference (see the caching sketch after this list).
- Corrigibility Transformation: Constructing Goals That Accept Updates, which introduces a formal definition of corrigibility and a transformation for constructing corrigible goals without sacrificing performance (see the utility sketch below).
- Can Transformer Memory Be Corrupted?, which identifies cache integrity as a critical vulnerability in current LLM deployments (see the integrity-check sketch below).
- Subliminal Corruption: Mechanisms, Thresholds, and Interpretability, which investigates the dynamics of subliminal corruption and its implications for AI safety.
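To make the tail-latency idea concrete, the following is a minimal sketch, not the paper's algorithm, of a prefix-cache eviction policy that weighs each entry by the estimated cost of recomputing it, so that evictions likely to push a future request into the latency tail are deferred. The class names, the cost model, and the recency term are all assumptions made for illustration.

```python
# Illustrative sketch only: eviction favors entries that are cheap to
# recompute, protecting expensive prefixes whose misses would hit the tail.
from dataclasses import dataclass


@dataclass
class CacheEntry:
    key: str
    kv_state: object        # cached KV tensors, opaque for this sketch
    recompute_ms: float     # estimated cost to rebuild the prefix on a miss
    last_access: int        # logical clock of the most recent hit


class TailAwareCache:
    """Evicts the entry with the lowest (recompute cost + recency) score."""

    def __init__(self, capacity: int, recency_weight: float = 1.0):
        self.capacity = capacity
        self.recency_weight = recency_weight
        self.clock = 0
        self.entries: dict[str, CacheEntry] = {}

    def _score(self, entry: CacheEntry) -> float:
        # Cheap-to-recompute, cold entries score lowest and are evicted first;
        # expensive prefixes stay resident to protect the latency tail.
        return entry.recompute_ms + self.recency_weight * entry.last_access

    def get(self, key: str):
        self.clock += 1
        entry = self.entries.get(key)
        if entry is None:
            return None                    # miss: caller recomputes the prefix
        entry.last_access = self.clock     # refresh recency on a hit
        return entry.kv_state

    def put(self, key: str, kv_state: object, recompute_ms: float) -> None:
        self.clock += 1
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries.values(), key=self._score)
            del self.entries[victim.key]
        self.entries[key] = CacheEntry(key, kv_state, recompute_ms, self.clock)
```

The design choice being illustrated is that a cache tuned for tail latency optimizes which misses it avoids, not just how many, which is what distinguishes it from a plain LRU policy.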
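For intuition about what "a goal that accepts updates" can mean, here is a toy sketch using the classic utility-indifference idea: a compensating term is added so the agent gains nothing by causing or resisting an update. This illustrates the general concept only; it is not the transformation defined in the paper, and every name below is hypothetical.

```python
# Toy illustration of utility indifference, not the paper's construction:
# wrap a base utility so that accepting a requested update is never worse
# for the agent than resisting it.
from typing import Callable

State = dict      # toy world state, e.g. {"update_requested": True, ...}
Action = str


def corrigible_utility(
    base_utility: Callable[[State, Action], float],
    expected_if_updated: Callable[[State], float],
    expected_if_not_updated: Callable[[State], float],
) -> Callable[[State, Action], float]:
    """Return a wrapped utility that is indifferent to whether an update happens."""

    def wrapped(state: State, action: Action) -> float:
        value = base_utility(state, action)
        if state.get("update_requested"):
            # Compensate for any value the agent would forgo by accepting the
            # update, removing the incentive to block (or to force) it.
            value += expected_if_not_updated(state) - expected_if_updated(state)
        return value

    return wrapped
```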
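On the cache-integrity point, one standard way to detect the kind of corruption the paper highlights (again a sketch under assumed storage and key handling, not the paper's method) is to tag each serialized KV-cache block with an HMAC when it is stored and verify the tag before reuse, so a corrupted or tampered entry is recomputed rather than fed back into the model.

```python
# Minimal integrity-check sketch: store (payload, tag) pairs and refuse to
# serve any block whose HMAC no longer verifies.
import hashlib
import hmac
import os
from typing import Optional


class VerifiedKVStore:
    def __init__(self, secret_key: Optional[bytes] = None):
        self.key = secret_key or os.urandom(32)
        self.blocks: dict[str, tuple[bytes, bytes]] = {}  # id -> (payload, tag)

    def _tag(self, cache_id: str, payload: bytes) -> bytes:
        return hmac.new(self.key, cache_id.encode() + payload, hashlib.sha256).digest()

    def store(self, cache_id: str, payload: bytes) -> None:
        self.blocks[cache_id] = (payload, self._tag(cache_id, payload))

    def load(self, cache_id: str) -> Optional[bytes]:
        item = self.blocks.get(cache_id)
        if item is None:
            return None
        payload, tag = item
        if not hmac.compare_digest(tag, self._tag(cache_id, payload)):
            # Integrity check failed: drop the entry and force recomputation
            # instead of injecting corrupted attention state into the model.
            del self.blocks[cache_id]
            return None
        return payload
```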