The field of large language models (LLMs) is increasingly focused on security, reliability, and efficiency in model architectures and applications. Researchers are working to mitigate the risks of unvalidated trust between processing stages, to ensure the integrity of model outputs, and to prevent model weight exfiltration. Notable developments include zero-trust architectural principles, fault-aware verification mechanisms, and game-theoretic defenses against dishonest manipulation by service providers, all aimed at making LLMs more robust and trustworthy across applications.

Noteworthy papers include the following. Unvalidated Trust presents a mechanism-centered taxonomy of risk patterns in commercial LLMs and recommends zero-trust architectural principles. Sherlock introduces a counterfactual-analysis approach that selectively verifies agentic workflow steps to reduce latency overhead. Pay for The Second-Best Service proposes a game-theoretic mechanism that prevents dishonest manipulation by LLM providers. Keys in the Weights introduces a decoder-binding property for Transformer autoencoders, enabling latent-based authentication and access control. Janus leverages incremental computation for efficient DNS verification. Verifying LLM Inference investigates verification frameworks that defend against model weight exfiltration. Fraud-Proof Revenue Division explores revenue-division mechanisms that inherently disincentivize manipulation on subscription platforms.
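
To make the zero-trust theme concrete, the minimal Python sketch below shows one way a pipeline could refuse to pass unvalidated output between processing stages. It is an illustration only: the `StageResult` type, the `validate_stage_output` helper, and the schema check are hypothetical and are not taken from any of the papers above.

```python
import json
from dataclasses import dataclass


@dataclass
class StageResult:
    """Output of one pipeline stage, carried together with a validation flag."""
    payload: dict
    validated: bool = False


def validate_stage_output(raw_text: str, allowed_keys: set[str]) -> StageResult:
    """Treat upstream model output as untrusted: parse it, reject unexpected
    structure, and only then mark it as safe for the next stage."""
    try:
        payload = json.loads(raw_text)
    except json.JSONDecodeError:
        return StageResult(payload={}, validated=False)
    if not isinstance(payload, dict) or set(payload) - allowed_keys:
        return StageResult(payload={}, validated=False)
    return StageResult(payload=payload, validated=True)


def run_next_stage(result: StageResult) -> str:
    """The downstream stage refuses to act on anything that was not validated."""
    if not result.validated:
        raise ValueError("refusing to process unvalidated upstream output")
    return f"processing {sorted(result.payload)}"


# A well-formed upstream response passes; free-form text would be rejected.
ok = validate_stage_output('{"action": "summarize", "target": "doc-7"}',
                           allowed_keys={"action", "target"})
print(run_next_stage(ok))
```

The point of the sketch is simply that each stage enforces its own contract rather than trusting whatever the previous stage emitted, which is the architectural stance the surveyed work argues for.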
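
Defenses against weight exfiltration likewise depend on being able to check, cheaply and repeatedly, that the weights in use match a previously committed fingerprint. The sketch below shows only that generic fingerprinting step over an assumed dict of NumPy tensors; it is not the verification framework studied in Verifying LLM Inference.

```python
import hashlib

import numpy as np


def weight_fingerprint(weights: dict[str, np.ndarray]) -> str:
    """Hash all weight tensors (in a fixed key order) into one hex digest."""
    digest = hashlib.sha256()
    for name in sorted(weights):
        digest.update(name.encode())
        digest.update(np.ascontiguousarray(weights[name]).tobytes())
    return digest.hexdigest()


# A committed fingerprint can later be re-checked against the deployed weights.
weights = {"layer0.w": np.ones((4, 4), dtype=np.float32),
           "layer0.b": np.zeros(4, dtype=np.float32)}
committed = weight_fingerprint(weights)
assert weight_fingerprint(weights) == committed  # unchanged weights verify
```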