The field of large language models (LLMs) is moving toward more efficient and cost-effective deployment, with a focus on dynamic model selection, test-time scaling, and pruning. Researchers are exploring methods that balance output quality against inference cost, such as cross-attention routing, latency-aware test-time compute allocation, and reasoning-aware compression; these advances could substantially improve the scalability and practicality of LLMs in real-world applications. Noteworthy papers include One Head, Many Models, which introduces a unified routing framework for dynamic LLM selection, and A1: Asynchronous Test-Time Scaling, which proposes an adaptive inference framework with statistical guarantees for scalable LLM inference. In addition, Reasoning Models Can be Accurately Pruned Via Chain-of-Thought Reconstruction and EconProver: Towards More Economical Test-Time Scaling for Automated Theorem Proving report notable gains in both efficiency and performance.
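To make the dynamic-selection idea concrete, the sketch below shows one simple way a router can trade predicted answer quality against per-model cost when choosing which LLM should serve a query. This is a minimal illustrative sketch, not the method from One Head, Many Models or any other cited paper: the model names, cost figures, placeholder embeddings, and the utility formula are all assumptions made for this example.

```python
import numpy as np

# Hypothetical candidate pool: (model name, relative cost per query).
# Both the names and the costs are illustrative assumptions.
CANDIDATES = [
    ("small-llm", 1.0),
    ("medium-llm", 4.0),
    ("large-llm", 12.0),
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in embedding: deterministic random vector keyed on the text.
    # A real router would use a trained encoder here.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def predicted_quality(query_vec: np.ndarray, model_vec: np.ndarray) -> float:
    # Attention-style score: scaled dot product squashed to (0, 1).
    score = query_vec @ model_vec / np.sqrt(len(query_vec))
    return float(1.0 / (1.0 + np.exp(-score)))

def route(query: str, cost_weight: float = 0.05) -> str:
    """Pick the candidate with the best quality-minus-cost utility."""
    q = embed(query)
    best_name, best_utility = None, -np.inf
    for name, cost in CANDIDATES:
        utility = predicted_quality(q, embed(name)) - cost_weight * cost
        if utility > best_utility:
            best_name, best_utility = name, utility
    return best_name

if __name__ == "__main__":
    # Raising cost_weight biases the router toward cheaper models.
    print(route("Summarize this paragraph in one sentence."))
    print(route("Prove that the sum of two even integers is even.", cost_weight=0.01))
```

In a deployed system the quality predictor would be learned from observed outcomes (for example, which model answered similar queries correctly), and the cost term could reflect latency or dollar cost rather than a fixed per-model constant.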