The field of large language models (LLMs) is moving toward more efficient deployment strategies that balance performance and cost. Recent research has explored routing techniques, load balancing, and optimization methods to improve the scalability and affordability of LLM serving. In particular, new approaches address the challenge of selecting the best LLM for a given task, reducing inference cost while preserving response quality. Noteworthy papers in this area include:

- SkyLB, a load balancer that achieves higher throughput and lower latency than existing load balancers.
- RadialRouter, a framework for LLM routing that outperforms existing routing methods.
- Cascadia, a cascade serving framework that co-optimizes system deployment and routing strategy for fast, quality-preserving LLM serving.
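The core routing idea these papers share — pick the cheapest model that still meets a quality bar for the request — can be sketched as follows. This is a minimal illustration, not any paper's actual method; the model names, prices, and quality scores are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD; illustrative numbers only
    quality_score: float       # hypothetical benchmark score in [0, 1]

def route(models: list[Model], est_tokens_k: float, min_quality: float) -> Model:
    """Return the cheapest model whose quality meets the threshold.

    Falls back to the highest-quality model when no candidate qualifies,
    so a request is never dropped outright.
    """
    eligible = [m for m in models if m.quality_score >= min_quality]
    if not eligible:
        return max(models, key=lambda m: m.quality_score)
    return min(eligible, key=lambda m: m.cost_per_1k_tokens * est_tokens_k)

models = [
    Model("small-llm", 0.0005, 0.62),
    Model("mid-llm",   0.0030, 0.78),
    Model("large-llm", 0.0300, 0.91),
]

# A moderate quality bar routes to the mid-tier model; a very strict
# bar falls back to the strongest one.
print(route(models, est_tokens_k=2.0, min_quality=0.75).name)  # mid-llm
print(route(models, est_tokens_k=2.0, min_quality=0.99).name)  # large-llm
```

Real routers (e.g. the learned approaches surveyed above) replace the static `quality_score` with a per-query quality prediction, but the cost-quality trade-off they optimize has this same shape.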