The field of large language models is moving toward more efficient and cost-effective solutions. Recent work concentrates on reducing the memory footprint and computational demands of these models so that they can run on local machines and edge devices. Architectural innovations, such as hybrid designs and sparse structures, are being explored to preserve performance while lowering cost, and inference-side techniques such as quantization, caching, and compression are being developed to speed up serving.

Notable papers in this area include A3D-MoE, which proposes a 3D heterogeneous integration system to increase memory bandwidth and reduce energy consumption; SmallThinker, which introduces a family of efficient large language models natively designed for local deployment; HCAttention, which enables extreme KV cache compression via heterogeneous attention computing; and Falcon-H1, which presents a family of hybrid-head language models aimed at redefining the efficiency-performance trade-off.
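
To make the quantization idea mentioned above concrete, the following is a minimal sketch of per-channel symmetric int8 weight quantization in NumPy. It illustrates the general technique only; the function names and the per-row scaling scheme are illustrative assumptions and are not drawn from any of the papers cited here.

```python
# Minimal illustration of per-channel symmetric int8 weight quantization.
# Storing weights as int8 plus one float scale per row cuts memory roughly
# 4x relative to float32, at a small reconstruction-accuracy cost.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Quantize a 2D float weight matrix to int8 with one scale per output row."""
    # Each row's scale maps its largest absolute value to 127.
    scales = np.abs(weights).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid division by zero
    q = np.clip(np.round(weights / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 matrix from int8 values and per-row scales."""
    return q.astype(np.float32) * scales

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)
    q, s = quantize_int8(w)
    w_hat = dequantize(q, s)
    print("max reconstruction error:", np.abs(w - w_hat).max())
```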