The field of large language models is shifting toward specialization, with growing emphasis on integrating domain-specific knowledge into these models. The shift is driven by the need for accurate, reliable performance in specialized fields such as construction, healthcare, and finance. Recent work highlights domain-native designs, sparse computation, and quantization as key levers for improving the efficiency and performance of large language models, while multimodal capabilities and specialized benchmarks are increasingly used to evaluate and refine them.

Noteworthy papers include CEQuest, which introduces a benchmark dataset for evaluating large language models on construction estimation, and PosterGen, which proposes a multi-agent framework for generating aesthetically pleasing posters from research papers. Active Domain Knowledge Acquisition and CAMB are also notable, offering new approaches to enhancing domain-specific large language models and evaluating their performance in specialized domains.
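Quantization, one of the efficiency techniques mentioned above, compresses model weights into low-bit integers plus a scale factor. As a rough illustration (not tied to any paper cited here), the sketch below applies symmetric per-tensor int8 post-training quantization to a weight array; the function names and the NumPy-based setup are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 array from int8 codes and the scale."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding bounds the per-element reconstruction error by scale / 2.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Storing `q` (1 byte per weight) instead of `w` (4 bytes) cuts memory roughly 4x; real systems typically quantize per-channel or per-group for better accuracy.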