Research on large language models (LLMs) is increasingly focused on efficiency, with the aim of reducing computational and memory costs while preserving model quality. Recent work shows that pruning, quantization, and related compression techniques can deliver substantial savings with little loss in accuracy. One notable trend is profiling-guided compression, which accounts for the architectural and runtime heterogeneity of a model when designing the compression strategy. Another active direction is the development of new pruning methods that shrink LLMs while maintaining accuracy; a minimal sketch of the basic idea follows this paragraph. Noteworthy papers include Pruning Weights but Not Truth, which proposes a pruning method that preserves the features critical for lie detection, and Set Block Decoding, which introduces a new paradigm for accelerating language model inference. In addition, ProfilingAgent and COMPACT demonstrate the effectiveness of profiling-guided and common-token-optimized pruning methods, respectively, reporting state-of-the-art results in LLM optimization.
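To make the notion of pruning concrete, the sketch below shows generic unstructured magnitude pruning in PyTorch: the smallest-magnitude weights of a layer are zeroed out so that only a target fraction remains. This is a minimal illustration of the general technique, not the method of any of the papers named above; the function name and the 50% sparsity level are illustrative choices.

```python
import torch
import torch.nn as nn

def magnitude_prune_(module: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of `module` in place.

    `sparsity` is the fraction of weights to remove (e.g. 0.5 removes half).
    Illustrative sketch of unstructured magnitude pruning, not a specific
    published method.
    """
    with torch.no_grad():
        w = module.weight
        k = int(sparsity * w.numel())
        if k == 0:
            return
        # Threshold = magnitude of the k-th smallest weight; everything at or
        # below it is set to zero.
        threshold = w.abs().flatten().kthvalue(k).values
        mask = w.abs() > threshold
        w.mul_(mask)

# Example: prune 50% of a toy layer's weights and check the remaining density.
layer = nn.Linear(1024, 1024)
magnitude_prune_(layer, sparsity=0.5)
density = layer.weight.count_nonzero().item() / layer.weight.numel()
print(f"remaining nonzero fraction: {density:.2f}")
```

In practice, such pruning is usually followed by a short fine-tuning or calibration pass to recover accuracy, and the published methods above differ precisely in how they decide which weights (or which layers and tokens) can be removed safely.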