The field of large language models (LLMs) is moving toward more efficient deployment and inference, with an emphasis on reducing memory footprint and computational cost. Researchers are pursuing techniques such as low-bit quantization, activation sparsity, and mixed-precision quantization, and new frameworks and algorithms now enable native 4-bit activation quantization, integerized matrix multiplication, and rank-aware sparse inference. Together, these advances make LLMs better suited to deployment on edge devices and in other resource-constrained environments.

Noteworthy papers include BitNet v2, which introduces a framework for native 4-bit activation quantization, and R-Sparse, which proposes a training-free activation sparsity approach for advanced LLMs. FineQ is also notable for its software-hardware co-design for low-bit fine-grained mixed-precision quantization of LLMs, achieving higher model accuracy and energy efficiency.
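
To make the first of these ideas concrete, the sketch below shows a generic per-token absmax quantizer that maps floating-point activations to signed 4-bit integers and back. This is only an illustration of low-bit activation quantization in general, not the specific scheme used by BitNet v2 or the other papers above; the function and variable names are our own.

```python
import torch


def quantize_activations_int4(x: torch.Tensor, eps: float = 1e-5):
    """Per-token symmetric (absmax) quantization of activations to 4-bit integers.

    Illustrative only: this is a generic low-bit activation quantizer,
    not the method proposed in BitNet v2.
    """
    qmax = 7  # signed 4-bit range is [-8, 7]
    # One scale per token (per row of the last dimension), from the max absolute value.
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=eps) / qmax
    # Round to the nearest integer and clamp into the 4-bit range.
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale  # int4 values stored in int8 containers


def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate floating-point tensor from quantized values and scales."""
    return q.to(scale.dtype) * scale


if __name__ == "__main__":
    x = torch.randn(2, 8)  # toy activations: (tokens, hidden_dim)
    q, s = quantize_activations_int4(x)
    x_hat = dequantize(q, s)
    print("max abs error:", (x - x_hat).abs().max().item())
```

In practice, schemes along these lines pair low-bit activations with low-bit or integerized weights so that the matrix multiplications themselves can run in integer arithmetic, which is where the memory and energy savings cited by this line of work come from.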