The fields of natural language processing and computer vision are moving toward more efficient and scalable models. Researchers are developing compression techniques that reduce the computational cost and memory footprint of large language models and vision-language models, driven by the need for real-time applicability and scalability. The general direction is to leverage singular value decomposition (SVD) and other low-rank approximation methods to achieve significant reductions in memory usage and computational cost. Noteworthy papers in this area include QSVD, which proposes an efficient low-rank approximation for unified query-key-value weight compression in low-precision vision-language models, reducing both memory usage and computational cost; CPSVD, which enhances large language model compression via column-preserving singular value decomposition, achieving lower perplexity and higher accuracy on zero-shot tasks; and ARA, which proposes an adaptive rank allocation method for efficient large language model SVD compression, achieving state-of-the-art performance on the LLaMA2-7B model at an 80% compression ratio.
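To make the shared core of these methods concrete, here is a minimal NumPy sketch of SVD-based low-rank weight compression. It is illustrative only: the matrix `W` and rank `k` are arbitrary placeholders, and the surveyed papers build substantially on this baseline (QSVD adds low-precision quantization, CPSVD preserves selected columns exactly, and ARA allocates ranks adaptively per layer).

```python
import numpy as np

# Hypothetical dense weight matrix standing in for a model layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

# Full SVD, then truncate to rank k: W ~= U_k @ diag(S_k) @ Vt_k.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 64
W_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Stored parameters drop from d*d to roughly 2*d*k (plus k singular values).
full_params = W.size
low_rank_params = U[:, :k].size + k + Vt[:k, :].size
print(f"compression ratio: {1 - low_rank_params / full_params:.2%}")

# Relative Frobenius-norm error is governed by the discarded singular values.
err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
print(f"relative error: {err:.3f}")
```

In practice the two thin factors are stored (and applied as two successive matrix multiplies) instead of reconstructing `W_k`, which is where the memory and compute savings come from.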