The field of large language models is moving toward more efficient and effective fine-tuning. Recent work focuses on improving the expressiveness and generalization of low-rank adaptation (LoRA), addressing its limitations with structured constructions such as the Khatri-Rao product and with Bayesian hybrid approaches; these innovations have delivered notable performance gains and better adaptability in dynamic scenarios.

Notable papers include KRAdapter, which leverages the Khatri-Rao product to produce weight updates with high effective rank, and EFlat-LoRA, which seeks flat minima for LoRA to improve generalization. MoKA, a mixture of Kronecker adapters, has shown promising results on instruction-tuning and commonsense reasoning tasks. Cross-LoRA, a data-free transfer framework, enables LoRA modules to be moved between heterogeneous base models without any additional training data.
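To make the rank contrast concrete, below is a minimal sketch comparing a standard LoRA update, whose rank is capped by the adapter rank r, with an update built from a row-wise Khatri-Rao (face-splitting) product of two thin factors, which generically reaches much higher numerical rank at a comparable parameter count. The factor shapes and the `row_khatri_rao` parameterization are illustrative assumptions for this sketch, not the exact construction from the KRAdapter paper.

```python
import torch

def row_khatri_rao(a, b):
    """Row-wise Khatri-Rao (face-splitting) product.
    a: (d, p), b: (d, q) -> (d, p*q), where row i is kron(a[i], b[i])."""
    d, p = a.shape
    _, q = b.shape
    return (a.unsqueeze(2) * b.unsqueeze(1)).reshape(d, p * q)

torch.manual_seed(0)
d_out, d_in, r = 256, 256, 8

# Standard LoRA update: rank of B @ A can never exceed r.
B = torch.randn(d_out, r)
A = torch.randn(r, d_in)
delta_lora = B @ A

# Khatri-Rao-style update (illustrative): two thin factors whose
# row-wise Khatri-Rao product matches the weight shape (p * q == d_in).
p, q = 16, 16
U = torch.randn(d_out, p)
V = torch.randn(d_out, q)
delta_kr = row_khatri_rao(U, V)

print("params  LoRA:", B.numel() + A.numel(), "| KR:", U.numel() + V.numel())
print("rank    LoRA:", torch.linalg.matrix_rank(delta_lora).item())  # <= r
print("rank    KR:  ", torch.linalg.matrix_rank(delta_kr).item())    # generically close to min(d_out, d_in)
```

The sketch illustrates the linear-algebra property the summary points to: a product of thin factors can still yield an update whose numerical rank approaches min(d_out, d_in), whereas the standard LoRA product B @ A cannot exceed r.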