The field of large language models is moving toward more efficient fine-tuning, with a focus on parameter-efficient techniques that adapt models to specific tasks without updating all of their parameters. Recent work shows that low-rank adaptation methods such as LoRA can be improved by incorporating pruning, orthogonal natural gradients, and Riemannian optimization, yielding better downstream performance at lower training cost. Noteworthy papers in this area include DropLoRA, which introduces a pruning-based approach to overcome the limitations of traditional LoRA; ONG, which combines orthogonal gradient descent with natural gradients to improve convergence; and Bi-LoRA, a sharpness-aware minimization method that can be integrated with LoRA for efficient fine-tuning. Together, these approaches point toward more efficient and effective fine-tuning of large language models.
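
To make the shared foundation concrete, below is a minimal sketch of the low-rank update that all of these methods build on, written in PyTorch. This is an illustration under stated assumptions, not code from any of the papers: the class name `LoRALinear` and the hyperparameters `r` (rank) and `alpha` (scaling) follow common LoRA conventions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update.

    Forward pass computes W x + (alpha / r) * B A x, where W is frozen
    and only the rank-r factors A and B are trained. (Illustrative sketch,
    not an implementation from DropLoRA, ONG, or Bi-LoRA.)
    """

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        # A: down-projection (small random init); B: up-projection (zero init),
        # so the adapted model starts exactly at the pretrained weights.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

# Usage: only A and B receive gradients during fine-tuning.
layer = LoRALinear(768, 768, r=8)
y = layer(torch.randn(4, 768))
```

Because only the r * (in_features + out_features) adapter parameters are trained, the update is cheap to store and optimize; the low-rank factors A and B are also the component that the refinements above act on, whether by pruning the rank subspace (as in DropLoRA) or by reshaping the gradient applied to it (as in ONG and Bi-LoRA).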