The field of low-rank adaptation is moving toward more efficient and effective fine-tuning methods for large language models and vision transformers. Researchers are proposing novel approaches to challenges such as overparametrization, initialization strategies, and privacy preservation. These methods aim to reduce the number of trainable parameters, speed up convergence, and preserve accuracy. Notably, techniques such as differential privacy, domain adaptation, and approximately orthogonal fine-tuning are being explored to improve the performance and generalization of fine-tuned models.
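To make the parameter savings concrete, here is a minimal sketch of the standard LoRA formulation, W' = W + (alpha/r)·BA, where the pre-trained weight W is frozen and only the two low-rank factors A and B are trained. The shapes and hyperparameters below are illustrative assumptions, not taken from any of the papers discussed.

```python
import numpy as np

# Illustrative dimensions (hypothetical, not from any specific paper)
d_in, d_out, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus the scaled low-rank update. Because B starts at
    # zero, the adapted model initially matches the pre-trained one.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# Trainable-parameter comparison: full fine-tuning vs. LoRA
full_params = W.size            # 512 * 512 = 262144
lora_params = A.size + B.size   # 8 * 512 + 512 * 8 = 8192
print(full_params, lora_params)
```

With rank r = 8 the adapter trains roughly 3% as many parameters as full fine-tuning of this one weight matrix, which is the efficiency argument underlying the methods surveyed here.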
Noteworthy papers in this area include RiemannLoRA, which proposes a unified Riemannian framework for ambiguity-free LoRA optimization, and AirLLM, which develops a hierarchical diffusion-policy framework for communication-aware LoRA adaptation. FedASK introduces a federated LoRA framework that enables effective updating of both low-rank adapter matrices under robust differential privacy. Finally, the Approximately Orthogonal Fine-Tuning strategy aligns the properties of the fine-tuned matrices with those of the pre-trained backbone, leading to improved generalization.