Advances in Low-Rank Adaptation for Efficient Fine-Tuning

Research on low-rank adaptation (LoRA) is converging on more efficient and effective fine-tuning methods for large language models and vision transformers. Recent work tackles challenges such as overparameterization, initialization strategies, and privacy preservation, aiming to reduce the number of trainable parameters, speed up convergence, and preserve accuracy. Notably, differential privacy, domain adaptation, and approximately orthogonal fine-tuning are being explored to improve the performance and generalization of fine-tuned models.
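As background for why these methods cut the trainable-parameter count: LoRA freezes the pre-trained weight matrix and learns only a rank-r update BA, so r(d_in + d_out) parameters train instead of d_in x d_out. The minimal PyTorch sketch below illustrates this; the class name and hyperparameters are illustrative choices, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = base(x) + scale * x A^T B^T."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained weights
            p.requires_grad = False
        # A starts small and random, B starts at zero, so the update is zero at initialization.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Example: wrapping a 768-dim projection trains only 2 * 8 * 768 parameters
# instead of 768 * 768.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
```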

Some noteworthy papers in this area include RiemannLoRA, which proposes a unified Riemannian framework for ambiguity-free LoRA optimization; AirLLM, which develops a hierarchical diffusion-policy framework for communication-aware LoRA adaptation; and FedASK, a federated LoRA framework that effectively updates both low-rank adapter matrices under robust differential privacy. The Approximately Orthogonal Fine-Tuning strategy is also promising: it aligns the properties of the fine-tuned matrices with those of the pre-trained backbone, improving generalization (a generic sketch of the underlying idea follows).
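To make the orthogonal fine-tuning idea concrete, one common way to keep adapted weights close to orthogonal is a soft Frobenius-norm penalty on the Gram matrix. This is a generic regularizer offered as an assumption for illustration, not the exact formulation used in the Approximately Orthogonal Fine-Tuning paper.

```python
import torch

def soft_orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    """Penalize deviation from orthogonality: ||W^T W - I||_F^2.

    A generic soft-orthogonality regularizer; the AOFT paper's
    actual loss may differ.
    """
    gram = W.T @ W                                      # (in, in) Gram matrix
    eye = torch.eye(gram.shape[0], device=W.device, dtype=W.dtype)
    return torch.linalg.norm(gram - eye, ord="fro") ** 2

# Typically added to the task loss with a small weight, e.g.:
#   loss = task_loss + 1e-4 * soft_orthogonality_penalty(W)
```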

Sources

ConsNoTrainLoRA: Data-driven Weight Initialization of Low-rank Adapters using Constraints

LoRA Is Slower Than You Think

Differentially Private Federated Low Rank Adaptation Beyond Fixed-Matrix

AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air

Effective Fine-Tuning of Vision Transformers with Low-Rank Adaptation for Privacy-Preserving Image Classification

RiemannLoRA: A Unified Riemannian Framework for Ambiguity-Free LoRA Optimization

A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique

Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
