Advances in Parameter-Efficient Fine-Tuning

The field of parameter-efficient fine-tuning is advancing rapidly, with a focus on improving the adaptability and efficiency of large pre-trained models. Recent work has centered on Low-Rank Adaptation (LoRA) and its variants, which reduce the computational and memory overhead of fine-tuning by training only small low-rank update matrices. These methods have shown promising results across applications in natural language processing and computer vision. Notable innovations include functional LoRA (FunLoRA), HyperAdaLoRA, and HoRA, which improve performance and efficiency through new conditioning, rank-allocation, and cross-head sharing mechanisms. In complementary directions, POEM explores unexplored reliable samples to enhance test-time adaptation, while TiTok transfers token-level knowledge to transplant LoRA modules. Overall, the field is moving toward more efficient, effective, and robust fine-tuning methods. Noteworthy papers include FunLoRA, which proposes a novel conditioning mechanism for generative models, and HoRA, which introduces a cross-head low-rank adaptation method.
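To make the core idea concrete, here is a minimal sketch of a LoRA-style linear layer. This is illustrative only: the class name, initialization scales, and hyperparameters are assumptions for the sketch, not the exact implementation from any of the papers below. The key point is that the pretrained weight `W` stays frozen while only the low-rank factors `A` and `B` are trained.

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer (hypothetical, not from any cited paper)."""

    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.standard_normal((d_out, d_in)) * 0.02
        # Trainable low-rank factors: A is randomly initialized,
        # B starts at zero so the adapter initially contributes nothing.
        self.A = rng.standard_normal((rank, d_in)) * 0.01
        self.B = np.zeros((d_out, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B (A x); only A and B receive gradients.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        # Only the factors count toward the fine-tuning budget.
        return self.A.size + self.B.size
```

For a 1024x1024 layer with rank 4, the adapter trains 2 * 4 * 1024 = 8,192 parameters instead of the roughly one million in the full weight matrix, which is the source of LoRA's memory savings. Variants like HyperAdaLoRA and HoRA build on this template by choosing ranks dynamically or sharing factors across attention heads.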

Sources

Deep Generative Continual Learning using Functional LoRA: FunLoRA

HyperAdaLoRA: Accelerating LoRA Rank Allocation During Training via Hypernetworks without Sacrificing Performance

POEM: Explore Unexplored Reliable Samples to Enhance Test-Time Adaptation

Rethinking Inter-LoRA Orthogonality in Adapter Merging: Insights from Orthogonal Monte Carlo Dropout

Optimizing Fine-Tuning through Advanced Initialization Strategies for Low-Rank Adaptation

HoRA: Cross-Head Low-Rank Adaptation with Joint Hypernetworks

Domain Generalization: A Tale of Two ERMs

DoRAN: Stabilizing Weight-Decomposed Low-Rank Adaptation via Noise Injection and Auxiliary Networks

TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA

High-Rate Mixout: Revisiting Mixout for Robust Domain Generalization

Revisiting Mixout: An Overlooked Path to Robust Finetuning
