Advances in Parameter-Efficient Fine-Tuning

Research on large language models continues to move toward fine-tuning methods that are both efficient and effective. Recent work concentrates on increasing the expressiveness and effective capacity of low-rank adaptation (LoRA) while keeping the trainable-parameter budget small, for example through nonlinear transformations, structured sparsity regularization, and geometry-aware extensions such as subspace learning on the Stiefel manifold. These directions report improvements across commonsense reasoning, math and code generation, and image classification. A minimal LoRA reference sketch follows the highlights below. Noteworthy papers include:

  • Blockwise Hadamard high-Rank Adaptation, which proposes a blockwise design for low-rank adaptation, unlocking localized rank amplification while preserving the parameter footprint.
  • PrunedLoRA, a framework that uses structured pruning to obtain a highly representative low-rank adapter from an over-parameterized initialization, with reported advantages over existing structured pruning methods (a toy pruning sketch follows this list).
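
For orientation, here is a minimal sketch of the standard LoRA update that these methods build on: a frozen pretrained weight plus a trainable low-rank correction B·A scaled by alpha/r. Layer sizes, rank, and initialization below are illustrative defaults, not values taken from any of the papers above.

```python
# Minimal LoRA-style linear layer (illustrative sketch, PyTorch).
# Only the low-rank factors A and B are trained; the base weight stays frozen.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                    # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


layer = LoRALinear(768, 768, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12288, versus 589824 entries in the frozen weight
```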

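The PrunedLoRA entry above can be pictured as pruning whole rank-1 components out of an over-parameterized adapter until a target rank is reached. The toy function below illustrates only that general shape of the approach: it scores components by a simple magnitude proxy, whereas the paper itself uses a gradient-based criterion, so the scoring rule here is an assumption for illustration.

```python
# Toy structured pruning of an over-parameterized LoRA adapter (illustrative only).
# Scores each rank-1 component by a magnitude proxy and keeps the top-k; the
# actual PrunedLoRA method relies on gradient-based importance instead.
import torch


def prune_lora_rank(A: torch.Tensor, B: torch.Tensor, target_rank: int):
    """A: (R, in_features), B: (out_features, R) -> factors of rank target_rank."""
    scores = B.norm(dim=0) * A.norm(dim=1)            # importance proxy per rank-1 component
    keep = torch.topk(scores, k=target_rank).indices  # indices of the components to retain
    return A[keep, :], B[:, keep]


# Over-parameterized rank-64 adapter pruned down to rank 8.
A = torch.randn(64, 768)
B = torch.randn(768, 64)
A_pruned, B_pruned = prune_lora_rank(A, B, target_rank=8)
print(A_pruned.shape, B_pruned.shape)  # torch.Size([8, 768]) torch.Size([768, 8])
```
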
Sources

Blockwise Hadamard high-Rank Adaptation for Parameter-Efficient LLM Fine-Tuning

Enhancing Low-Rank Adaptation with Structured Nonlinear Transformations

Understanding Textual Capability Degradation in Speech LLMs via Parameter Importance Analysis

Differentiable Sparsity via $D$-Gating: Simple and Versatile Structured Penalization

Adapting SAM with Dynamic Similarity Graphs for Few-Shot Parameter-Efficient Small Dense Object Detection: A Case Study of Chickpea Pods in Field Conditions

PrunedLoRA: Robust Gradient-Based Structured Pruning for Low-Rank Adaptation in Fine-Tuning

StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold
