The field of large language models continues to move toward more efficient and effective fine-tuning. Recent work focuses on improving the expressiveness and capacity of low-rank adaptation methods such as LoRA while preserving their parameter efficiency, notably through non-linear transformations, structured sparsity regularization, and geometry-aware extensions. These innovations have yielded consistent gains across commonsense reasoning, math and code generation, and image classification; a minimal sketch of the LoRA baseline these methods build on appears after the paper list below. Noteworthy papers include:
- Blockwise Hadamard high-Rank Adaptation, which proposes a blockwise Hadamard design for low-rank adaptation that unlocks localized rank amplification while preserving the parameter footprint.
- PrunedLoRA, a framework that leverages structured pruning to obtain highly representative low-rank adapters from an over-parameterized initialization, demonstrating advantages over existing structured pruning methods.
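For context, the sketch below shows the standard LoRA update that these papers extend: a frozen weight matrix plus a trainable low-rank correction scaled by alpha/r. This is a minimal illustration assuming PyTorch; the class name `LoRALinear` and the hyperparameters are illustrative and not taken from any of the papers above.

```python
# Minimal sketch of a standard LoRA adapter (the baseline the papers above extend).
# Assumes PyTorch; names and hyperparameters are illustrative, not from any specific paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter factors are trained

        self.scaling = alpha / r
        # A is initialized with small random values, B at zero, so the update starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
    out = layer(torch.randn(4, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # torch.Size([4, 768]) 12288
```

The adapter contributes only r * (d_in + d_out) trainable parameters per layer; this fixed parameter footprint is what the blockwise and pruning-based variants above aim to use more expressively.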