Advancements in Low-Rank Adaptation and AutoML

The field of machine learning is moving toward more efficient methods for fine-tuning large models. Recent work has centered on Low-Rank Adaptation (LoRA), which adapts pre-trained models to new tasks, including the personalization of visual concepts, by training small low-rank weight updates instead of all model parameters. These methods achieve strong performance at a fraction of the computational cost of full fine-tuning. In parallel, interest is growing in AutoML approaches that go beyond traditional hyperparameter optimization to incorporate fine-tuning, ensembling, and adaptation directly into the pipeline.

Noteworthy papers in this area include:

LoRAtorio, a train-free framework for multi-LoRA composition that leverages intrinsic model behaviour, achieving state-of-the-art performance.

LangVision-LoRA-NAS, which integrates Neural Architecture Search with LoRA to optimize Vision Language Models for variable-rank adaptation, improving performance while reducing fine-tuning costs.
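The core idea shared by these papers is the standard LoRA update, in which a frozen weight matrix W is augmented with a trainable low-rank product. A minimal NumPy sketch follows; the dimensions, scaling, and the naive summation of two adapters are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def lora_delta(A, B, alpha):
    """Rank-r LoRA update: dW = (alpha / r) * B @ A."""
    r = A.shape[0]
    return (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 4          # illustrative sizes; r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))   # frozen pre-trained weight
A = rng.standard_normal((r, d_in))       # trainable, random init
B = np.zeros((d_out, r))                 # trainable, zero init, so dW starts at 0

W_adapted = W + lora_delta(A, B, alpha=8.0)   # W itself is never modified

# Naive multi-LoRA composition: sum the low-rank deltas of several adapters.
A2 = rng.standard_normal((r, d_in))
B2 = rng.standard_normal((d_out, r))
W_multi = W + lora_delta(A, B, alpha=8.0) + lora_delta(A2, B2, alpha=8.0)
```

Only A and B (2 * r * d parameters per matrix pair) are trained, which is where the efficiency gain over full fine-tuning comes from; the composition line shows the simplest way multiple adapters can be merged, which papers such as LoRAtorio refine.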

Sources

Learn to optimize for automatic proton PBS treatment planning for H&N cancers

LoRAtorio: An intrinsic approach to LoRA Skill Composition

Efficient Modular Learning through Naive LoRA Summation: Leveraging Orthogonality in High-Dimensional Models

LangVision-LoRA-NAS: Neural Architecture Search for Variable LoRA Rank in Vision Language Models

In-Context Decision Making for Optimizing Complex AutoML Pipelines

Amortized Bayesian Meta-Learning for Low-Rank Adaptation of Large Language Models
