The field of machine learning is moving toward more efficient and effective methods for fine-tuning large models. Recent developments have focused on Low-Rank Adaptation (LoRA), which injects small trainable low-rank matrices into a frozen pre-trained model, enabling the personalization of visual concepts and the adaptation of pre-trained models to new tasks at a fraction of the cost of full fine-tuning. These methods have shown significant improvements in both performance and computational efficiency. There is also growing interest in AutoML approaches that go beyond traditional hyperparameter optimization, incorporating techniques such as fine-tuning, ensembling, and adaptation. Noteworthy papers in this area include LoRAtorio, a train-free framework for multi-LoRA composition that leverages intrinsic model behaviour to achieve state-of-the-art performance, and LangVision-LoRA-NAS, which integrates Neural Architecture Search with LoRA to optimize Vision-Language Models for variable-rank adaptation, demonstrating notable improvements in model performance while reducing fine-tuning costs.