The field of parameter-efficient fine-tuning is advancing rapidly, with a focus on improving the adaptability and efficiency of large pre-trained models. Recent developments center on Low-Rank Adaptation (LoRA) and its variants, which reduce the computational and memory overhead of fine-tuning by training small low-rank update matrices while keeping the pre-trained weights frozen. These methods have shown promising results across applications in natural language processing and computer vision. Notably, innovations such as functional LoRA, HyperAdaLoRA, and HoRA report improved performance and efficiency, while techniques like POEM and TiTok explore new approaches to test-time adaptation and knowledge transfer. Overall, the field is moving toward more efficient, effective, and robust fine-tuning methods. Noteworthy papers include FunLoRA, which proposes a novel conditioning mechanism for generative models, and HoRA, which introduces a cross-head low-rank adaptation method.
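For context on the variants above, the sketch below illustrates the base LoRA mechanism they build on: the pre-trained weight is frozen and only a low-rank update BA (rank r, scaled by alpha/r) is trained. This is a minimal illustration of standard LoRA, not the method of any cited paper; the class and parameter names are assumptions chosen for clarity.

```python
# Minimal LoRA sketch: frozen base weight plus a trainable low-rank residual.
# Names (LoRALinear, r, alpha) are illustrative, not taken from the cited papers.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Computes W x + (alpha / r) * B A x, training only A and B."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Down-projection A (small random init) and up-projection B (zero init),
        # so the adapted layer starts out identical to the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank update; only lora_A and lora_B get gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())


# Usage: adapt a single 768x768 projection layer.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(2, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([2, 768]), 12288 trainable parameters vs. ~590k frozen
```

The parameter savings come from the rank bottleneck: the update costs 2 * r * d parameters instead of d * d, which is the overhead reduction the LoRA family of methods exploits.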