The field of large language models is moving toward more efficient adaptation methods that reduce the number of trainable parameters and the computational resources required for fine-tuning. Recent work has introduced parameter-efficient fine-tuning methods, such as those based on low-rank updates and tensor-based adaptations, which have been shown to match or nearly match the performance of full fine-tuning while training significantly fewer parameters. Notable papers in this area include HyperAdapt, which reduces the number of trainable parameters by applying row- and column-wise scaling, and CR-Net, which uses a dual-path architecture to efficiently reconstruct layer activations. Additionally, TensLoRA provides a unified framework for tensor-based low-rank adaptations, and OPLoRA proposes a memory-efficient optimizer that narrows the gap between full training and LoRA fine-tuning. These advances could significantly impact the field by enabling more efficient and effective adaptation of large language models to specialized applications.
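To make the parameter savings concrete, the core low-rank idea behind LoRA-style methods can be sketched as follows: a frozen pretrained weight matrix `W` is augmented with a trainable rank-`r` update `B @ A`, so only the small factors are trained. This is a minimal NumPy sketch of the general technique, not the implementation of any paper named above; the names `lora_forward`, `alpha`, and the zero-initialization of `B` are illustrative conventions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 32, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x, W, A, B, alpha=8.0, r=4):
    """Compute W x plus the scaled low-rank update (alpha / r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapted layer reproduces the frozen model,
# so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), W @ x)

# Trainable parameters: r * (d_in + d_out) for the factors
# versus d_in * d_out for full fine-tuning of this layer.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 384 vs 2048
```

Here the low-rank factors train 384 parameters instead of the 2,048 needed to fine-tune the full matrix, which is the kind of reduction these methods scale up across every adapted layer of a large model.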