The field of federated learning is moving toward more efficient and personalized fine-tuning, particularly in heterogeneous data scenarios. Researchers are exploring novel architectures and optimization techniques to improve both the generalization of global models and the personalized adaptation of local models. Notable directions include low-rank adaptation (LoRA) matrices, dropout-based regularization, and hierarchical federated learning architectures, with significant implications for applications such as natural language processing, healthcare, and smart agricultural production systems.

Noteworthy papers include:
- FedLoRA-Optimizer, a fine-grained federated LoRA tuning method that improves both global and local performance.
- FedLoDrop, a framework that applies dropout to the rows and columns of the trainable matrix in federated LoRA to enhance generalization.
- Hierarchical Federated Learning for Crop Yield Prediction, a hierarchical federated learning architecture for smart agricultural production systems.
- Personalized Federated Fine-Tuning of Vision Foundation Models for Healthcare, a personalized federated fine-tuning method that learns orthogonal LoRA adapters.
- FedHFT, an efficient and personalized federated fine-tuning framework for pre-trained large language models.
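To make the common thread concrete, the sketch below shows the core LoRA idea (a frozen weight plus a trainable low-rank update) and how a server might average clients' LoRA adapters, FedAvg-style. This is a minimal NumPy illustration of the general technique, not any of the listed papers' implementations; all names, dimensions, and the rank-dropout step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2  # input dim, output dim, LoRA rank (illustrative)

# Frozen pretrained weight, shared by all clients; never updated.
W = rng.normal(size=(d, k))

def lora_forward(x, W, A, B):
    # LoRA: y = x @ (W + B @ A), where B @ A is a rank-r trainable update.
    return x @ (W + B @ A)

def init_adapter():
    # Common LoRA init: A small random, B zero, so the update starts at zero.
    A = rng.normal(scale=0.01, size=(r, k))
    B = np.zeros((d, r))
    return A, B

# Three clients, each holding a locally adapted copy of the adapter.
# (Local training is mocked here by small random perturbations.)
clients = []
for _ in range(3):
    A, B = init_adapter()
    clients.append((A + rng.normal(scale=0.01, size=A.shape),
                    B + rng.normal(scale=0.01, size=B.shape)))

# Server step: average only the LoRA parameters (FedAvg over adapters),
# which is far cheaper to communicate than the full weight matrix.
A_global = np.mean([A for A, _ in clients], axis=0)
B_global = np.mean([B for _, B in clients], axis=0)

# Dropout-style regularization on the trainable matrices (a loose sketch
# of the FedLoDrop idea, assumed here as masking rank components: rows of
# A and the matching columns of B).
keep = rng.random(r) > 0.5
A_reg = A_global * keep[:, None]
B_reg = B_global * keep[None, :]

x = rng.normal(size=(1, d))
y = lora_forward(x, W, A_global, B_global)
print(y.shape)  # (1, 8)
```

Because only `A` and `B` (d*r + r*k parameters) travel between clients and server, communication scales with the rank `r` rather than with the full `d*k` weight matrix.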