Research in federated learning and personalized models is advancing along three fronts: model robustness, accuracy, and efficiency. Deploying models on edge devices raises two persistent challenges: vulnerability to adversarial attacks and high communication costs. One response is collaborative fine-tuning frameworks that deliver a customized defense model to each client, trading off robustness against clean accuracy. Another is low-rank adaptation, which trains and transmits only small low-rank parameter updates, cutting computational and communication overhead while largely preserving model quality (see the sketch below). A third thread concerns user privacy in recommender systems and personalized search, where federated recommendation and regularized low-rank parameter updates have shown promise.

Noteworthy papers include Sylva, which proposes a personalized collaborative adversarial training framework, and Ravan, which introduces an adaptive multi-head low-rank adaptation method for federated fine-tuning of large language models. FedShield-LLM targets secure and scalable federated fine-tuning of large language models, while RETENTION accelerates tree-based models and LEANN reduces storage overhead for embedding-based search. Beyond Personalization and Improving Personalized Search with Regularized Low-Rank Parameter Updates contribute to federated recommendation and personalized vision-language retrieval, respectively.
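To make the low-rank adaptation direction concrete, the sketch below shows the generic LoRA pattern that federated fine-tuning methods such as Ravan build on: freeze the pretrained weights and train only a low-rank correction, so each client uploads just the small factors every round. The class and function names, the rank and alpha values, and the plain FedAvg-style aggregation are illustrative assumptions, not the specific design of any paper mentioned above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style adapter (illustrative, not a specific paper's method).

    The frozen base weight W is augmented with a trainable low-rank update
    B @ A, so only r * (d_in + d_out) parameters per layer are trained and
    communicated instead of d_in * d_out.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_out, d_in = base.weight.shape
        # A starts small and random, B starts at zero, so the adapter is
        # initially a no-op and the model begins at the pretrained solution.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

def lora_state(model: nn.Module) -> dict:
    """Extract only the low-rank factors -- this is all a client uploads."""
    return {k: v for k, v in model.state_dict().items()
            if k.endswith(("A", "B"))}

def average_adapters(client_states: list[dict]) -> dict:
    """Plain FedAvg over the factors (a simplification; actual methods may
    aggregate A and B differently or weight clients by data size)."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(0)
            for k in keys}
```

The communication saving is the point: for a 4096x4096 linear layer at rank 8, each round uploads 8 * (4096 + 4096) = 65,536 values instead of the roughly 16.8 million of the full weight matrix.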