The field of federated learning is moving toward more personalized and efficient AI models. Researchers are exploring new methods to address data and model heterogeneity in real-world deployments. One key direction is personalized federated learning, in which each client leverages knowledge from other clients while still adapting to its own user's preferences. Another active area is the design of lightweight, query-efficient federated learning frameworks that can handle large-scale multimodal models while reducing communication costs. Together, these innovations could enable scalable, decentralized, and user-centric AI systems. Notable papers in this area include:
- A study that proposes a task-similarity-aware model aggregation method and a dimension-invariant module to address data and model heterogeneity.
- A framework that centralizes the large language model on the server and introduces a lightweight module for client-specific adaptation, reducing client-side storage and communication overhead.
- An event-driven online vertical federated learning framework that addresses the challenges of online learning in non-stationary environments.
- A query-efficient federated learning method for black-box discrete prompt learning that minimizes the number of queries issued to cloud-based language models.
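To make the first idea concrete, here is a minimal sketch of similarity-weighted aggregation. This is not the cited paper's algorithm; it is a hypothetical illustration in which each client's update is weighted by its cosine similarity to the mean update, a crude proxy for task similarity (the function names and the clipping of negative similarities are assumptions for this sketch).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_weighted_aggregate(global_params, client_updates):
    """Aggregate client updates, weighting each by its cosine similarity
    to the mean update (a rough stand-in for task similarity).
    Negative similarities are clipped to zero so dissimilar (possibly
    conflicting) updates are excluded rather than subtracted."""
    mean_update = np.mean(client_updates, axis=0)
    sims = np.array([max(cosine_similarity(u, mean_update), 0.0)
                     for u in client_updates])
    weights = sims / (sims.sum() + 1e-12)
    aggregated = sum(w * u for w, u in zip(weights, client_updates))
    return global_params + aggregated
```

In this sketch, two clients pushing in the same direction dominate the aggregate, while a client whose update opposes the consensus receives zero weight; a real task-similarity method would compare richer signals than raw update vectors.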
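Query efficiency in black-box settings often comes down to not wasting budget on redundant calls. As a rough illustration (again, not the cited paper's method), the sketch below wraps a black-box scoring function with a hard query budget and a cache, so re-evaluating a previously seen prompt candidate costs nothing; the class name and budget semantics are assumptions for this example.

```python
class BudgetedQueryClient:
    """Wrap a black-box scoring function with a query budget and a cache,
    so repeated prompt candidates never consume extra queries."""

    def __init__(self, score_fn, budget):
        self.score_fn = score_fn   # black-box model call, e.g. a cloud API
        self.budget = budget       # maximum number of real queries allowed
        self.queries = 0           # real queries issued so far
        self.cache = {}            # prompt -> cached score

    def score(self, prompt):
        # Cached prompts are free: return the stored score.
        if prompt in self.cache:
            return self.cache[prompt]
        if self.queries >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.queries += 1
        result = self.score_fn(prompt)
        self.cache[prompt] = result
        return result
```

A discrete prompt search loop would then propose candidate prompts and call `score` on each, letting the cache absorb the duplicates that local search methods frequently revisit.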