The field of federated learning and multimodal systems is moving toward more flexible and robust frameworks that can handle arbitrary data alignment, unlabeled data, and multi-party collaboration. Researchers are exploring new approaches to missing modalities, incomplete data, and semantic inconsistencies across views. Notable innovations include unified frameworks for vertical federated learning, multimodal foundation models, and online federated learning under missing modalities. These advances stand to improve the performance and efficiency of machine learning models in real-world applications such as embodied AI, IoT systems, and brain tumor segmentation. Some particularly noteworthy papers include:
- Deep Latent Variable Model based Vertical Federated Learning, which proposes a unified probabilistic framework covering arbitrary sample-alignment and labeling scenarios across parties (a latent-variable sketch follows this list).
- Multimodal Online Federated Learning with Modality Missing, which introduces a framework for dynamic, decentralized multimodal learning in IoT environments where clients may lack some modalities at any given time (see the masked-fusion sketch below).
- SPAR: Self-supervised Placement-Aware Representation Learning, which advances self-supervised model pretraining from IoT signals by explicitly learning the dependencies between sensor measurements and the geometry of the observer layout (see the placement-aware sketch below).
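To make the first idea concrete: a deep latent variable model can absorb arbitrary alignment by letting each party encode only its own feature block into posterior parameters, then combining whichever parties actually observe a given sample. The sketch below is an illustrative assumption, not the paper's actual model; the encoder shapes, the product-of-experts combination rule, and all names are hypothetical.

```python
# Hypothetical sketch: party-wise Gaussian posteriors combined by a
# product of experts, so partially aligned samples simply drop the
# experts of parties that do not observe them.
import torch
import torch.nn as nn

class PartyEncoder(nn.Module):
    def __init__(self, feat_dim, latent_dim=8):
        super().__init__()
        self.net = nn.Linear(feat_dim, 2 * latent_dim)  # -> (mu, logvar)

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

def product_of_experts(mus, logvars, observed):
    # observed: (batch, num_parties) mask; unobserved parties fall out of
    # the product, which is how partial alignment is absorbed.
    precisions = torch.stack([(-lv).exp() for lv in logvars], dim=1)
    mus = torch.stack(mus, dim=1)                   # (batch, P, latent)
    w = observed.unsqueeze(-1) * precisions
    var = 1.0 / (1.0 + w.sum(dim=1))                # "+1" is a N(0, I) prior
    mu = var * (w * mus).sum(dim=1)
    return mu, var

parties = [PartyEncoder(10), PartyEncoder(20)]
xs = [torch.randn(4, 10), torch.randn(4, 20)]
stats = [p(x) for p, x in zip(parties, xs)]
mu, var = product_of_experts(
    [m for m, _ in stats], [lv for _, lv in stats],
    observed=torch.tensor([[1., 1.], [1., 0.], [0., 1.], [1., 1.]]),
)
z = mu + var.sqrt() * torch.randn_like(mu)          # shared latent sample
```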
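For online multimodal learning with missing modalities, one common ingredient is a fusion step that tolerates absent inputs. The following is a minimal sketch under that generic assumption (masked-mean fusion over per-modality encoders), not the paper's actual method; modality names and dimensions are made up for illustration.

```python
# Hypothetical sketch: fuse per-modality embeddings on a client when some
# modalities are absent, by averaging only over the observed ones.
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    def __init__(self, input_dims, embed_dim=64):
        super().__init__()
        # One small encoder per modality (e.g., audio and IMU features).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU()) for d in input_dims]
        )

    def forward(self, inputs, present):
        # inputs:  list of (batch, dim_m) tensors, one per modality
        # present: (batch, num_modalities) float mask, 1 = modality observed
        embeds = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=1
        )                                              # (batch, M, embed_dim)
        mask = present.unsqueeze(-1)                   # (batch, M, 1)
        summed = (embeds * mask).sum(dim=1)            # zero out missing views
        count = mask.sum(dim=1).clamp(min=1.0)         # avoid divide-by-zero
        return summed / count                          # masked-mean fusion

fusion = MaskedFusion(input_dims=[16, 32])
x_audio, x_imu = torch.randn(4, 16), torch.randn(4, 32)
present = torch.tensor([[1., 1.], [1., 0.], [0., 1.], [1., 1.]])
z = fusion([x_audio, x_imu], present)  # (4, 64) fused client representation
```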
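Finally, a placement-aware representation in the spirit of SPAR can be approximated by conditioning each sensor's embedding on its physical coordinates before the sensors exchange information. The architecture below is again a hypothetical sketch, not SPAR's published design; the coordinate projection and attention mixer are assumptions.

```python
# Hypothetical sketch: inject sensor placement into each token so the
# array-level representation depends on the geometric observer layout.
import torch
import torch.nn as nn

class PlacementAwareEncoder(nn.Module):
    def __init__(self, signal_dim, embed_dim=64):
        super().__init__()
        self.signal_proj = nn.Linear(signal_dim, embed_dim)
        self.coord_proj = nn.Linear(3, embed_dim)   # (x, y, z) sensor position
        self.mixer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )

    def forward(self, signals, coords):
        # signals: (batch, num_sensors, signal_dim)
        # coords:  (batch, num_sensors, 3) physical sensor placements
        tokens = self.signal_proj(signals) + self.coord_proj(coords)
        mixed = self.mixer(tokens)                  # sensors attend to each other
        return mixed.mean(dim=1)                    # array-level representation

enc = PlacementAwareEncoder(signal_dim=128)
sig = torch.randn(2, 5, 128)                        # 5 sensors, 128-dim features
pos = torch.rand(2, 5, 3)                           # their 3D placements
rep = enc(sig, pos)                                 # (2, 64)
```

Relocating a sensor changes `coords` and therefore the representation, which is the essential property a placement-aware pretraining objective exploits.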