The field of domain generalization and adaptation is advancing rapidly, with growing attention to source domain-specific characteristics and to new methods for improving model generalization. Researchers are tackling domain shift, heterogeneity, and attribute bias, yielding better performance and robustness across applications. Notably, frameworks that combine multiple techniques, such as federated learning, domain adaptation, and contrastive learning, are becoming increasingly common, and they promise more effective and efficient learning in complex, dynamic environments.

Noteworthy papers include:

- FedSDAF: a federated domain generalization framework that exploits source domain-aware features.
- Componential Prompt-Knowledge Alignment: a prompt-based method for domain incremental learning that aligns component-wise knowledge to improve learning and inference capacity.
- VaCDA: a variational contrastive alignment-based approach to scalable human activity recognition.
- FedDDL: a federated deconfounding and debiasing method that addresses attribute bias in federated learning.
- PAD: a problem-aware diffusion model for cross-problem generalization in network optimization.
- CUDA: a concept-based unsupervised domain adaptation framework that improves the robustness of concept bottleneck models.
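Several of the papers above (FedSDAF, FedDDL) build on federated learning, whose core aggregation step is federated averaging: clients train locally and the server averages their models, weighted by local dataset size. The sketch below is plain FedAvg, not either paper's specific algorithm; the function name and array shapes are illustrative.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters into a
    global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    # Weighted sum of parameter arrays (all clients share one shape)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with scalar "models" of 1.0 and 3.0
global_model = fed_avg([np.array([1.0]), np.array([3.0])], [3, 1])
```

With equal sizes this reduces to a plain mean; with unequal sizes, larger clients pull the global model toward their local solution.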
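Contrastive learning, cited above as an ingredient of these frameworks and central to VaCDA's alignment objective, is typically trained with an InfoNCE-style loss that pulls matched embedding pairs together while pushing mismatched pairs apart. A minimal NumPy sketch of such a loss, with illustrative names and a hypothetical temperature (this is generic InfoNCE, not VaCDA's variational objective):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor should match its
    own positive and repel every other positive in the batch."""
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing is the diagonal: anchor i <-> positive i
    return -np.mean(np.diag(log_probs))
```

When anchors and positives align pairwise, the loss approaches zero; shuffling the pairing drives it up, which is what makes the loss useful for aligning representations across domains.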
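Concept bottleneck models, the architecture whose robustness CUDA targets, predict a set of human-interpretable concepts first and then derive the label from those concepts alone, so the concept layer is the only path from input to prediction. A minimal forward-pass sketch with hypothetical dimensions (not CUDA's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dimensions: 16 input features, 4 concepts, 3 classes
W_concept = rng.normal(size=(16, 4))   # input -> concept logits
W_label = rng.normal(size=(4, 3))      # concepts -> class logits

def concept_bottleneck_forward(x):
    concepts = sigmoid(x @ W_concept)  # interpretable bottleneck in (0, 1)
    logits = concepts @ W_label        # label predicted only from concepts
    return concepts, logits

x = rng.normal(size=(2, 16))
concepts, logits = concept_bottleneck_forward(x)
```

Because the label head sees only the concept activations, a domain shift that corrupts concept predictions corrupts the final prediction too, which is the failure mode domain adaptation for these models tries to fix.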