The field of computer vision and machine learning is seeing significant progress in cross-domain adaptation and unsupervised learning, with new approaches targeting the twin challenges of domain shift and limited labeled data. One notable direction is uncertainty-guided selective adaptation, which enables reliable transfer of models across different instruments and acquisition settings. Another promising area is self-supervised visual prompting, which has shown robust zero-shot transfer and improved resilience to domain variation. Unsupervised learning frameworks are also being applied to real-world problems such as bridge damage detection, demonstrating the potential for sustainable and efficient solutions.

Noteworthy papers include:

- Uncertainty-Guided Selective Adaptation Enables Cross-Platform Predictive Fluorescence Microscopy: introduces a self-configuring framework for label-free adaptation in microscopy.
- Self-Supervised Visual Prompting for Cross-Domain Road Damage Detection: proposes a framework that visually probes target domains without labels and achieves robust zero-shot transfer.
- Voltage-Based Unsupervised Learning Framework for Bridge Damage Detection: uses piezoelectric energy harvesters for dual-function structural health monitoring, improving damage detection accuracy while reducing energy consumption.
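To make the uncertainty-guided selective adaptation idea concrete, the sketch below shows one common instantiation that is not taken from the paper above: predictive entropy on unlabeled target data gates which samples receive self-training updates. The model, optimizer, threshold `tau`, and the pseudo-labeling step are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch, assuming a classification model adapted to an unlabeled
# target domain. Only low-entropy (confident) predictions drive updates;
# uncertain samples are skipped. All names and the threshold are hypothetical.
import torch
import torch.nn.functional as F


def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax output; higher means more uncertain."""
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)


def selective_adaptation_step(model, optimizer, target_batch, tau=0.5):
    """One adaptation step: self-train only on confident target samples."""
    model.train()
    logits = model(target_batch)              # unlabeled target-domain batch
    probs = F.softmax(logits, dim=1)
    uncertain = predictive_entropy(probs) > tau

    if (~uncertain).any():
        # Use the model's own confident predictions as pseudo-labels
        # (one common form of selective, uncertainty-gated adaptation).
        pseudo_labels = probs.argmax(dim=1)
        loss = F.cross_entropy(logits[~uncertain], pseudo_labels[~uncertain])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item(), (~uncertain).float().mean().item()

    # No confident samples in this batch: skip the update entirely.
    return None, 0.0
```

In practice the gating signal could instead come from Monte Carlo dropout variance or an ensemble, and the same idea extends to choosing which layers or normalization statistics to adapt rather than which samples to train on.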