Medical image analysis is advancing rapidly through the integration of semi-supervised learning (SSL). SSL has emerged as a powerful paradigm for learning effective representations from limited labeled data, a common constraint in medical imaging, where expert annotation is costly. Recent work focuses on improving the scalability and generalizability of SSL methods, with particular emphasis on reducing the need for extensive manual annotation. Notably, new approaches address shortcut learning and improve representation transfer across tasks and domains. These advances could significantly impact clinical applications such as diagnosis and treatment planning by enabling more accurate and efficient image analysis.

Noteworthy papers in this area include:

- Boosting Active Learning with Knowledge Transfer, which proposes a method that uses knowledge transfer to improve uncertainty estimation in active learning.
- DiSSECT: Structuring Transfer-Ready Medical Image Representations through Discrete Self-Supervision, which introduces a framework that integrates multi-scale vector quantization into the SSL pipeline to impose a discrete representational bottleneck.
- nnFilterMatch: A Unified Semi-Supervised Learning Framework with Uncertainty-Aware Pseudo-Label Filtering for Efficient Medical Segmentation, which presents an annotation-efficient, self-adaptive segmentation framework that combines SSL with entropy-based pseudo-label filtering.
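To make the discrete-bottleneck idea concrete, the core operation of vector quantization is mapping each continuous feature vector to its nearest entry in a learned codebook. The sketch below shows only that nearest-neighbor assignment with a fixed codebook; DiSSECT's multi-scale design, codebook learning, and training losses are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def vector_quantize(features, codebook):
    """Assign each feature vector to its nearest codebook entry.

    features: (N, D) continuous representations.
    codebook: (K, D) discrete code vectors.
    Returns the integer codes and the quantized vectors.
    Hypothetical sketch of a VQ bottleneck, not DiSSECT's implementation.
    """
    # Squared Euclidean distance from every feature to every code: (N, K)
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)          # discrete index per sample
    return codes, codebook[codes]         # quantized (discretized) features
```

Replacing continuous features with their nearest codes is what imposes the representational bottleneck: downstream layers see only K distinct vectors, which discourages encoding fine-grained shortcuts.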
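Entropy-based pseudo-label filtering, as named in the nnFilterMatch summary, can be sketched as follows: compute the predictive entropy of each unlabeled sample and keep only low-entropy (confident) predictions as pseudo-labels. The threshold, the normalization by log C, and the function name below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def filter_pseudo_labels(probs, entropy_threshold=0.5):
    """Keep pseudo-labels whose normalized predictive entropy is low.

    probs: (N, C) per-sample class probabilities from the model.
    Returns indices of confident samples and their hard pseudo-labels.
    Hypothetical sketch of entropy-based filtering, not nnFilterMatch itself.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Divide by the maximum possible entropy log(C) so the
    # threshold lives on a fixed [0, 1] scale regardless of C.
    entropy /= np.log(probs.shape[1])
    keep = entropy < entropy_threshold
    return np.nonzero(keep)[0], probs[keep].argmax(axis=1)
```

In a semi-supervised loop, only the returned samples would contribute to the unsupervised loss, which is what makes such filtering annotation-efficient: uncertain predictions are simply excluded rather than propagated as noisy labels.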