The field of medical image analysis is evolving rapidly, with a focus on new methods for image segmentation, classification, and analysis. Recent work explores coarse-to-fine learning, symmetry-driven spatial-frequency feature fusion, and contrastive cross-bag augmentation to improve the accuracy and robustness of medical image segmentation models. There is also growing interest in graph convolutional networks, deformable attention mechanisms, and multi-instance learning for capturing complex spatial structures and relationships in medical images. These advances could substantially improve the diagnosis and treatment of diseases such as cancer and age-related macular degeneration.

Noteworthy papers include RefineSeg, a coarse-to-fine segmentation framework that relies entirely on coarse-level annotations; SSFMamba, which employs a complementary dual-branch architecture to extract features from both the spatial and frequency domains; and AHDMIL, which enables fast and accurate whole-slide image classification through asymmetric hierarchical distillation multi-instance learning.
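To make the multi-instance learning (MIL) idea concrete: a whole-slide image is treated as a "bag" of patch embeddings, and the model must aggregate instance-level features into one bag-level representation for classification. Below is a minimal NumPy sketch of attention-based MIL pooling, a common aggregator in this literature; it is an illustrative example only, not the specific architecture of AHDMIL, and all names and dimensions here are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, v, w):
    """Attention-based MIL pooling (illustrative sketch).

    instances: (n, d) patch embeddings forming one bag
    v:         (d, k) hidden projection for the attention scorer
    w:         (k,)   scoring vector
    Returns the (d,) bag embedding and the (n,) attention weights.
    """
    scores = np.tanh(instances @ v) @ w   # one raw score per patch
    attn = softmax(scores)                # weights sum to 1 over the bag
    bag_embedding = attn @ instances      # weighted average of patches
    return bag_embedding, attn

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))   # 5 patch embeddings of dimension 8
v = rng.normal(size=(8, 4))
w = rng.normal(size=4)
bag_emb, attn = attention_mil_pool(bag, v, w)
```

A downstream classifier then operates on `bag_emb` alone, so the model needs only slide-level labels rather than patch-level annotations, which is the practical appeal of MIL for whole-slide images.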