The field of computer vision is moving towards more efficient and accurate segmentation models, with a focus on reducing the need for large amounts of labeled data. Recent work introduces semi-supervised and weakly-supervised approaches that leverage large corpora of unlabeled images and coarse-grained annotations to improve model performance. There is also growing interest in dynamic and adaptive models that adjust to different image complexities and granularities. These advances have the potential to substantially improve the accuracy and scalability of segmentation models, particularly in domains where high-quality annotations are scarce. Notable papers in this area include:

- CORA: introduces a semi-supervised reasoning segmentation framework that achieves state-of-the-art results with minimal supervision.
- Grc-ViT: proposes a dynamic coarse-to-fine framework that adapts visual granularity to image complexity, enhancing fine-grained discrimination while striking a better balance between accuracy and computational cost (a generic sketch of this adaptive-granularity idea follows the list).
- Grc-SAM: presents a coarse-to-fine framework that integrates multi-granularity attention, enabling prompt-free segmentation with high accuracy and scalability.
- BoxPromptIML: balances annotation cost and localization performance using a coarse region annotation strategy and knowledge distillation.
- ReSAM: adapts SAM to remote sensing imagery with a self-prompting, point-supervised framework that progressively improves segmentation quality and domain robustness.
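
To make the coarse-to-fine, complexity-adaptive idea concrete, here is a minimal, hypothetical sketch: an edge-density heuristic stands in for an image-complexity estimate and selects a patch (token) granularity accordingly. The function names, thresholds, and the heuristic itself are illustrative assumptions, not the actual mechanisms of Grc-ViT or Grc-SAM.

```python
import numpy as np

def edge_density(image: np.ndarray) -> float:
    """Rough complexity proxy: mean gradient magnitude of a grayscale image in [0, 1]."""
    gy, gx = np.gradient(image.astype(np.float32))
    return float(np.mean(np.hypot(gx, gy)))

def choose_patch_size(image: np.ndarray,
                      coarse: int = 32, medium: int = 16, fine: int = 8) -> int:
    """Pick a token/patch granularity: low-complexity images get coarse patches,
    high-complexity images get fine patches. Thresholds are illustrative only."""
    density = edge_density(image)
    if density < 0.02:      # mostly flat regions -> coarse tokens suffice
        return coarse
    elif density < 0.08:    # moderate texture -> intermediate granularity
        return medium
    return fine             # dense edges / fine structures -> fine tokens

if __name__ == "__main__":
    flat = np.zeros((256, 256))           # trivially simple image
    textured = np.random.rand(256, 256)   # high-frequency content
    print(choose_patch_size(flat))        # -> 32 (coarse)
    print(choose_patch_size(textured))    # -> 8  (fine)
```

In an actual coarse-to-fine model the granularity decision would typically be learned and applied per region rather than per image, but the sketch captures the trade-off the trend is after: spend fine-grained computation only where the image demands it.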