The field of medical image segmentation and reconstruction is advancing rapidly, driven by the integration of deep learning techniques and novel architectures. Researchers are exploring methods to improve the accuracy and efficiency of segmentation, such as collaborative learning frameworks and adapter-based approaches, in which small trainable modules are added to a frozen foundation-model backbone (sketched below). These advances have the potential to improve foundation-model performance on segmentation tasks such as camouflaged object detection, shadow detection, and medical image segmentation. In parallel, vision-language foundation models are being investigated as a source of high-level contextual information for undersampled MRI reconstruction.

Noteworthy papers in this area include SCALER, which jointly optimizes a segmenter and a learnable SAM for label-deficient concealed object segmentation, and TPG-INR, which employs a target prior to enhance implicit 3D CT reconstruction. MedSAM3 and DEAP-3DSAM contribute text-promptable and decoder-enhanced models, respectively, for medical image segmentation. SAM3-Adapter reports state-of-the-art results on multiple downstream tasks, CrispFormer improves weakly supervised semantic segmentation, and PromptCT achieves higher-quality reconstructions with lower storage costs in multiple-in-one sparse-view CT reconstruction.
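
To make the adapter idea referenced above concrete, the sketch below wraps a frozen transformer block with a small trainable bottleneck module so that only the adapter parameters are updated during fine-tuning. This is a generic, illustrative PyTorch example under assumed dimensions and module names, not the architecture of SAM3-Adapter or any specific paper discussed here.

```python
# Minimal sketch of an adapter-based approach: small trainable bottleneck
# modules are attached to a frozen pretrained backbone, so only the adapters
# (and a task head) are trained. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen transformer block and applies a trainable adapter to its output."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():  # freeze the pretrained block
            p.requires_grad = False
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


if __name__ == "__main__":
    dim, tokens = 256, 196
    # Stand-in for one block of a pretrained foundation-model encoder.
    frozen_block = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
    layer = AdaptedBlock(frozen_block, dim)

    x = torch.randn(2, tokens, dim)  # (batch, tokens, channels)
    out = layer(x)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable params: {trainable}/{total}")
```

In a full pipeline, such adapted blocks would be stacked to form the encoder, with only the adapters and the segmentation decoder receiving gradient updates; the heavy pretrained weights stay fixed, which keeps fine-tuning cheap for each new downstream task.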