Advances in Medical Image Segmentation

The field of medical image segmentation is moving toward more effective and efficient strategies for segmenting small objects and multimodal images. Self-supervised models, such as masked autoencoders, have shown promise in capturing global context and improving segmentation performance. Meanwhile, approaches such as the Mamba model and hypergraph dynamic adapters are being explored for their ability to capture long-range dependencies and to fuse complementary information from multiple modalities. Pre-training on large datasets followed by fine-tuning on smaller ones is becoming increasingly important for adapting models to varying clinical tasks and datasets. Two papers stand out in this regard: a Mamba-based feature extraction and adaptive multilevel feature fusion method for 3D tumor segmentation, which achieved performance competitive with state-of-the-art methods, and BM-MAE, a multimodal masked autoencoder pre-training strategy that adapts seamlessly to any combination of available modalities and outperforms baselines requiring separate pre-training for each modality subset.
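The core operation behind masked-autoencoder pre-training is hiding a large fraction of the input and training the model to reconstruct it. The sketch below illustrates just the masking step for a 3D volume: it zeroes out randomly chosen non-overlapping patches and returns a boolean mask of which patches remain visible. The patch size and mask ratio are illustrative defaults, not values taken from any of the cited papers.

```python
import numpy as np

def random_patch_mask(volume, patch=8, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of non-overlapping 3D patches.

    Illustrative sketch of the masking step used in masked-autoencoder
    pre-training; the encoder would see only the unmasked patches, and
    the decoder would be trained to reconstruct the hidden ones.
    """
    rng = np.random.default_rng(rng)
    # Number of whole patches along each axis (assumes divisible shape).
    d, h, w = (s // patch for s in volume.shape)
    n_patches = d * h * w
    n_masked = int(mask_ratio * n_patches)

    # Choose which patches to hide.
    masked = rng.choice(n_patches, size=n_masked, replace=False)
    keep = np.ones(n_patches, dtype=bool)
    keep[masked] = False

    out = volume.copy()
    for idx in masked:
        z, rem = divmod(idx, h * w)
        y, x = divmod(rem, w)
        out[z*patch:(z+1)*patch,
            y*patch:(y+1)*patch,
            x*patch:(x+1)*patch] = 0.0
    return out, keep
```

A multimodal variant such as BM-MAE extends this idea across modalities, so that entire MRI sequences can be dropped during pre-training and the model learns to cope with any available subset at inference time.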

Sources

Masked strategies for images with small objects

Mamba Based Feature Extraction And Adaptive Multilevel Feature Fusion For 3D Tumor Segmentation From Multi-modal Medical Image

Automated segmentation of pediatric neuroblastoma on multi-modal MRI: Results of the SPPIN challenge at MICCAI 2023

Multimodal Masked Autoencoder Pre-training for 3D MRI-Based Brain Tumor Analysis with Missing Modalities

Brain Foundation Models with Hypergraph Dynamic Adapter for Brain Disease Analysis
