Medical image analysis is advancing rapidly, driven by new approaches that improve the fairness, accuracy, and robustness of image segmentation and analysis. Researchers are exploring new architectures and techniques to address challenges such as bias in skin lesion classification, the limited receptive fields of convolutional neural networks, and the need for more efficient and effective models. Notably, hybrid models that combine convolutional neural networks with transformers show promise in capturing both global and local features, refining outputs, and filtering redundant information. These developments have the potential to enhance computer-aided diagnosis and promote more equitable healthcare. Noteworthy papers include:
- MSA2-Net, which uses a self-adaptive convolution module to extract multi-scale information, achieving strong performance across a variety of datasets.
- MedLiteNet, a lightweight hybrid model that achieves high precision through hierarchical feature extraction and multi-scale context aggregation.
- LGBP-OrgaNet, which introduces a learnable Gaussian band-pass fusion of CNN and transformer features for robust organoid segmentation and tracking.
- Heatmap Guided Query Transformers, a hybrid CNN-transformer detector that combines local feature extraction with global contextual reasoning for robust astrocyte detection.
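The hybrid CNN-transformer idea running through these papers can be illustrated with a minimal sketch: a convolutional branch captures local detail while a self-attention branch captures global context, and the two are fused. This is a generic pattern in PyTorch, not the architecture of any specific paper above; all module names and hyperparameters are illustrative assumptions.

```python
# Illustrative hybrid CNN + transformer block: a local convolutional branch
# and a global self-attention branch, fused by a 1x1 convolution.
# Not taken from any cited paper; a generic sketch of the pattern.
import torch
import torch.nn as nn


class HybridBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise 3x3 conv captures fine-grained detail cheaply.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # Global branch: multi-head self-attention over the flattened feature map.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Fusion: 1x1 conv mixes the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        # (B, C, H, W) -> (B, H*W, C): one token per spatial position.
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        # Residual connection around the fused local + global features.
        return x + self.fuse(torch.cat([local, glob], dim=1))


x = torch.randn(1, 32, 16, 16)
out = HybridBlock(32)(x)
print(tuple(out.shape))  # (1, 32, 16, 16)
```

The block is shape-preserving, so it can be dropped into a U-Net-style encoder or decoder stage; the papers above differ mainly in how the two branches are weighted and fused.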