Fairness and Robustness in Medical Imaging Diagnostics

The field of medical imaging diagnostics is moving toward fairer and more robust models. Researchers are exploring ways to mitigate demographic bias in deep learning models used for diagnosis, particularly in tasks such as Alzheimer's disease classification from MRI scans. Another focus is improving robustness to variations in input image characteristics, such as artifacts or modality differences; one route is novel training paradigms such as surrogate supervision, which decouples the input domain from the supervision domain. There is also growing interest in self-supervised learning approaches that learn robust spatial features and generalize across datasets and tasks. Notable papers in this area include Structure Matters, which proposes a framework for learning brain graph representations with structural semantic preservation, and SSL-AD, which adapts temporal self-supervised learning to 3D brain MRI analysis and demonstrates adaptability and generalizability across tasks and datasets.
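To make the self-supervised angle concrete, the sketch below shows a toy temporal pretext task for 3D brain MRI in PyTorch: an encoder is trained to predict which of two longitudinal scans from the same subject was acquired first, so supervision comes from acquisition order rather than diagnostic labels. The encoder architecture, class names, and tensor shapes are illustrative assumptions for exposition, not the SSL-AD implementation.

```python
# Minimal sketch of a temporal self-supervised pretext task for 3D brain MRI.
# All names, shapes, and the encoder architecture are illustrative assumptions,
# not the method of any specific paper cited above.
import torch
import torch.nn as nn


class Small3DEncoder(nn.Module):
    """Toy 3D CNN that maps an MRI volume to a feature vector."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class TemporalOrderHead(nn.Module):
    """Predicts whether scan A was acquired before scan B (binary pretext task)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.classifier = nn.Linear(2 * feat_dim, 2)

    def forward(self, feat_a, feat_b):
        return self.classifier(torch.cat([feat_a, feat_b], dim=1))


def pretext_step(encoder, head, scan_a, scan_b, order_labels, optimizer):
    """One self-supervised step: the 'labels' come from acquisition order,
    so no diagnostic annotations are required."""
    logits = head(encoder(scan_a), encoder(scan_b))
    loss = nn.functional.cross_entropy(logits, order_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    encoder, head = Small3DEncoder(), TemporalOrderHead()
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-4
    )
    # Dummy batch: two longitudinal scans per subject and their temporal order.
    scan_a = torch.randn(2, 1, 32, 32, 32)
    scan_b = torch.randn(2, 1, 32, 32, 32)
    order = torch.tensor([0, 1])  # 0: A acquired before B, 1: B before A
    print(pretext_step(encoder, head, scan_a, scan_b, order, optimizer))
```

After pretraining on such a pretext task, the encoder can be fine-tuned or probed on downstream diagnostic labels, which is the general route by which self-supervised spatial features are claimed to transfer across datasets and tasks.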
Sources
Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification
Structure Matters: Brain Graph Augmentation via Learnable Edge Masking for Data-efficient Psychiatric Diagnosis