Advances in Foundation Models for Medical Imaging

Medical imaging is seeing rapid progress from the integration of foundation models. These models demonstrate strong generalizability and transfer learning capabilities, but their performance can be undermined by biases and spurious correlations. Recent studies highlight the need for rigorous fairness evaluations and mitigation strategies to ensure inclusive and generalizable AI. Researchers are exploring techniques such as domain adaptation, fairness-aware training, and data aggregation to alleviate these issues. Notably, frozen foundation-model embeddings have shown promise for accurate and computationally efficient diagnostic classification. Investigations into the statistical properties of network predictions and the effects of normalization placement have also provided valuable insights into model behavior. Noteworthy papers include:

  • Bias and Generalizability of Foundation Models across Datasets in Breast Mammography, which emphasizes the importance of fairness evaluations in foundation models.
  • From Embeddings to Accuracy, which evaluates the utility of foundation model embeddings for radiographic classification.
  • Unintended Bias in 2D+ Image Segmentation and Its Effect on Attention Asymmetry, which proposes strategies to mitigate unintended biases in pretrained models.
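The embedding-based workflow these papers study can be illustrated with a minimal sketch: train a lightweight linear probe on frozen embeddings from one dataset, then measure the accuracy gap on a second dataset to surface dataset-level bias. Everything here is simulated and hypothetical (the embedding dimension, the covariate shift between "sites", and the probe hyperparameters are all illustrative assumptions, not taken from any of the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                       # illustrative embedding dimension
w_star = rng.normal(size=d)  # latent direction separating the two classes

def make_embeddings(n, mean_shift=0.0):
    """Simulate frozen foundation-model embeddings for one site.

    `mean_shift` mimics a site-specific covariate shift (e.g., scanner
    or population differences); labels depend only on the shift-free signal.
    """
    X = rng.normal(size=(n, d)) + mean_shift
    y = ((X - mean_shift) @ w_star > 0).astype(float)
    return X, y

# Train a linear probe (logistic regression via gradient descent) on site A only.
Xa, ya = make_embeddings(2000, mean_shift=0.0)
Xb, yb = make_embeddings(2000, mean_shift=0.5)  # held-out site with shift

w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))
    w -= 0.5 * (Xa.T @ (p - ya) / len(ya))
    b -= 0.5 * float(np.mean(p - ya))

def accuracy(X, y):
    return float(np.mean(((X @ w + b) > 0) == (y == 1)))

acc_a, acc_b = accuracy(Xa, ya), accuracy(Xb, yb)
gap = acc_a - acc_b  # cross-dataset performance gap: a crude bias signal
print(f"site A acc={acc_a:.3f}  site B acc={acc_b:.3f}  gap={gap:.3f}")
```

In this toy setting the probe is accurate on its training site but degrades under covariate shift, which is the kind of cross-dataset gap a fairness evaluation is meant to expose; real evaluations would use subgroup-stratified metrics on actual cohorts rather than simulated embeddings.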

Sources

Bias and Generalizability of Foundation Models across Datasets in Breast Mammography

From Embeddings to Accuracy: Comparing Foundation Models for Radiographic Classification

Where You Place the Norm Matters: From Prejudiced to Neutral Initializations

Understanding Nonlinear Implicit Bias via Region Counts in Input Space

Robust learning of halfspaces under log-concave marginals

Unintended Bias in 2D+ Image Segmentation and Its Effect on Attention Asymmetry
