Debiasing and Robustness in Deep Learning

Recent work in deep learning increasingly targets bias and robustness, with particular attention to generalization on out-of-distribution data. One key direction is the development of techniques that identify and suppress spurious correlations, shortcut features that are predictive on the training distribution but fail at test time and leave models biased. Another is the study of attribute imbalance in vision datasets and its impact on model performance. Together, these efforts point to a more nuanced understanding of how data composition, model design, and learned biases interact.
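
A common way to act on the spurious-correlation direction is to penalize statistical dependence between a model's learned features and a known bias attribute during training. The PyTorch sketch below uses plain (unconditional) squared distance correlation as that penalty; it is a minimal illustration of the general idea rather than the conditional distance correlation objective of DISCO or any other method listed below, and the `debiased_loss` helper and `bias_weight` coefficient are assumptions made for the example.

```python
import torch

def distance_correlation_sq(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Squared empirical distance correlation between two batches (n x d_x, n x d_y).

    The value is near zero when x and y are empirically independent and grows with
    their dependence, so minimizing it pushes features to carry less bias information.
    """
    if y.dim() == 1:                        # allow a 1-D attribute vector
        y = y.unsqueeze(1)
    a = torch.cdist(x, x)                   # pairwise distances within x
    b = torch.cdist(y.float(), y.float())   # pairwise distances within y
    # Double-center both distance matrices.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    dcov2 = (A * B).mean()                  # biased estimator of squared distance covariance
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return dcov2 / (torch.sqrt(dvar_x * dvar_y) + eps)

def debiased_loss(task_loss: torch.Tensor, features: torch.Tensor,
                  bias_attribute: torch.Tensor, bias_weight: float = 1.0) -> torch.Tensor:
    """Hypothetical training objective: task loss plus a feature/bias dependence penalty."""
    return task_loss + bias_weight * distance_correlation_sq(features, bias_attribute)
```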

Noteworthy papers in this area include:

  • Iterative Multilingual Spectral Attribute Erasure, which debiases multilingual representations via iterative spectral attribute erasure.
  • Evidential Alignment, which uses uncertainty quantification to improve group robustness against spurious correlations (a generic group-level baseline for this setting is sketched after this list).
  • SCISSOR, which mitigates semantic bias with a cluster-aware Siamese network for more robust classification.
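
For context on the group-robustness setting that Evidential Alignment targets, the PyTorch sketch below shows two standard baselines over (label, spurious-attribute) groups: a group-balanced average and a worst-group loss. This is a generic illustration under the assumption that per-sample losses and integer `group_ids` are available; it does not reproduce the evidential or Siamese machinery of the papers above.

```python
import torch

def group_balanced_loss(per_sample_loss: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
    """Average the loss within each (label, spurious-attribute) group, then across groups,
    so that large groups embodying the spurious correlation cannot dominate training."""
    group_means = torch.stack(
        [per_sample_loss[group_ids == g].mean() for g in group_ids.unique()]
    )
    return group_means.mean()

def worst_group_loss(per_sample_loss: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
    """Loss of the hardest group, the quantity group-robustness work typically reports."""
    group_means = torch.stack(
        [per_sample_loss[group_ids == g].mean() for g in group_ids.unique()]
    )
    return group_means.max()
```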

Sources

  • Iterative Multilingual Spectral Attribute Erasure
  • Improving Group Robustness on Spurious Correlation via Evidential Alignment
  • DISCO: Mitigating Bias in Deep Learning with Conditional Distance Correlation
  • Compositional Attribute Imbalance in Vision Datasets
  • SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification
