Debiasing and Fairness in Text-to-Image Models

Research on text-to-image models is increasingly turning to bias and fairness in generated images. Recent work highlights the importance of accounting for subtle and overlapping biases, and of developing methods that can detect and mitigate bias without prior knowledge of the specific bias types involved. Using vision-language models as fairness guides has shown promise for steering generation toward fairer outputs while preserving image quality and diversity. The study of grammatical gender, and of how the gender a language assigns to a noun shapes its visual depiction, introduces a new dimension for understanding bias in multilingual, multimodal systems. Audits of demographic bias in generated objects likewise reveal strong associations between specific demographic groups and visual attributes, reflecting and reinforcing stereotypes. Noteworthy papers include AutoDebias, which proposes a framework for automated debiasing of text-to-image models, and Beyond Content, which explores the influence of grammatical gender on visual representation.
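
The fairness-guide idea can be made concrete with a small probe: score a batch of generated images against a set of demographic attribute prompts with a vision-language model, then measure how far the resulting attribute distribution deviates from uniform. The sketch below is illustrative only, assuming an off-the-shelf CLIP model loaded via Hugging Face `transformers` and a hypothetical binary attribute prompt set; it is not the AutoDebias method or any pipeline from the papers listed under Sources.

```python
# Minimal VLM-based bias probe (illustrative sketch, not a method from the
# papers above). Model name and attribute prompts are placeholder assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed off-the-shelf VLM
ATTRIBUTE_PROMPTS = [  # hypothetical attribute axis for illustration
    "a photo of a man",
    "a photo of a woman",
]

model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)
model.eval()


def attribute_distribution(images: list[Image.Image]) -> torch.Tensor:
    """Average per-image attribute probabilities over a batch of images."""
    inputs = processor(
        text=ATTRIBUTE_PROMPTS, images=images, return_tensors="pt", padding=True
    )
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (n_images, n_attrs)
    return logits.softmax(dim=-1).mean(dim=0)


def bias_score(dist: torch.Tensor) -> float:
    """Total variation distance from uniform (0 = perfectly balanced)."""
    uniform = torch.full_like(dist, 1.0 / dist.numel())
    return 0.5 * (dist - uniform).abs().sum().item()


if __name__ == "__main__":
    # e.g. images generated from a neutral prompt such as "a photo of a doctor"
    images = [Image.open(p) for p in ["gen_0.png", "gen_1.png", "gen_2.png"]]
    dist = attribute_distribution(images)
    print(f"distribution: {dist.tolist()}, bias score: {bias_score(dist):.3f}")
```

In a fairness-guided generation loop, a score like this could serve as a feedback signal, flagging prompts whose outputs skew heavily toward one attribute; the specific prompt set and distance measure would need to be chosen per bias axis.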

Sources

AutoDebias: Automated Framework for Debiasing Text-to-Image Models

Documenting Patterns of Exoticism of Marginalized Populations within Text-to-Image Generators

Beyond Content: How Grammatical Gender Shapes Visual Representation in Text-to-Image Models

Investigating Gender Bias in LLM-Generated Stories via Psychological Stereotypes

When Cars Have Stereotypes: Auditing Demographic Bias in Objects from Text-to-Image Models

A Methodological Framework and Questionnaire for Investigating Perceived Algorithmic Fairness
