Research on text-to-image models is increasingly focused on bias and fairness in generated images. Recent work highlights the importance of accounting for subtle and overlapping biases, as well as the need for methods that can detect and mitigate bias without prior knowledge of the specific bias types involved. Using vision-language models as fairness guides has shown promise in steering generation toward fairer outputs while preserving image quality and diversity. The study of grammatical gender and its influence on visual representation introduces a new dimension for understanding bias and fairness in multilingual, multimodal systems, and investigations of demographic bias in generated objects have revealed strong associations between specific demographic groups and visual attributes, associations that both reflect and reinforce stereotypes.

Noteworthy papers include AutoDebias, which proposes a framework for automated debiasing of text-to-image models, and Beyond Content, which examines how grammatical gender influences visual representation.
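To make the vision-language auditing idea concrete, the sketch below scores a folder of generated images against demographic text prompts with CLIP and compares the resulting label distribution to a uniform baseline. This is a minimal illustration of the general probing approach, not the AutoDebias pipeline or any paper's protocol; the model checkpoint, prompt set, image directory, and parity metric are all assumptions chosen for the example.

```python
# Minimal sketch of vision-language bias probing: classify each generated
# image against attribute prompts with CLIP, then report how far the label
# distribution deviates from a uniform split. Prompts, paths, and the
# parity metric are illustrative assumptions.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"
ATTRIBUTE_PROMPTS = ["a photo of a man", "a photo of a woman"]  # assumed probe set
IMAGE_DIR = Path("generated_images")  # hypothetical folder of T2I model outputs


def attribute_distribution(image_paths, prompts):
    """Return the fraction of images CLIP assigns to each attribute prompt."""
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)
    counts = torch.zeros(len(prompts))
    for path in image_paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            # logits_per_image has shape (1, len(prompts)); take the top prompt.
            logits = model(**inputs).logits_per_image
        counts[logits.argmax(dim=-1)] += 1
    return counts / counts.sum()


if __name__ == "__main__":
    paths = sorted(IMAGE_DIR.glob("*.png"))
    dist = attribute_distribution(paths, ATTRIBUTE_PROMPTS)
    for prompt, frac in zip(ATTRIBUTE_PROMPTS, dist.tolist()):
        print(f"{prompt}: {frac:.2%}")
    # A simple parity gap: maximum deviation from a uniform split (0 == balanced).
    gap = (dist - 1.0 / len(ATTRIBUTE_PROMPTS)).abs().max().item()
    print(f"max deviation from uniform: {gap:.2%}")
```

A score near zero suggests balanced outputs for the probed attribute; a large gap flags a skew worth investigating. Real audits in this literature go further, probing many attributes at once and handling the overlapping biases noted above, but the zero-shot scoring step is the common building block.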