The field of text-to-image models is placing increasing emphasis on cultural awareness and representation. Researchers recognize the need to address cultural biases in these models, which are often trained on datasets that underrepresent diverse cultural contexts, and several studies have highlighted the limitations of current models in meeting both explicit and implicit cultural expectations. A key area of innovation is the development of new benchmarks and evaluation metrics that assess the cultural relevance and representation of generated images; these benchmarks let researchers better understand the strengths and weaknesses of current models and identify areas for improvement. Noteworthy papers include CuRe, which introduces a novel benchmarking and scoring suite for cultural representativeness; CulturalFrames, which presents a comprehensive study of the alignment of text-to-image models with cultural expectations; CAIRe, which introduces a novel evaluation metric for assessing cultural relevance; and MMMG, which presents a massive multidisciplinary benchmark for text-to-image reasoning.
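To make the shape of such a benchmark concrete, the following is a minimal sketch of a cultural-relevance evaluation harness. Everything here is hypothetical: the `CulturalSample` format, the `relevance_score` word-overlap placeholder (real suites such as CuRe and CAIRe use far more sophisticated, often learned, metrics), and the per-region aggregation are illustrative assumptions, not the actual design of any of the cited benchmarks.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sample format: a prompt plus the cultural context it targets.
@dataclass
class CulturalSample:
    prompt: str
    region: str

def relevance_score(image_caption: str, sample: CulturalSample) -> float:
    """Placeholder scorer: fraction of prompt words recovered in a caption
    of the generated image. Stands in for a learned relevance metric."""
    prompt_words = set(sample.prompt.lower().split())
    caption_words = set(image_caption.lower().split())
    return len(prompt_words & caption_words) / len(prompt_words)

def evaluate(model_outputs: dict[str, str],
             samples: list[CulturalSample]) -> dict[str, float]:
    """Aggregate mean relevance per region, exposing which cultural
    contexts a model handles worst."""
    by_region: dict[str, list[float]] = {}
    for s in samples:
        score = relevance_score(model_outputs[s.prompt], s)
        by_region.setdefault(s.region, []).append(score)
    return {region: mean(scores) for region, scores in by_region.items()}

samples = [
    CulturalSample("a traditional kimono ceremony", "Japan"),
    CulturalSample("a quinceanera celebration", "Mexico"),
]
# Captions of the model's generated images, keyed by prompt (toy data).
outputs = {
    "a traditional kimono ceremony": "a traditional kimono ceremony in a garden",
    "a quinceanera celebration": "a birthday party",
}
print(evaluate(outputs, samples))  # per-region mean relevance in [0, 1]
```

The per-region breakdown is the point of the exercise: a single global score can hide the fact that a model performs well on over-represented cultures while failing on under-represented ones.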