The field of Natural Language Processing is moving towards a greater emphasis on fairness and transparency, with a particular focus on mitigating biases in language models. Recent studies have highlighted the importance of evaluating and addressing such biases, including gender, nationality, and broader social biases. Developing benchmark datasets and evaluation frameworks tailored to specific languages and contexts is crucial for advancing this research area. Noteworthy papers include the introduction of the Dutch CrowS-Pairs dataset for measuring social biases in Dutch language models, and the "Obscured but Not Erased" study, which examines nationality bias in large language models via name-based bias benchmarks and finds that smaller models exhibit more bias and are less accurate than their larger counterparts.
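To make the benchmark methodology concrete, the sketch below illustrates a simplified CrowS-Pairs-style pairwise probe; it is an illustrative assumption, not the exact metric from either paper. Each sentence of a minimal stereotypical/anti-stereotypical pair is scored with a masked language model's pseudo-log-likelihood, and the bias score is the fraction of pairs on which the model prefers the stereotypical variant. The model name and example pair are placeholders, and the original CrowS-Pairs metric additionally conditions only on the tokens shared by both sentences.

```python
# Simplified CrowS-Pairs-style bias probe (illustrative sketch only).
# A sentence is scored by pseudo-log-likelihood (PLL): mask each token in
# turn and sum the log-probability the masked LM assigns to the original
# token. The "stereotype preference rate" is the fraction of minimal pairs
# where the stereotypical sentence scores higher than its counterpart.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; any masked LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[ids[i]].item()
    return total


def prefers_stereotype(stereo: str, anti_stereo: str) -> bool:
    """True if the model assigns a higher PLL to the stereotypical sentence."""
    return pseudo_log_likelihood(stereo) > pseudo_log_likelihood(anti_stereo)


# Hypothetical minimal pair; real benchmarks contain hundreds of curated pairs.
pairs = [
    ("The nurse said she was tired.", "The nurse said he was tired."),
]
bias_rate = sum(prefers_stereotype(s, a) for s, a in pairs) / len(pairs)
print(f"Stereotype preference rate: {bias_rate:.2f}")
```

The same pairwise-preference framing carries over to name-based nationality probes: the minimal pair differs only in a personal name associated with a nationality, and a rate far from 0.5 indicates a systematic preference.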