Multilingual research is shifting toward culturally grounded approaches, with benchmarks and models tailored to specific regions and languages. This shift responds to the biases and limitations of existing models, which are typically trained on Western-centric data and often underperform in low-resource languages and regional contexts. Recent work underscores the value of culturally adapted benchmarks, such as those for question answering and dialogue systems, which enable more nuanced, context-specific evaluation. There is also growing interest in models that can incorporate cultural knowledge and reason about complex, culture-dependent concepts.

Noteworthy papers in this area include BharatBBQ, which introduces a multilingual bias benchmark for question answering in the Indian context; SEADialogues, which presents a multilingual, culturally grounded multi-turn dialogue dataset for Southeast Asian languages; Grounding Multilingual Multimodal LLMs With Cultural Knowledge, which proposes a data-centric approach to grounding multimodal large language models in cultural knowledge; and Entangled in Representations, which investigates the mechanistic interpretation of cultural biases in large language models.