The field of natural language processing is moving toward more inclusive and diverse evaluation of large language models (LLMs), with growing attention to low-resource languages and cultural contexts. Recent studies have highlighted the limitations of current LLMs in understanding and generating text in non-English languages and have introduced new benchmarks and datasets to address these gaps, with the aim of promoting more accurate and culturally sensitive representations of diverse languages and cultures. Notable papers in this area include MELAC, which introduced a comprehensive evaluation dataset for the Persian language and Iranian culture, and FilBench, which presented a Filipino-centric benchmark for evaluating LLMs. MyCulture and MELLA proposed new approaches to evaluating LLMs on Malaysian culture and low-resource languages, respectively, and TASE introduced a comprehensive benchmark for assessing LLMs' token-level understanding and structural reasoning across languages.