The field of large language models (LLMs) is moving toward a greater emphasis on cultural understanding and adaptability. Researchers are developing new frameworks and benchmarks to evaluate and improve the cultural competence of LLMs, recognizing that trustworthy applications must be culturally aligned across diverse environments. One notable direction is the development of dimensional schemas for classifying cultural knowledge, which can guide the automated construction of culture-specific knowledge bases and evaluation datasets. Another significant line of work investigates cross-cultural transfer of commonsense reasoning, which has shown promising results in low-resource cultural settings. In addition, the creation of multimodal and multilingual benchmarks is helping to advance culturally aware language technologies.

Noteworthy papers include:

- CultureScope, which proposes a comprehensive evaluation framework for assessing cultural understanding in LLMs.
- NormGenesis, which presents a multicultural framework for generating and annotating socially grounded dialogues.
- Cross-Cultural Transfer of Commonsense Reasoning in LLMs, which demonstrates the potential for efficient cross-cultural alignment.
- DRISHTIKON, which introduces a multimodal and multilingual benchmark for testing language models' understanding of Indian culture.