Research on large language models (LLMs) is placing growing emphasis on value alignment: evaluating and improving the ability of LLMs to make decisions consistent with human values. Recent work has stressed the importance of the cultural and national context in which LLMs are deployed, and has introduced new benchmarks and evaluation frameworks for assessing alignment with diverse value systems. Notable contributions include CLASH, a dataset for evaluating LLMs on high-stakes dilemmas; NaVAB, a comprehensive benchmark for measuring the alignment of LLMs with national values; and ELAB, an extensive framework for evaluating Persian LLMs along critical ethical dimensions. Together, these developments are advancing the field and strengthening the prospects for safe and beneficial deployment of LLMs.