The field of large language models (LLMs) is rapidly evolving, with a focus on improving their performance, rationality, and moral reasoning. Recent studies have highlighted the importance of evaluating LLMs' ability to prioritize moral outcomes, think rationally, and make decisions that align with human values. The development of new benchmarks and evaluation metrics has enabled researchers to assess LLMs' capabilities in these areas, revealing both strengths and weaknesses. Notably, LLMs have been found to exhibit consistent value biases and susceptibility to adversarial manipulation, emphasizing the need for explainability and cultural awareness in their design. Furthermore, research has shown that LLMs can be used as tools for scaffolding disagreement and promoting more productive group discussions. Overall, the field is moving towards creating more transparent, aligned, and symbiotic LLMs that can effectively support human decision-making and collaboration. Noteworthy papers include: The Morality of Probability, which investigates how LLMs prioritize moral outcomes and reveals strikingly consistent value biases. MORABLES, which presents a benchmark for assessing abstract moral reasoning in LLMs and highlights their susceptibility to adversarial manipulation. Rationality Check, which proposes a benchmark for evaluating the omnibus rationality of LLMs and provides a foundational tool for developers and users.