The field of artificial intelligence is placing growing emphasis on alignment with human values and on the management of uncertainty. Researchers are exploring new approaches to interpretive ambiguity in AI systems, including frameworks that mirror the legal mechanisms used to constrain ambiguity in statutory interpretation. There is also growing recognition that simple quantification of uncertainty is not enough, and that richer expressions of uncertainty are needed to capture the complexities of professional decision-making. In addition, integrating human-centric objectives and cognitively faithful decision-making models is becoming increasingly important for improving AI alignment.

Notable papers in this area include:

- Statutory Construction and Interpretation for Artificial Intelligence, which proposes a computational framework for managing interpretive ambiguity in AI systems.
- Beyond Quantification, which argues for participatory refinement processes to shape how different forms of uncertainty are communicated in professional contexts.
- Towards Cognitively-Faithful Decision-Making Models, which presents an axiomatic approach to learning decision processes from pairwise comparisons that reflect the cognitive processes underlying human decisions (see the sketch after this list for the general pairwise-comparison setting).
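
To make the pairwise-comparison setting concrete, below is a minimal, self-contained sketch of generic Bradley-Terry-style preference learning in Python. It illustrates fitting a utility function from pairwise choices under simple assumptions; it is not the axiomatic, cognitively faithful model proposed in the cited paper, and the attribute names, data, and weights are hypothetical.

```python
import numpy as np

# Hypothetical example: learn utility weights over decision attributes from
# pairwise comparisons ("option A was preferred to option B").
# Generic Bradley-Terry-style sketch, NOT the cited paper's axiomatic model.

rng = np.random.default_rng(0)

# Each option is described by 3 attributes (e.g., cost, risk, benefit) -- assumed.
n_attributes = 3
true_w = np.array([1.5, -2.0, 0.8])          # hidden "ground truth" utility weights

# Simulate pairwise comparisons: the higher-utility option is usually preferred.
n_pairs = 500
A = rng.normal(size=(n_pairs, n_attributes))  # attributes of option A in each pair
B = rng.normal(size=(n_pairs, n_attributes))  # attributes of option B in each pair
p_prefer_A = 1.0 / (1.0 + np.exp(-(A - B) @ true_w))
y = (rng.uniform(size=n_pairs) < p_prefer_A).astype(float)  # 1 if A was preferred

# Fit weights by gradient ascent on the Bradley-Terry log-likelihood:
#   P(A preferred over B) = sigmoid(w . (x_A - x_B))
w = np.zeros(n_attributes)
lr = 0.1
X = A - B
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (y - p) / n_pairs            # gradient of the mean log-likelihood
    w += lr * grad

print("recovered weights:", np.round(w, 2))
print("true weights:     ", true_w)
```

The key modelling assumption in this sketch is that a preference depends only on the difference between the two options' attribute vectors; the cited paper instead develops an axiomatic treatment aimed at capturing how people actually compare options, which this generic example does not attempt to reproduce.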