Advances in AI Alignment and Uncertainty Management

The field of artificial intelligence is placing greater emphasis on alignment with human values and on the management of uncertainty. Researchers are developing new approaches to interpretive ambiguity in AI systems, including frameworks that mirror the legal mechanisms used to constrain ambiguity in statutory interpretation. There is also growing recognition that simple quantification of uncertainty is insufficient, and that richer expressions of uncertainty are needed to capture the complexities of professional decision-making. In parallel, integrating human-centric objectives and cognitively faithful decision-making models is becoming increasingly important for improving AI alignment.

Notable papers in this area include: Statutory Construction and Interpretation for Artificial Intelligence, which proposes a computational framework for managing interpretive ambiguity in AI systems; Beyond Quantification, which argues for participatory refinement processes that shape how different forms of uncertainty are communicated in professional contexts; and Towards Cognitively-Faithful Decision-Making Models, which presents an axiomatic approach to learning decision processes from pairwise comparisons that reflect the cognitive processes underlying human decision-making.
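
To give a concrete sense of what "learning a decision model from pairwise comparisons" can look like in practice, the sketch below fits a simple Bradley-Terry-style linear utility model to simulated choices. This is only an illustrative assumption, not the axiomatic method of the Towards Cognitively-Faithful Decision-Making Models paper; all variable names, features, and parameters here are hypothetical.

```python
# Illustrative sketch (not the paper's method): fitting a Bradley-Terry-style
# utility model to pairwise comparisons with plain NumPy gradient ascent.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each option is described by a feature vector, and the
# decision maker's latent utility is assumed to be linear in those features.
n_features = 3
true_w = np.array([1.5, -0.8, 0.4])  # unknown "ground truth" utility weights


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Simulate pairwise comparisons: option A is preferred to option B with
# probability sigmoid(u(A) - u(B)), the standard Bradley-Terry/logit assumption.
n_pairs = 2000
A = rng.normal(size=(n_pairs, n_features))
B = rng.normal(size=(n_pairs, n_features))
p_prefer_A = sigmoid(A @ true_w - B @ true_w)
y = (rng.random(n_pairs) < p_prefer_A).astype(float)  # 1.0 when A was chosen

# Maximum-likelihood fit by full-batch gradient ascent on the log-likelihood
# of the observed choices.
w = np.zeros(n_features)
lr = 0.05
for _ in range(500):
    diff = (A - B) @ w                       # predicted utility gap per pair
    grad = (A - B).T @ (y - sigmoid(diff)) / n_pairs
    w += lr * grad

print("recovered weights:", np.round(w, 2))  # should roughly track true_w
```

This kind of model recovers a single utility function from choice data; the paper cited above goes further by constraining the learned process to be faithful to how people actually reason, which a plain preference-fitting sketch like this does not capture.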

Sources

Statutory Construction and Interpretation for Artificial Intelligence

Card Sorting with Fewer Cards and the Same Mental Models? A Re-examination of an Established Practice

Beyond Quantification: Navigating Uncertainty in Professional AI Systems

On Aligning Prediction Models with Clinical Experiential Learning: A Prostate Cancer Case Study

Towards Cognitively-Faithful Decision-Making Models to Improve AI Alignment

Towards Ontology-Based Descriptions of Conversations with Qualitatively-Defined Concepts

A Priest, a Rabbi, and an Atheist Walk Into an Error Bar: Religious Meditations on Uncertainty Visualization

One Model, Two Minds: A Context-Gated Graph Learner that Recreates Human Biases
