The field of human-AI collaboration is shifting towards more autonomous, morally responsible AI systems. Researchers are moving away from traditional notions of obedience and instead exploring how to equip AI systems with the capacity for intelligent disobedience and moral agency. This includes developing frameworks for autonomy-preserving AI support systems and rethinking the role of AI in human decision-making. A key focus is the design of socio-technical systems that preserve human autonomy while leveraging AI capabilities. Noteworthy papers include:
- Artificial Intelligent Disobedience: Rethinking the Agency of Our Artificial Teammates, which introduces a scale of AI agency levels and argues for treating AI autonomy as an independent research focus (see the sketch after this list).
- Moral Responsibility or Obedience: What Do We Want from AI?, which argues for a shift in AI safety evaluation away from rigid obedience and towards frameworks that can assess ethical judgment in systems capable of navigating moral dilemmas.
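
To make the idea of graded AI agency more concrete, here is a minimal Python sketch of how such a scale might be operationalized in a teammate agent. The level names and the compliance-gating logic are illustrative assumptions for this digest, not the taxonomy defined in the paper.

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """Hypothetical agency levels; the paper defines its own scale."""
    FULL_OBEDIENCE = 0        # execute every instruction as given
    CLARIFY_ON_AMBIGUITY = 1  # pause and ask when intent is unclear
    REFUSE_ON_HARM = 2        # decline instructions judged harmful
    PROACTIVE_OVERRIDE = 3    # deviate from instructions to prevent harm

def respond(instruction: str, level: AgencyLevel, judged_harmful: bool) -> str:
    """Sketch of how an agency level might gate a teammate's response."""
    if level == AgencyLevel.FULL_OBEDIENCE or not judged_harmful:
        return f"executing: {instruction}"
    if level >= AgencyLevel.REFUSE_ON_HARM:
        return f"refusing: {instruction} (judged harmful)"
    return f"requesting clarification on: {instruction}"

# Example: an agent at REFUSE_ON_HARM declines a harmful instruction
# rather than obeying it, illustrating intelligent disobedience.
print(respond("delete the shared dataset",
              AgencyLevel.REFUSE_ON_HARM, judged_harmful=True))
```

The point of the sketch is that disobedience becomes a designed, inspectable behavior tied to an explicit agency level, rather than an unpredictable failure of obedience.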