The field of human-AI interaction is placing growing emphasis on accountability and transparency. Researchers are exploring ways to ensure that AI systems align with user and societal interests and can be held accountable for their actions. This work includes frameworks for accountability, such as conditional engagement and reciprocity deficits, which can help build warranted trust between humans and AI systems. Another key research area is context engineering: designing systems that understand and adapt to the context in which they are used. This includes ontologies of human behavior, which help machines interpret human intentions and actions. Noteworthy papers in this area include:
- A paper proposing a framework for accountability in human-AI relationships, which incorporates design strategies such as distancing and discouraging.
- A paper introducing the concept of reciprocity deficits, which highlights the need for greater transparency and accountability in AI systems.
- A paper presenting an ontology for the interpretation of human behavior, which provides a formal framework for classifying behaviors and intentions.
- A paper exploring the concept of AI personhood, which proposes a pragmatic framework for navigating the diversification of personhood in the context of AI.
- A paper providing a systematic definition and historical context of context engineering, which outlines key design considerations for practice.
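To make the ontology idea concrete, the sketch below shows one minimal way such a formal framework might classify behaviors and link them to intentions. This is an illustrative assumption, not the cited paper's actual formalism: the names `Behavior`, `Intention`, and `Ontology`, and the example behaviors, are hypothetical, and the "is-a" chain stands in for the subsumption reasoning a real ontology language (e.g., OWL) would provide.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a behavior ontology; names and structure are
# illustrative assumptions, not the cited paper's formalism.

@dataclass(frozen=True)
class Intention:
    name: str

@dataclass(frozen=True)
class Behavior:
    name: str
    parent: Optional["Behavior"] = None  # "is-a" relation to a broader behavior

class Ontology:
    def __init__(self) -> None:
        # Maps a behavior name to the intentions asserted for it.
        self.links: dict[str, set[str]] = {}

    def assert_intention(self, behavior: Behavior, intention: Intention) -> None:
        self.links.setdefault(behavior.name, set()).add(intention.name)

    def interpret(self, behavior: Behavior) -> set[str]:
        # Collect intentions along the is-a chain, so a specific behavior
        # inherits the interpretations of the broader class it belongs to.
        found: set[str] = set()
        node: Optional[Behavior] = behavior
        while node is not None:
            found |= self.links.get(node.name, set())
            node = node.parent
        return found

# Example: "waving" is-a "gesture"; gestures signal communication in general,
# while waving specifically signals a greeting.
gesture = Behavior("gesture")
waving = Behavior("waving", parent=gesture)

onto = Ontology()
onto.assert_intention(gesture, Intention("communicate"))
onto.assert_intention(waving, Intention("greet"))

print(sorted(onto.interpret(waving)))  # ['communicate', 'greet']
```

The point of the subsumption walk in `interpret` is that classifying a behavior under a broader category is what lets the machine attach general intentions to specific observed actions, which is the interpretive role such ontologies play.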