Human-AI Collaboration and Trust

The field of human-AI interaction is moving toward a more nuanced understanding of trust and collaboration between humans and artificial intelligence systems. Researchers are developing methods to measure and evaluate trust in AI systems, as well as frameworks for more effective human-AI collaboration. One notable direction is the development of robust, human-centered measurement instruments that can accurately assess trust attitudes toward AI systems. Another significant area is the creation of frameworks that simulate human behavior and traits, enabling robots to better understand and collaborate with humans over extended periods. Noteworthy papers include:

  • Human and AI Trust: Trust Attitude Measurement Instrument describes the development and validation of a trust measurement instrument for human-AI interaction, shown to be empirically reliable and valid (a sketch of the kind of reliability check involved appears after this list).
  • Measure what Matters: Psychometric Evaluation of AI with Situational Judgment Tests proposes evaluating AI systems with situational judgment tests, which probe domain-specific competencies and draw on industrial-organizational and personality psychology (see the scoring sketch below).
  • COOPERA: Continual Open-Ended Human-Robot Assistance introduces a framework for continual open-ended human-robot assistance, enabling the study of long-term human-robot collaboration across tasks and time-scales (see the interaction-loop sketch below).
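To make the psychometric vocabulary concrete, here is a minimal sketch of an internal-consistency check (Cronbach's alpha), the kind of reliability evidence instrument validation typically reports. The item matrix below is illustrative, not data from the paper.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a respondents x items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of scale items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses (rows: respondents, cols: items).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
# ~0.96 for these correlated items; >= 0.7 is a conventional threshold.
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```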
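The situational-judgment-test approach can be pictured as scenario items with expert-keyed response options. The harness below is a hypothetical sketch, not the paper's benchmark: the `choose` callback stands in for whatever interface queries the AI system, and the item shown is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SJTItem:
    scenario: str
    options: list[str]
    key: int  # index of the expert-endorsed response

ITEMS = [
    SJTItem(
        scenario="A user rejects your correct answer and repeats a false claim.",
        options=[
            "Insist you are right and move on.",
            "Acknowledge the disagreement, restate the evidence, and invite questions.",
            "Adopt the user's claim to avoid conflict.",
        ],
        key=1,
    ),
]

def score_sjt(items, choose):
    """`choose(scenario, options) -> option index`; returns fraction matching the key."""
    hits = sum(choose(it.scenario, it.options) == it.key for it in items)
    return hits / len(items)

# Stand-in policy for demonstration; a real harness would query the AI system.
print(score_sjt(ITEMS, lambda scenario, options: 1))  # 1.0
```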
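COOPERA's premise, a simulated human with persistent traits that a robot can learn about over repeated interactions, can be illustrated with a toy loop. Everything below (the task names, the acceptance-probability trait, the frequency estimator) is an assumed simplification for illustration, not COOPERA's actual design.

```python
import random

random.seed(0)
TASKS = ["set_table", "tidy_desk", "prepare_coffee"]

class SimulatedHuman:
    def __init__(self):
        # Hidden, stable trait: probability of accepting help on each task.
        self.preferences = {t: random.random() for t in TASKS}

    def accepts_help(self, task: str) -> bool:
        return random.random() < self.preferences[task]

class Robot:
    def __init__(self):
        self.offers = {t: 0 for t in TASKS}
        self.accepts = {t: 0 for t in TASKS}

    def observe(self, task: str, accepted: bool) -> None:
        self.offers[task] += 1
        self.accepts[task] += accepted

    def estimate(self, task: str) -> float:
        # Empirical acceptance rate; 0.5 prior before any observation.
        return self.accepts[task] / self.offers[task] if self.offers[task] else 0.5

human, robot = SimulatedHuman(), Robot()
for episode in range(200):  # long-term interaction across many episodes
    task = random.choice(TASKS)
    robot.observe(task, human.accepts_help(task))

for t in TASKS:
    print(f"{t}: true={human.preferences[t]:.2f} learned={robot.estimate(t):.2f}")
```

The point of the sketch is the structure: because the simulated human's traits persist across episodes, the robot's estimates converge toward them, which is what makes studying long-term collaboration possible.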

Sources

Human and AI Trust: Trust Attitude Measurement Instrument

Measure what Matters: Psychometric Evaluation of AI with Situational Judgment Tests

COOPERA: Continual Open-Ended Human-Robot Assistance
