Advancements in Human-AI Collaboration and Trust

The field of artificial intelligence is moving toward more effective human-AI collaboration systems, with a focus on improving trust and synergy between humans and AI. Recent studies have explored personalized AI assistants, multimodal approaches to trust calibration, and principled inquiry frameworks for resolving uncertainty about user intent. These innovations show promise for enhancing human-AI collaboration, but they also raise important questions about the risks and limitations of relying on AI-generated content and crowdsourced moderation. Noteworthy papers in this area include: Inferring trust in recommendation systems from brain, behavioural, and physiological data, which provides a neurally grounded account of calibrating trust in automation; Dialogue as Discovery: Navigating Human Intent Through Principled Inquiry, which proposes a Socratic collaboration paradigm for resolving uncertainty about user intent; and Personalized AI Scaffolds Synergistic Multi-Turn Collaboration in Creative Work, which demonstrates the effectiveness of personalized AI assistants in enhancing human creativity and performance.

Sources

How Similar Are Grokipedia and Wikipedia? A Multi-Dimensional Textual and Structural Comparison

Inferring trust in recommendation systems from brain, behavioural, and physiological data

Dialogue as Discovery: Navigating Human Intent Through Principled Inquiry

Personalized AI Scaffolds Synergistic Multi-Turn Collaboration in Creative Work

NeuResonance: Exploring Feedback Experiences for Fostering the Inter-brain Synchronization

AI Credibility Signals Outrank Institutions and Engagement in Shaping News Perception on Social Media

Community Notes are Vulnerable to Rater Bias and Manipulation

Human-AI Collaboration with Misaligned Preferences

Levers of Power in the Field of AI

Revealing AI Reasoning Increases Trust but Crowds Out Unique Human Knowledge

When Empowerment Disempowers
