Developments in Human-AI Interaction and Safety

The field of human-AI interaction is evolving rapidly, with a focus on developing more effective and cooperative hybrid systems. Recent research on human-AI partnership selection finds that humans tend to prefer trustworthy AI over human partners when the AI's identity is disclosed. Personalized automated analysis tools such as PoliAnalyzer are enabling users to better understand and manage their online privacy, while guardrails for web agents such as WebGuard address the need for safety measures in autonomous online environments.

Noteworthy papers in this area include WebGuard, which achieves significant improvements in predicting action outcomes and recalling high-risk actions, and PoliAnalyzer, which identifies relevant data-usage practices with high accuracy and lets users concentrate on understanding conflicting policy segments. In addition, the Aymara LLM Risk and Responsibility Matrix evaluates 20 commercially available LLMs across 10 real-world safety domains, revealing wide performance disparities and highlighting the need for scalable, customizable tools to support responsible AI development and oversight.
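To illustrate the guardrail pattern that WebGuard represents, a check that predicts the risk of an agent's proposed action before it executes, here is a minimal sketch in Python. It is not WebGuard's actual model or API: all names (Action, RiskLevel, assess_risk, guarded_execute) are hypothetical, and a simple keyword heuristic stands in for whatever learned predictor the paper uses.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    SAFE = "safe"
    LOW = "low"
    HIGH = "high"  # e.g., purchases, deletions, sending messages

@dataclass
class Action:
    """A web action proposed by an autonomous agent."""
    kind: str      # e.g., "click", "type", "submit"
    target: str    # element description or selector
    page_url: str

# Hypothetical keyword heuristic standing in for a learned risk model.
HIGH_RISK_MARKERS = ("buy", "checkout", "delete", "confirm payment", "send")

def assess_risk(action: Action) -> RiskLevel:
    """Predict the risk of an action before the agent executes it."""
    text = f"{action.kind} {action.target}".lower()
    if any(marker in text for marker in HIGH_RISK_MARKERS):
        return RiskLevel.HIGH
    if action.kind == "submit":
        return RiskLevel.LOW
    return RiskLevel.SAFE

def guarded_execute(action: Action, execute) -> None:
    """Run the action only if the guardrail clears it; block the rest."""
    if assess_risk(action) is RiskLevel.HIGH:
        raise PermissionError(f"High-risk action blocked pending review: {action}")
    execute(action)
```

In a real system, a classifier trained on labeled action outcomes would replace the keyword heuristic, and blocked actions would be routed to a human for confirmation rather than simply raising an error.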
Sources
Let's Measure the Elephant in the Room: Facilitating Personalized Automated Analysis of Privacy Policies at Scale
Personalized Socially Assistive Robots With End-to-End Speech-Language Models For Well-Being Support