Advancements in Human-Like Interactions with Artificial Intelligence

Artificial intelligence research is moving toward more human-like interaction, with particular focus on the realism and believability of non-player characters (NPCs) in virtual reality environments and of embodied AI agents. Researchers are exploring large language models as a way to enhance these agents' interaction capabilities, including their ability to understand and respond to human emotions and values. New evaluation methods are also emerging, such as React to This (RTT), a nonverbal Turing test for embodied AI. In parallel, there is growing interest in how stylistic similarity between humans and AI systems shapes user preference and trust. Together, these advances point toward more sophisticated and engaging human-AI interaction. Noteworthy papers include Internal Value Alignment in Large Language Models through Controlled Value Vector Activation, which aligns a model's internal values with human values through controlled activation of value-related vectors, and Value-Based Large Language Model Agent Simulation for Mutual Evaluation of Trust and Interpersonal Closeness, which demonstrates the importance of value similarity in building trust and close relationships between AI agents.
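
The digest does not describe how Controlled Value Vector Activation actually works. As a rough illustration of the general family of techniques the title suggests (activation steering, where a direction in hidden-state space associated with a target value is added to a model's activations at inference time), here is a minimal PyTorch sketch. The toy model, the `value_vector`, and the `alpha` strength parameter are all hypothetical stand-ins for illustration, not the paper's actual method.

```python
# Minimal activation-steering sketch (hypothetical; not the paper's method).
# A "value vector" -- a direction in hidden-state space associated with a
# target value (e.g. honesty) -- is added to one layer's activations via a
# forward hook, nudging the model's outputs along that direction.
import torch
import torch.nn as nn

HIDDEN = 64

class ToyBlock(nn.Module):
    """Stand-in for one transformer layer of a real LLM."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = nn.Sequential(ToyBlock(), ToyBlock(), ToyBlock())

# Hypothetical value vector: in practice it might be estimated from the
# difference of mean activations on value-consistent vs. value-violating
# prompts; here it is random purely for illustration.
value_vector = torch.randn(HIDDEN)
value_vector = value_vector / value_vector.norm()
alpha = 4.0  # steering strength (assumed tunable)

def steer(module, inputs, output):
    # Shift this layer's activations along the value direction.
    return output + alpha * value_vector

# Attach the steering hook to the middle layer only.
handle = model[1].register_forward_hook(steer)

x = torch.randn(2, HIDDEN)  # stand-in for token hidden states
steered = model(x)
handle.remove()
unsteered = model(x)
print("mean shift along value direction:",
      ((steered - unsteered) @ value_vector).mean().item())
```

Hooking a single intermediate layer, as above, mirrors the common design choice in steering work of intervening at one or a few layers rather than everywhere, so the rest of the network can still reshape the steered representation.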

Sources

An Empirical Evaluation of AI-Powered Non-Player Characters' Perceived Realism and Performance in Virtual Reality Environments

React to This (RTT): A Nonverbal Turing Test for Embodied AI

How Stylistic Similarity Shapes Preferences in Dialogue Dataset with User and Third Party Evaluations

Internal Value Alignment in Large Language Models through Controlled Value Vector Activation

Subjective Evaluation Profile Analysis of Science Fiction Short Stories and its Critical-Theoretical Significance

Value-Based Large Language Model Agent Simulation for Mutual Evaluation of Trust and Interpersonal Closeness
