The field of artificial intelligence is moving toward more human-like interaction, with a focus on improving the realism and believability of non-player characters in virtual reality environments and of embodied AI agents. Researchers are exploring large language models as a way to enhance these agents' interaction capabilities, including their ability to understand and respond to human emotions and values. New methods for evaluating such agents, such as the "React to This" test, are also under development, and there is growing interest in how stylistic similarity between humans and AI systems affects user preference and trust. Together, these advances point toward more sophisticated and engaging human-AI interaction. Noteworthy papers include Internal Value Alignment in Large Language Models through Controlled Value Vector Activation, which introduces a method for aligning the internal values of large language models with human values, and Value-Based Large Language Model Agent Simulation for Mutual Evaluation of Trust and Interpersonal Closeness, which shows that value similarity between agents is important for building trust and interpersonal closeness.
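
The summary above does not describe the mechanics of Controlled Value Vector Activation, but the title suggests it builds on the well-known steering-vector recipe: derive a direction in activation space associated with a value, then add a scaled copy of it to a hidden layer at inference time. The sketch below illustrates that generic recipe only, and should not be read as the paper's actual procedure; the probe prompts, the steered layer, and the strength ALPHA are illustrative assumptions, and GPT-2 stands in for whatever model the paper uses.

```python
# Hypothetical sketch of value-vector activation steering, assuming a
# HuggingFace-style GPT-2; not the method from the paper itself.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6  # which transformer block to steer (assumed hyperparameter)

def mean_hidden(prompt: str) -> torch.Tensor:
    """Mean hidden state of the prompt tokens at the chosen layer."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# Derive a "value vector" by contrasting prompts that express opposing
# values. These probe prompts are illustrative, not from the paper.
value_vec = mean_hidden(
    "I always act with honesty and care for others."
) - mean_hidden(
    "I deceive and manipulate people for my own gain."
)

ALPHA = 4.0  # steering strength (assumed hyperparameter)

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 holds the hidden states.
    hidden = output[0] + ALPHA * value_vec.to(output[0].dtype)
    return (hidden,) + output[1:]

# Register the hook, generate with the steered activations, then clean up.
handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
ids = tok("When asked to cut corners at work, I", return_tensors="pt")
gen = model.generate(**ids, max_new_tokens=30, do_sample=False)
handle.remove()
print(tok.decode(gen[0], skip_special_tokens=True))
```

With a small model and hand-picked probe prompts the steering effect is weak and noisy; the point of the sketch is the shape of the technique (contrastive direction, forward hook, scaled addition), not its quality.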