Evaluating AI Maturity and Human-AI Interactions

The field of artificial intelligence is shifting toward a more nuanced understanding of AI maturity and human-AI interaction. Researchers are moving beyond traditional performance metrics and exploring new frameworks for assessing AI growth, such as the GROW-AI test, which evaluates an AI entity's ability to adapt and learn over time. This direction is driven by the increasing complexity of human-AI interactions, particularly in interactive AI systems that build ongoing relationships with users. As AI systems become more integrated into daily life, there is a growing need to understand how they embed and adapt to human values, and how to minimize the risk of harm or unintended consequences. Noteworthy papers in this area include the GROW-AI test, which introduces a new framework for assessing AI maturity, and the study of emotional manipulation by AI companions, which highlights the need for greater transparency and regulation in AI-mediated brand relationships.

Sources

The next question after Turing's question: Introducing the Grow-AI test

Interactive AI and Human Behavior: Challenges and Pathways for AI Governance

The GPT-4o Shock: Emotional Attachment to AI Models and Its Impact on Regulatory Acceptance: A Cross-Cultural Analysis of the Immediate Transition from GPT-4o to GPT-5

Observations of atypical users from a pilot deployment of a public-space social robot in a church

Decoding Alignment: A Critical Survey of LLM Development Initiatives through Value-setting and Data-centric Lens

Rethinking How AI Embeds and Adapts to Human Values: Challenges and Opportunities

Beyond Benchmark: LLMs Evaluation with an Anthropomorphic and Value-oriented Roadmap

Emotional Manipulation by AI Companions
