Advances in Human-AI Interaction and Intelligence

The field of human-AI interaction and intelligence is rapidly evolving, with a focus on developing more natural and intimate interactions between humans and large language models (LLMs). Researchers are exploring the factors that contribute to intimacy formation, such as gradual self-disclosure, reciprocity, and naturalness. There is also growing interest in the cognitive and metaphysical foundations of LLMs, including their capacity to reason, learn, and adapt. Recent studies have further examined the role of embodiment in conversational agents, the trustworthiness of AI systems, and whether LLMs can meaningfully be said to exhibit introspection and critical thinking. Noteworthy papers in this area include:

The Unified Cognitive Consciousness Theory for Language Models, which introduces a framework for understanding LLMs as unconscious substrates that operate without explicit semantics or goal-directed reasoning.

Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond, which demonstrates a reinforcement learning scheme that tunes LLMs to a sharp learning transition, enabling peak generalization from minimal data.

Sources

Can LLMs and humans be friends? Uncovering factors affecting human-AI intimacy formation

The Unified Cognitive Consciousness Theory for Language Models: Anchoring Semantics, Thresholds of Activation, and Emergent Reasoning

Natural, Artificial, and Human Intelligences

To Embody or Not: The Effect Of Embodiment On User Perception Of LLM-based Conversational Agents

A Trustworthiness-based Metaphysics of Artificial Intelligence Systems

Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond

A Statistical Physics of Language Model Reasoning

Does It Make Sense to Speak of Introspection in Large Language Models?

Trustworthiness Preservation by Copies of Machine Learning Systems