Advances in Human-Agent Collaboration and Adaptive Strategies

The field of human-agent collaboration is moving toward adaptive, responsive strategies for effective teamwork. Recent research focuses on agents that learn to represent, categorize, and adapt to a broad range of potential partner strategies in real time, so that they can coordinate with previously unseen human partners. Concretely, this line of work uses variational autoencoders to learn latent strategy spaces, clustering to identify distinct strategy types, and regret-minimization algorithms to infer and update estimates of the partner's strategy online (a minimal sketch of this inference step appears below).

A second thread is structured imitation learning, which combines generative single-agent policy learning with game-theoretic structure, recovered through inverse games, to learn interactive policies that coordinate with humans in shared spaces.

Noteworthy papers in this area include Adaptively Coordinating with Novel Partners via Learned Latent Strategies, which introduces a strategy-conditioned cooperator framework and achieves state-of-the-art performance in a complex collaborative cooking environment, and DiffFP: Learning Behaviors from Scratch via Diffusion-based Fictitious Play, which proposes a fictitious-play framework that estimates the best response to unseen opponents while learning a robust, multimodal behavioral policy, reporting up to 3x faster convergence and on average 30x higher success rates than RL-based baselines (see the fictitious-play sketch further below).
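
To make the online inference step concrete, here is a minimal sketch of regret-minimization-based partner-strategy inference. It assumes, purely for illustration, that a pre-trained variational autoencoder and k-means clustering have already reduced partner behavior to K strategy prototypes; the Hedge-style multiplicative-weights update, the stand-in prototype predictions, and all constants are assumptions of this sketch, not code from the cited papers.

```python
import numpy as np

# Hedge-style regret minimization over clustered strategy prototypes.
# Hypothetical setup: a pre-trained VAE has embedded partner trajectories
# into a latent space, and k-means on those latents produced K prototype
# policies. Each prototype is reduced here to a fixed action prediction
# so the example stays self-contained and runnable.

rng = np.random.default_rng(0)

K = 4                          # strategy clusters found in the latent space
weights = np.full(K, 1.0 / K)  # uniform prior over partner strategy types
eta = 0.5                      # step size of the multiplicative-weights update

# Stand-in prototype predictions: prototype k always predicts action k % 3.
prototype_predictions = np.array([k % 3 for k in range(K)])

for t in range(50):
    # The real partner mostly behaves like prototype 2 (predicts action 2).
    observed = 2 if rng.random() < 0.9 else int(rng.integers(3))
    # 0/1 loss: which prototypes failed to predict the observed action?
    losses = (prototype_predictions != observed).astype(float)
    # Exponential-weights (Hedge) update, then renormalize.
    weights *= np.exp(-eta * losses)
    weights /= weights.sum()

print("posterior over strategy clusters:", weights.round(3))
print("inferred partner strategy cluster:", int(np.argmax(weights)))
```

The multiplicative-weights update is one standard regret-minimization scheme; any no-regret learner over the cluster prototypes would fill the same role of dynamically adjusting the strategy estimate as new partner actions arrive.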

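For context on DiffFP, the sketch below shows classical fictitious play on rock-paper-scissors: the agent maintains an empirical model of the opponent's action distribution and best-responds to it. DiffFP replaces this tabular best response with a diffusion-model policy learned in continuous robot tasks, so this is a simplified illustration of the underlying loop, not the paper's method.

```python
import numpy as np

# Classical fictitious play on rock-paper-scissors.
# Row player's payoff matrix; rows = own action, cols = opponent action,
# both ordered (rock, paper, scissors).
PAYOFF = np.array([
    [ 0, -1,  1],   # rock
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

counts = np.ones(3)      # smoothed counts of observed opponent actions
total_payoff = 0.0
rng = np.random.default_rng(1)

for t in range(1000):
    belief = counts / counts.sum()               # empirical opponent model
    my_action = int(np.argmax(PAYOFF @ belief))  # best response to the belief
    opponent = rng.choice(3, p=[0.2, 0.5, 0.3])  # fixed stochastic opponent
    total_payoff += PAYOFF[my_action, opponent]
    counts[opponent] += 1                        # update the empirical model

belief = counts / counts.sum()
names = ["rock", "paper", "scissors"]
print("estimated opponent strategy:", belief.round(2))
print("converged best response:", names[int(np.argmax(PAYOFF @ belief))])
print("average payoff:", total_payoff / 1000)
```
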
Sources

Adaptively Coordinating with Novel Partners via Learned Latent Strategies

Structured Imitation Learning of Interactive Policies through Inverse Games

DiffFP: Learning Behaviors from Scratch via Diffusion-based Fictitious Play

Informative Communication of Robot Plans

I've Changed My Mind: Robots Adapting to Changing Human Goals during Collaboration
