Research in language modeling and multi-agent systems is advancing rapidly along several fronts: inference-time scaling, emergent communication, and collective behavior modeling. A key direction is the development of more efficient methods for sampling and generating text, particularly in reward-guided settings and decentralized multi-agent environments. The integration of symbolic and connectionist AI is also attracting growing interest, with potential applications in language understanding and generation. Notable papers in this area include:
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling, which applies particle Gibbs sampling to reward-guided inference-time scaling of discrete diffusion models (a sketch of the general idea follows this list).
- AI Mother Tongue: Self-Emergent Communication in MARL via Endogenous Symbol Systems, which demonstrates the emergence of symbolic communication in multi-agent systems without external inductive biases (see the signaling-game sketch below).
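To make the particle Gibbs idea concrete, here is a minimal, reward-weighted sketch, not the paper's actual algorithm or API. Particle Gibbs iterates conditional-SMC sweeps: in each sweep, one particle is clamped to the previous sweep's output trajectory while the remaining particles are proposed and resampled by weight, so the retained reference keeps the chain valid while new proposals explore. `denoise_step` and `reward` below are hypothetical placeholders for a diffusion language model's reverse step and a reward model.

```python
import numpy as np

VOCAB, SEQ_LEN = 32, 16
N_PARTICLES, N_STEPS = 8, 10

def denoise_step(x, t, rng):
    """Hypothetical reverse-diffusion step: propose a less-noisy sequence.
    Placeholder only; a real model would denoise conditioned on step t."""
    x = x.copy()
    idx = rng.integers(0, SEQ_LEN, size=2)
    x[idx] = rng.integers(0, VOCAB, size=2)
    return x

def reward(x):
    """Hypothetical sequence-level reward (e.g. a reward model score)."""
    return float((x == 0).mean())  # toy target: prefer token 0

def csmc_sweep(ref_traj, rng):
    """One conditional-SMC sweep: particle 0 is clamped to the reference
    trajectory; the others are proposed and resampled by reward weight."""
    particles = [rng.integers(0, VOCAB, SEQ_LEN) for _ in range(N_PARTICLES)]
    particles[0] = ref_traj[0].copy()
    traj = [[p.copy()] for p in particles]
    for t in range(N_STEPS):
        particles[0] = ref_traj[t + 1].copy()   # clamp the reference path
        for i in range(1, N_PARTICLES):
            particles[i] = denoise_step(particles[i], t, rng)
        w = np.exp([reward(p) for p in particles])
        w /= w.sum()
        anc = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)
        anc[0] = 0                              # never discard the reference
        particles = [particles[a].copy() for a in anc]
        traj = [traj[a] + [particles[k]] for k, a in enumerate(anc)]
    return traj[rng.choice(N_PARTICLES, p=w)]   # next sweep's reference

rng = np.random.default_rng(0)
ref = [rng.integers(0, VOCAB, SEQ_LEN)]         # plain ancestral run to start
for t in range(N_STEPS):
    ref.append(denoise_step(ref[-1], t, rng))
for _ in range(3):                              # particle Gibbs: iterate sweeps
    ref = csmc_sweep(ref, rng)
print("final reward:", reward(ref[-1]))
```

Each sweep tends to move the reference toward higher-reward sequences while the clamped particle preserves the sampler's stationary behavior; a real implementation would use the model's denoising distribution for proposals and proper incremental weights rather than this toy reward exponent.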
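The emergence of symbolic communication without built-in semantics can be illustrated with a classic Lewis signaling game: two independently learning agents, a shared reward, and an initially meaningless discrete channel. This is a generic minimal demonstration, not the AI Mother Tongue method; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MEANINGS = 4      # private states the speaker observes
N_SYMBOLS = 4       # discrete symbols available on the channel
EPS, LR = 0.1, 0.2  # exploration rate and learning rate

# Tabular Q-values: speaker maps meaning -> symbol, listener maps
# symbol -> action. No symbol semantics are built in, so whatever
# convention emerges comes purely from the shared reward signal.
q_speaker = np.zeros((N_MEANINGS, N_SYMBOLS))
q_listener = np.zeros((N_SYMBOLS, N_MEANINGS))

def act(q_row):
    """Epsilon-greedy choice over one Q-table row."""
    if rng.random() < EPS:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

for step in range(20000):
    meaning = int(rng.integers(N_MEANINGS))     # speaker's private state
    symbol = act(q_speaker[meaning])            # emitted message
    action = act(q_listener[symbol])            # listener's guess
    r = 1.0 if action == meaning else 0.0       # shared reward
    q_speaker[meaning, symbol] += LR * (r - q_speaker[meaning, symbol])
    q_listener[symbol, action] += LR * (r - q_listener[symbol, action])

# After training, the greedy policies often form a bijection: each
# meaning has acquired its own symbol from reward alone.
print("speaker protocol:", q_speaker.argmax(axis=1))
print("listener protocol:", q_listener.argmax(axis=1))
```

Decentralized learners in this game frequently converge to a consistent signaling system, though they can settle into partial (pooling) conventions; richer endogenous symbol systems of the kind the paper studies would require sequential messages and larger state spaces.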