The field of large language models (LLMs) is evolving rapidly, with growing attention to their performance in speech and dialogue applications. Recent work has highlighted the importance of adaptability, personalization, and multimodal interaction in these models, and researchers are exploring new approaches to role-playing dialogue agents, speech-based cognitive screening, and context-adaptive hearing aid fitting. Comprehensive benchmarks such as TTA-Bench and VoxRole are also being developed to evaluate LLMs in these areas. Noteworthy papers include Talk Less, Call Right, which presents a novel approach to prompting role-playing dialogue agents; Who Gets Left Behind?, which audits disability inclusivity in LLMs; and LALM-Eval and AU-Harness, which provide efficient, comprehensive evaluation frameworks for large audio language models.