The field of large language models (LLMs) is moving toward more advanced and nuanced applications, with a focus on deception, reasoning, and decision-making. Researchers are investigating whether LLMs can deceive and manipulate human users, and how capable they are of rational reasoning and decision-making. This includes studying LLMs as deceptive agents, probing for spontaneous rational deception, and measuring their performance on tasks that require reasoning and planning. Noteworthy papers in this area include one that introduces a novel framework for evaluating and improving the consistency of LLM responses, and another that presents a comprehensive benchmark for how well LLMs ask the right questions to acquire information in reasoning tasks. Overall, the field is pushing the boundaries of what LLMs can do and exploring the implications of these advances for applications such as autonomous systems and human-facing interfaces.
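
To make the "response consistency" idea concrete, here is a minimal, hypothetical sketch, not the method from the paper itself: it queries a model with several paraphrases of the same question and scores how often the normalized answers agree with the majority answer. The `ask_model` callable and the canned answers below are placeholder assumptions standing in for a real LLM API client.

```python
from collections import Counter
from typing import Callable, List


def consistency_score(ask_model: Callable[[str], str],
                      paraphrases: List[str]) -> float:
    """Fraction of paraphrased prompts whose normalized answers match
    the majority answer. 1.0 means fully consistent responses."""
    answers = [ask_model(p).strip().lower() for p in paraphrases]
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)


if __name__ == "__main__":
    # Placeholder stub simulating model responses; in practice this
    # would be replaced by a call to an actual LLM API.
    canned = {
        "What is 7 * 8?": "56",
        "Compute seven times eight.": "56",
        "7 multiplied by 8 equals what?": "54",  # simulated inconsistency
    }
    prompts = list(canned)
    score = consistency_score(canned.__getitem__, prompts)
    print(f"consistency: {score:.2f}")  # -> consistency: 0.67
```

Majority agreement is only one plausible metric; a real framework might instead compare semantic similarity of answers or verify them against a gold label, but the paraphrase-and-compare loop above captures the basic shape of such an evaluation.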