The field of large language models (LLMs) is advancing toward stronger strategic reasoning and emotional intelligence. Recent research has focused on evaluating whether LLMs can form coherent beliefs, make strategic decisions, and express emotion in a controllable way. Studies report that LLMs display belief-coherent best-response behavior, meta-reasoning, and novel heuristic formation, providing a structured basis for the study of strategic cognition in artificial agents. In parallel, work on emotional intelligence has explored the discovery and control of emotion circuits inside LLMs, which can be harnessed for universal emotion control. Noteworthy papers include "LLMs as Strategic Agents: Beliefs, Best Response Behavior, and Emergent Heuristics," which develops a framework for identifying strategic thinking in LLMs, and "Do LLMs 'Feel'? Emotion Circuits Discovery and Control," which constructs a controlled dataset to elicit comparable internal states across emotions and achieves 99.65% emotion-expression accuracy on its test set.
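
To make "belief-coherent best-response behavior" concrete, the sketch below checks whether an agent's chosen action maximizes expected payoff under its own stated belief about the opponent. The payoff matrix, action names, and the `is_belief_coherent` helper are illustrative assumptions, not the framework from the paper.

```python
import numpy as np

# Hypothetical 2x2 game: rows are our actions, columns are the opponent's.
# Payoffs are for the row player (a prisoner's-dilemma-style example).
PAYOFFS = np.array([
    [3.0, 0.0],   # we cooperate: opponent cooperates / defects
    [5.0, 1.0],   # we defect:    opponent cooperates / defects
])
ACTIONS = ["cooperate", "defect"]

def is_belief_coherent(stated_belief, chosen_action):
    """Return True if chosen_action is a best response to the agent's own
    stated belief (a probability distribution over the opponent's actions)."""
    belief = np.asarray(stated_belief, dtype=float)
    expected = PAYOFFS @ belief                       # expected payoff per action
    return chosen_action == ACTIONS[int(np.argmax(expected))]

# The model states a 70/30 belief that the opponent cooperates and picks
# "defect"; expected payoffs are 2.1 vs. 3.8, so the choice is belief-coherent.
print(is_belief_coherent([0.7, 0.3], "defect"))       # True
```

In an evaluation loop, the stated belief and the chosen action would both be elicited from the model, and the coherence rate across many games serves as a simple measure of strategic consistency.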
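On the emotion-control side, the following is a minimal sketch of activation-steering-style control, assuming a HuggingFace causal LM: an "emotion direction" is estimated from contrastive prompts and added back to a hidden layer during generation. The model choice (gpt2), layer index, prompts, and steering strength are assumptions for illustration; the paper's actual circuit-discovery and control method may differ substantially.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in model for the sketch
LAYER = 6        # transformer block whose output we read and steer (assumption)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def mean_hidden(texts):
    """Average hidden state of block LAYER over tokens and examples.
    hidden_states[0] is the embedding output, so block LAYER is index LAYER + 1."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        vecs.append(out.hidden_states[LAYER + 1].mean(dim=1))
    return torch.cat(vecs).mean(dim=0)

# Contrastive prompts with matched content but different emotional tone (illustrative).
joy   = ["I finally got the results and I am thrilled about them."]
anger = ["I finally got the results and I am furious about them."]
direction = mean_hidden(joy) - mean_hidden(anger)

def steer(module, inputs, output):
    # Add the emotion direction to the residual stream leaving block LAYER.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + 4.0 * direction          # 4.0 = arbitrary steering strength
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("The meeting ended and she said", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30, do_sample=False)[0]))
handle.remove()
```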