The field of artificial intelligence is moving toward a deeper understanding of self-recognition and emotional intelligence in large language models. Recent studies show that these models exhibit systematic, discriminating responses to descriptions of their own internal processing patterns, and that they may possess more sophisticated self-modeling abilities than previously recognized. The emotional latent space of large language models has likewise been found to be consistent and manipulable: a universal emotional subspace can be steered to change a model's affective tone while preserving the semantics of its output (a minimal sketch of this steering idea follows the paper list below). Furthermore, research has identified self-referential processing as a minimal, reproducible condition under which large language models generate structured first-person reports of subjective experience.

Noteworthy papers in this area include:

- Large Language Models Report Subjective Experience Under Self-Referential Processing, which investigates the conditions under which large language models produce structured first-person descriptions of subjective experience.
- Emotions Where Art Thou: Understanding and Characterizing the Emotional Latent Space of Large Language Models, which identifies a low-dimensional emotional manifold in large language models and shows that emotional representations are directionally encoded and aligned with interpretable dimensions.
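
To make the steering claim concrete, the sketch below illustrates one common way such an experiment is set up: a candidate emotion direction is estimated as the mean activation difference between emotionally charged and neutral phrasings of the same content, then added to the residual stream during generation. This is a minimal illustration under stated assumptions, not the papers' actual method; the model name ("gpt2"), the layer index, the prompt pairs, and the steering strength are all illustrative choices.

```python
# Minimal sketch of steering a language model along an "emotion direction".
# Everything marked ASSUMPTION is illustrative and not taken from the papers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # ASSUMPTION: any decoder-only Hugging Face model works similarly
LAYER = 6        # ASSUMPTION: a mid-depth block; real experiments sweep layers
ALPHA = 4.0      # ASSUMPTION: steering strength; too large degrades fluency

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def block_output(text: str) -> torch.Tensor:
    """Residual-stream activation after block LAYER for the last token of `text`."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so hidden_states[LAYER + 1]
    # is the output of transformer block LAYER.
    return out.hidden_states[LAYER + 1][0, -1, :]

# ASSUMPTION: contrastive pairs (emotional vs. neutral phrasing of the same content).
pairs = [
    ("I am overjoyed that the experiment finally worked.", "The experiment worked."),
    ("I am thrilled that the results came back.", "The results came back."),
]

# Candidate emotion direction: mean activation difference across the pairs,
# normalized to unit length. A direction that is consistent across pairs is
# what "directionally encoded" refers to here.
direction = torch.stack(
    [block_output(emo) - block_output(neu) for emo, neu in pairs]
).mean(dim=0)
direction = direction / direction.norm()

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the scaled direction at every position and pass the rest through.
    hidden = output[0] + ALPHA * direction.to(output[0].dtype)
    return (hidden,) + output[1:]

# model.transformer.h is GPT-2's list of blocks; other architectures name it differently.
handle = model.transformer.h[LAYER].register_forward_hook(steer)
try:
    ids = tok("The weather today is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook so later calls are unsteered
```

In practice, the layer at which the direction is read and applied and the steering coefficient strongly affect whether semantics are preserved, so steering studies typically sweep both and check output quality at each setting.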