The field of language modeling is advancing rapidly, with growing attention to the principles and mechanisms that underlie the success of modern language models. Recent research highlights the value of information-theoretic and categorical frameworks for analyzing and improving these models; in particular, Markov categories and spectral contrastive learning have been used to give a deeper account of the representation space that language models learn. There is also increasing interest in quantifying and analyzing uncertainty in language models, with applications in natural language generation, question answering, and recommendation systems.

Noteworthy papers in this area include:

A Markov Categorical Framework for Language Modeling, which introduces a unifying analytical framework for deconstructing the autoregressive generation process and the negative log-likelihood objective.

Measuring and Analyzing Intelligence via Contextual Uncertainty in Large Language Models using Information-Theoretic Metrics, which probes the dynamics of language models by constructing a quantitative Cognitive Profile for any given model.

Shapley Uncertainty in Natural Language Generation, which develops a Shapley-based uncertainty metric that captures the continuous nature of semantic relationships.
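
To make the negative log-likelihood objective concrete, the sketch below computes the autoregressive NLL of a sequence, summing -log p(x_t | x_<t) over positions. The toy bigram model and all names here are illustrative assumptions for self-containment, not the framework from the Markov-categorical paper.

```python
# Minimal sketch: per-token negative log-likelihood (NLL) of a sequence under a
# toy autoregressive model. The bigram "model" is an illustrative assumption,
# not the framework from the cited paper.
import numpy as np

VOCAB = ["<bos>", "the", "cat", "sat", "."]
IDX = {w: i for i, w in enumerate(VOCAB)}

# Toy bigram transition matrix: row = previous token, column = next token.
BIGRAM = np.array([
    [0.00, 0.80, 0.10, 0.05, 0.05],   # after <bos>
    [0.00, 0.05, 0.60, 0.25, 0.10],   # after "the"
    [0.00, 0.10, 0.05, 0.70, 0.15],   # after "cat"
    [0.00, 0.30, 0.10, 0.10, 0.50],   # after "sat"
    [0.00, 0.25, 0.25, 0.25, 0.25],   # after "."
])

def next_token_probs(prefix):
    """Next-token distribution given the prefix (here only the last token matters)."""
    return BIGRAM[IDX[prefix[-1]]]

def sequence_nll(tokens):
    """Autoregressive NLL: sum of -log p(x_t | x_<t) over the sequence."""
    nll = 0.0
    for t in range(1, len(tokens)):
        p = next_token_probs(tokens[:t])
        nll += -np.log(p[IDX[tokens[t]]])
    return nll

print(sequence_nll(["<bos>", "the", "cat", "sat", "."]))  # lower = better fit
```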
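
A simple information-theoretic probe of contextual uncertainty is the Shannon entropy of the model's next-token distribution at each position. The sketch below is a generic illustration of that idea; it is not the Cognitive Profile construction from the cited paper, and the example distributions are stand-ins rather than real model outputs.

```python
# Minimal sketch: predictive (Shannon) entropy of next-token distributions as a
# simple contextual-uncertainty signal. Generic illustration only, not the
# "Cognitive Profile" from the cited paper.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector p."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def contextual_entropy_profile(next_token_dists):
    """Per-position entropy for a sequence of next-token distributions."""
    return [entropy(p) for p in next_token_dists]

# Illustrative stand-ins for a model's next-token distributions over a 5-token
# vocabulary: a confident prediction, a moderately uncertain one, and a
# near-uniform one.
dists = [
    np.array([0.90, 0.05, 0.03, 0.01, 0.01]),
    np.array([0.40, 0.30, 0.15, 0.10, 0.05]),
    np.array([0.22, 0.20, 0.20, 0.19, 0.19]),
]
print(contextual_entropy_profile(dists))  # entropy rises as uncertainty grows
```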
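
For the Shapley-based direction, the sketch below estimates Shapley values by Monte Carlo over a set of sampled generations, using the mean pairwise semantic similarity of a coalition as the value function. Both that value function and the similarity matrix are assumptions made for illustration; this is not the uncertainty metric defined in the cited paper.

```python
# Minimal sketch: Monte Carlo Shapley values over sampled generations, with
# coalition value = mean pairwise semantic similarity. The value function and
# similarity matrix are illustrative assumptions, not the cited paper's metric.
import itertools
import random
import numpy as np

def coalition_value(members, sim):
    """Value of a coalition: mean pairwise similarity (0 for fewer than two members)."""
    if len(members) < 2:
        return 0.0
    pairs = list(itertools.combinations(members, 2))
    return float(np.mean([sim[i][j] for i, j in pairs]))

def shapley_values(n, sim, num_permutations=2000, seed=0):
    """Monte Carlo Shapley values: average marginal contribution over random orderings."""
    rng = random.Random(seed)
    values = np.zeros(n)
    players = list(range(n))
    for _ in range(num_permutations):
        rng.shuffle(players)
        coalition, prev = [], 0.0
        for p in players:
            coalition.append(p)
            cur = coalition_value(coalition, sim)
            values[p] += cur - prev
            prev = cur
    return values / num_permutations

# Illustrative pairwise semantic-similarity matrix for four sampled answers:
# answers 0-2 are mutually consistent, answer 3 is an outlier.
sim = np.array([
    [1.0, 0.9, 0.8, 0.2],
    [0.9, 1.0, 0.85, 0.25],
    [0.8, 0.85, 1.0, 0.3],
    [0.2, 0.25, 0.3, 1.0],
])
print(shapley_values(4, sim))  # the outlier contributes least to semantic coherence
```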