The field of large language models (LLMs) is advancing rapidly, with a strong focus on improving reasoning capabilities. Recent research has explored a range of approaches to enhance LLM reasoning, including geometric frameworks, inductive reasoning, and entropy-guided methods. These approaches aim to address challenges such as ciphered reasoning, chain-of-thought monitoring, and robustness to prompt perturbations. Notable papers in this area include:

- ENIGMA, which introduces a novel approach to LLM training that jointly improves reasoning, alignment, and robustness.
- Schema for In-Context Learning, which extracts representations of the building blocks of cognition and assembles them into an abstracted schema that augments a model's reasoning process.
- ERGO, which introduces an entropy-guided resetting method for generation optimization in multi-turn language models, improving performance and reliability in conversational AI (see the first sketch after this list).
- Flip-Flop Consistency, which proposes an unsupervised training method that improves LLM robustness to prompt perturbations (see the consistency-objective sketch below).
- Code-driven Number Sequence Calculation, which enhances the inductive reasoning abilities of LLMs using a synthetic post-training dataset built from number sequences (see the data-generation sketch below).
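
To make the entropy-guided theme concrete, here is a minimal sketch of how a reset trigger could work in a multi-turn setting: compute the Shannon entropy of each next-token distribution in the latest model turn and reset or compress the context when the mean entropy crosses a threshold. The function names, threshold value, and averaging rule are illustrative assumptions, not ERGO's actual procedure.

```python
import math
from typing import Sequence

def token_entropy(probs: Sequence[float]) -> float:
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_reset(per_token_probs: Sequence[Sequence[float]],
                 threshold: float = 2.5) -> bool:
    """Hypothetical trigger: reset (or compress) the conversation context when
    the mean per-token entropy of the latest model turn exceeds a threshold,
    i.e. when generation has become noticeably uncertain."""
    if not per_token_probs:
        return False
    mean_entropy = sum(token_entropy(p) for p in per_token_probs) / len(per_token_probs)
    return mean_entropy > threshold

# Toy usage: a confident turn vs. a high-entropy (uncertain) turn.
confident_turn = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]]
uncertain_turn = [[0.25, 0.25, 0.25, 0.25], [0.2, 0.2, 0.2, 0.2, 0.2]]
print(should_reset(confident_turn, threshold=1.0))   # False
print(should_reset(uncertain_turn, threshold=1.0))   # True
```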
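
For the robustness-to-perturbation theme, a generic unsupervised consistency objective gives a sense of how such training can work: penalize divergence between the distributions the model assigns to two surface variants of the same prompt, with no labels involved. The symmetric-KL formulation below is an assumption chosen for illustration and is not claimed to be Flip-Flop Consistency's actual loss.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between the predictive distributions a model
    assigns to two perturbed/paraphrased versions of the same prompt.
    Minimizing it pushes the model toward answering the same way regardless
    of surface wording; a generic consistency objective, not the paper's."""
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    p, q = log_p.exp(), log_q.exp()
    kl_pq = F.kl_div(log_q, p, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, q, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# Toy check with random "logits" for two prompt variants.
torch.manual_seed(0)
a = torch.randn(4, 10)               # batch of 4 prompts, vocabulary of 10
b = a + 0.1 * torch.randn(4, 10)     # slightly perturbed variant
print(consistency_loss(a, b))        # small value; identical inputs give 0
```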
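
Finally, a sketch of how a synthetic number-sequence corpus might be assembled for inductive-reasoning post-training: sample a simple generating rule, render it as code, and pair the first few terms with the rule and the next term as targets. The rule families and prompt format here are hypothetical, chosen only to illustrate the construction rather than reproduce the paper's dataset.

```python
import random

def make_example(rng: random.Random) -> dict:
    """Build one synthetic example: a sequence prefix, the Python rule that
    generates it, and the next term as the target answer. The rule families
    (arithmetic / geometric / quadratic) are illustrative assumptions."""
    kind = rng.choice(["arithmetic", "geometric", "quadratic"])
    a, d = rng.randint(1, 9), rng.randint(2, 5)
    if kind == "arithmetic":
        rule = f"lambda n: {a} + {d} * n"
    elif kind == "geometric":
        rule = f"lambda n: {a} * {d} ** n"
    else:
        rule = f"lambda n: n * n + {a}"
    f = eval(rule)                      # safe here: the string was built above
    prefix = [f(n) for n in range(5)]   # first five terms shown to the model
    return {
        "prompt": f"Sequence: {prefix}. Write code for the rule, then give the next term.",
        "code": rule,
        "answer": f(5),
    }

rng = random.Random(0)
for _ in range(3):
    print(make_example(rng))
```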