Recent work on large language models is converging on two themes: detecting hallucination and out-of-distribution errors, and composing task vectors more reliably. New methods include real-time detection based on the spectral geometry of hidden activations and Bayesian frameworks for task vector composition, both reporting gains in accuracy and efficiency. Probability signatures that bridge data semantics and embedding structure have also been explored, offering new insight into how embedding organization reflects semantic patterns. Noteworthy papers: EigenTrack proposes a real-time detector for hallucination and out-of-distribution errors using the spectral geometry of hidden activations; Variational Task Vector Composition introduces a Bayesian inference framework for task vector composition that promotes sparsity and preserves informative components; and Global Minimizers of Sigmoid Contrastive Loss gives a theoretical account of the advantages of synchronization with trainable inverse temperature and bias under the sigmoid loss.
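
To make the spectral-geometry idea concrete, the sketch below tracks the entropy of the eigenvalue spectrum of a layer's hidden-state covariance as generation proceeds. This is a minimal, generic version of such a signal under assumed shapes and names; it is not EigenTrack's actual algorithm, and `spectral_entropy` and the usage shown are illustrative.

```python
import numpy as np

def spectral_entropy(hidden_states: np.ndarray) -> float:
    """Entropy of the normalized eigenvalue spectrum of the token-wise
    covariance of hidden states (shape: [num_tokens, hidden_dim])."""
    centered = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(hidden_states) - 1, 1)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eigvals / (eigvals.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical usage: an anomalous entropy trajectory across generation
# steps could serve as a hallucination / out-of-distribution flag.
# score = spectral_entropy(layer_hidden_states)
```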
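
For task vector composition, a crude sketch of sparsity-promoting composition follows: per-element Bernoulli gates stand in for the spike-and-slab style masks that a variational treatment would learn as posteriors. The function, the gating scheme, and `keep_probs` are assumptions for illustration, not the paper's method.

```python
import numpy as np

def compose_task_vectors(task_vectors, keep_probs, rng=None):
    """Compose task vectors (flat parameter deltas) under per-element
    Bernoulli masks; `keep_probs` plays the role of learned inclusion
    probabilities in a variational (spike-and-slab style) treatment."""
    rng = rng or np.random.default_rng(0)
    combined = np.zeros_like(task_vectors[0])
    for tv, p in zip(task_vectors, keep_probs):
        mask = rng.random(tv.shape) < p  # keep only informative components
        combined += mask * tv
    return combined
```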
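
The sigmoid contrastive loss in question is the SigLIP-style pairwise loss over matched and mismatched embedding pairs. A minimal sketch with the trainable inverse temperature t and bias b (passed here as plain scalars) is shown below; the normalization over pairs is one common convention, not necessarily the paper's.

```python
import numpy as np

def sigmoid_contrastive_loss(img_emb, txt_emb, t, b):
    """Pairwise sigmoid loss with inverse temperature t and bias b;
    embeddings are assumed L2-normalized, matched pairs on the diagonal."""
    logits = t * img_emb @ txt_emb.T + b           # [N, N] pair logits
    labels = 2.0 * np.eye(len(img_emb)) - 1.0      # +1 matched, -1 mismatched
    # -log sigmoid(z * logit) computed stably as log(1 + exp(-z * logit)),
    # averaged over all N^2 pairs
    return np.mean(np.logaddexp(0.0, -labels * logits))
```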