Research on large language models (LLMs) is moving toward greater transparency and security. Recent work has focused on understanding structural properties of LLMs, such as injectivity and invertibility, which carry significant implications for interpretability and safe deployment. There is also growing interest in methods that analyze and improve LLM security, including hallucination detection and the measurement of information leakage. Together, these advances stand to strengthen the reliability and trustworthiness of LLMs across applications.

Noteworthy papers in this area include Language Models are Injective and Hence Invertible, which introduces an algorithm that reconstructs the exact input text from hidden activations; Bits Leaked per Query, which provides an information-theoretic framework for quantifying how much information an LLM leaks per query; A Graph Signal Processing Framework for Hallucination Detection in Large Language Models, which presents a spectral analysis framework that detects hallucinations with 88.75% accuracy; and Training-Free Spectral Fingerprints of Voice Processing in Transformers, which uncovers clear architectural signatures in transformer models that correlate strongly with behavioral differences.
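The injectivity result implies that, given white-box access to the model, hidden activations determine the input text uniquely. The sketch below is a deliberately naive illustration of that idea rather than the paper's algorithm: it reconstructs a causal model's input greedily, one position at a time, by testing which vocabulary token reproduces the target hidden state. The model name, layer choice, and brute-force search over the vocabulary are all assumptions made for illustration.

```python
# Minimal sketch: invert hidden activations of a causal LM back to tokens.
# Illustrative only; assumes white-box access to the same model that
# produced the activations. Cost is O(T * |V|) forward passes.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # assumed stand-in for any causal transformer
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

@torch.no_grad()
def hidden_states(ids, layer=-1):
    out = model(input_ids=ids, output_hidden_states=True)
    return out.hidden_states[layer]  # shape (1, seq_len, d_model)

@torch.no_grad()
def invert(target, layer=-1, vocab_limit=None):
    """Greedily recover tokens from target activations of shape (1, T, d)."""
    vocab = range(vocab_limit or tok.vocab_size)
    recovered = []
    for t in range(target.shape[1]):
        best_id, best_err = None, float("inf")
        for cand in vocab:
            ids = torch.tensor([recovered + [cand]])
            h = hidden_states(ids, layer)[0, t]
            err = torch.norm(h - target[0, t]).item()
            if err < best_err:
                best_id, best_err = cand, err
        recovered.append(best_id)
    return tok.decode(recovered)

# Usage: capture activations of a short prompt, then reconstruct it.
prompt_ids = tok("hello world", return_tensors="pt").input_ids
acts = hidden_states(prompt_ids)
# invert(acts) would return "hello world" (slow: full-vocabulary search).
```

Because hidden states in a causal model depend only on the preceding tokens, a position-by-position search suffices; the paper's contribution is showing this inversion is well defined and can be done far more efficiently than the brute-force search sketched here.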
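For the leakage framework, the central quantity is how much a single model answer reduces an adversary's uncertainty about a secret, measured in bits. The toy sketch below is a hedged illustration of that quantity, not the paper's estimator: the uniform prior over candidate secrets and the deterministic parity oracle are hypothetical choices used only to make the arithmetic concrete.

```python
# Toy illustration of "bits leaked per query": the drop in Shannon entropy
# of an attacker's posterior over a secret after observing one answer.
import math

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def posterior(prior, answer, oracle):
    """Bayes update: keep only secrets consistent with the observed answer."""
    unnorm = {s: p for s, p in prior.items() if oracle(s) == answer}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Hypothetical setup: the secret is a 4-digit PIN; the answer reveals parity.
secrets = [f"{i:04d}" for i in range(10000)]
prior = {s: 1 / len(secrets) for s in secrets}
oracle = lambda s: int(s) % 2              # what the answer discloses
post = posterior(prior, answer=0, oracle=oracle)
print(entropy(prior) - entropy(post))      # ~1.0 bit leaked by this query
```

Summing this entropy reduction over a sequence of queries gives a cumulative leakage budget, which is the kind of quantity an information-theoretic audit of an LLM deployment would track.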
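The graph-signal-processing framing can be pictured as treating an attention matrix as a weighted graph over tokens and summarizing it spectrally, with those summaries feeding a hallucination classifier. The sketch below is an assumed illustration of that view, not the paper's pipeline: the symmetrization step, the specific Laplacian features, and the downstream classifier are all placeholders.

```python
# Hedged sketch: spectral features of an attention graph for use as inputs
# to a hallucination detector. Feature choices are illustrative assumptions.
import numpy as np

def attention_spectral_features(attn: np.ndarray) -> dict:
    """attn: (T, T) attention weights from one head/layer."""
    w = (attn + attn.T) / 2                  # symmetrize into an undirected graph
    d = np.diag(w.sum(axis=1))
    lap = d - w                              # combinatorial graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(lap))   # ascending eigenvalues
    return {
        "fiedler_value": float(eig[1]),      # algebraic connectivity
        "lambda_max": float(eig[-1]),
        "spectral_energy": float(np.sum(eig ** 2)),
    }

# Usage: stack features across layers/heads and train any classifier
# (e.g., logistic regression) on labeled hallucinated vs. faithful outputs.
rng = np.random.default_rng(0)
toy_attn = rng.dirichlet(np.ones(16), size=16)   # stand-in attention matrix
print(attention_spectral_features(toy_attn))
```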