The field of natural language processing is making significant progress in understanding the inner workings of large language models (LLMs). Researchers are actively probing the mechanisms that drive these models, including memorization, syntactic structure derivation, and prediction. A key direction is investigating how LLMs encode and use syntactic information, with studies revealing that syntactic structures are derived bottom-up across layers. The discovery of specific circuits responsible for tasks such as verb conjugation and the prevention of memorization is also shedding light on the decision-making processes inside these models. Noteworthy contributions include methods for isolating and interpreting the sub-networks responsible for specific behaviors, such as subject-verb agreement, as well as compositional syntactic language models. Other work introduces new toy problems for studying in-context recall and efficient algorithms for exactly inverting language model outputs, enabling post-incident analysis and the potential detection of fabricated output reports.
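To make the in-context recall setting concrete, here is a minimal, hypothetical sketch of what such a toy problem can look like: a sequence ends with a token that appeared earlier, and the target is the token that followed that earlier occurrence — the matching rule an induction-style circuit is thought to implement. The function names and construction below are illustrative assumptions, not taken from any specific paper discussed here.

```python
import random

def make_recall_example(vocab_size=20, length=10, seed=0):
    """Build a toy in-context recall instance: a sequence of distinct
    tokens, with one earlier token repeated at the end as the query.
    The target is the token that followed the query's first occurrence."""
    rng = random.Random(seed)
    seq = rng.sample(range(vocab_size), length)  # distinct tokens
    i = rng.randrange(length - 1)                # pick a query position
    query, answer = seq[i], seq[i + 1]
    seq.append(query)                            # repeat query at the end
    return seq, answer

def induction_baseline(seq):
    """Predict the token that followed the most recent earlier occurrence
    of the final token -- the rule an induction-style circuit captures."""
    query = seq[-1]
    for j in range(len(seq) - 2, -1, -1):
        if seq[j] == query:
            return seq[j + 1]
    return None  # query never seen before: no prediction possible
```

A trained model that solves this task perfectly would match `induction_baseline` on every instance, which makes the circuit responsible for the behavior easy to localize and test in isolation.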