Research in natural language processing is increasingly focused on understanding and interpreting large language models (LLMs). Recent work analyzes their internal workings, including how they represent linguistic information, encode factual knowledge, and form predictions. Studies show that LLMs can be decomposed into interpretable components, shedding light on their decision-making and on the role different components play in storing and retrieving information. New methods have also been proposed to make sentence embeddings more transparent and controllable, and self-supervised objectives have been explored for improving low-resource morphological inflection. Overall, the field is moving toward a deeper understanding of LLMs, enabling the development of more reliable and interpretable models. Noteworthy papers include:
- Large Language Models are Locally Linear Mappings, which demonstrates that, around a given input, LLMs can be mapped to equivalent linear systems, providing insight into their internal representations (a minimal illustrative sketch follows this list).
- Mechanistic Decomposition of Sentence Representations, which proposes a method for decomposing sentence embeddings into interpretable components, bridging token-level and sentence-level analysis (a sketch of one standard decomposition approach also appears below).
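
To make the locally linear view concrete, the following is a minimal sketch, not the paper's exact construction: it treats a small stand-in network as a linear map around a fixed input by computing its Jacobian there, and checks that small input perturbations propagate nearly linearly. The toy model, its dimensions, and the PyTorch-based setup are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): view a small network as a
# locally linear map around a fixed input by computing its Jacobian, then
# check that f(x + d) is well approximated by f(x) + J @ d for small d.
import torch

torch.manual_seed(0)

# Toy stand-in for an LLM block: embedding-sized input -> vocab-sized logits.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64),
    torch.nn.GELU(),
    torch.nn.Linear(64, 32),
)
model.eval()

x = torch.randn(16)          # a fixed "input embedding"
y = model(x)                 # reference output (logits)

# Jacobian of the network at x: the local linear map from input changes
# to output changes.
J = torch.autograd.functional.jacobian(model, x)   # shape: (32, 16)

d = 1e-3 * torch.randn(16)   # small perturbation
linear_pred = y + J @ d      # prediction from the local linear map
true_out = model(x + d)

print("max abs error of linear reconstruction:",
      (linear_pred - true_out).abs().max().item())
```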
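
For the decomposition direction, here is a minimal sketch of one standard way to express embeddings as sparse combinations of dictionary atoms, using scikit-learn's DictionaryLearning. This is an assumed illustration of the general technique, not the paper's specific mechanistic pipeline; the random embeddings stand in for real sentence-encoder outputs.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): decompose a
# matrix of sentence embeddings into sparse combinations of dictionary
# atoms, so each sentence is described by a few candidate components.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Stand-in for sentence embeddings, e.g. mean-pooled encoder outputs.
embeddings = rng.standard_normal((200, 64))

dictionary = DictionaryLearning(
    n_components=32,                  # number of candidate components
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,              # sparsity of per-sentence codes
    random_state=0,
)
codes = dictionary.fit_transform(embeddings)   # (200, 32) sparse codes
atoms = dictionary.components_                 # (32, 64) dictionary atoms

# Each embedding is approximated as codes @ atoms; the few nonzero entries
# in a row of `codes` are that sentence's active components.
recon_error = (np.linalg.norm(embeddings - codes @ atoms)
               / np.linalg.norm(embeddings))
print("active components per sentence (mean):",
      (np.abs(codes) > 1e-8).sum(axis=1).mean())
print("relative reconstruction error:", recon_error)
```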