Current work in natural language processing centers on neural semantic parsing and the developmental analysis of language models. Researchers are probing the capabilities and limitations of neural semantic parsers on complex linguistic phenomena such as ellipsis resolution. In parallel, the training of language models is being studied through an embryological lens, tracing how internal computational structures and novel mechanisms emerge over the course of training. The neurocognitive basis of syntax is also under investigation, with findings suggesting that distinct mechanisms support different types of syntactic constructions. Noteworthy papers in this area include:
- Embryology of a Language Model introduces an approach for visualizing how the internal structure of a language model develops over training.
- Evaluation of LLMs in AMR Parsing shows that finetuned large language models perform competitively on Abstract Meaning Representation (AMR) parsing (see the sketch after this list).
- Pruning Large Language Models by Identifying and Preserving Functional Networks prunes models efficiently by locating functional networks within the model and keeping those subnetworks intact.
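To make the AMR parsing task concrete, the sketch below shows the typical sequence-to-sequence setup for producing an AMR graph from a sentence with a finetuned model. This is a minimal illustration of the task, not the systems evaluated in the paper: the checkpoint name is hypothetical, and the expected output shown in the comments is the standard AMR for the example sentence.

```python
# Minimal sketch of sentence-to-AMR parsing with a finetuned
# seq2seq model, using the Hugging Face transformers API.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical checkpoint name; substitute a real finetuned AMR parser.
model_name = "example-org/amr-parser"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "The boy wants to go."
inputs = tokenizer(sentence, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
amr_graph = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(amr_graph)
# A correct parse is the standard AMR for this sentence:
# (w / want-01
#    :ARG0 (b / boy)
#    :ARG1 (g / go-02
#           :ARG0 b))
```

The output notation is PENMAN format, the standard serialization of AMR graphs: each node carries a variable (w, b, g), a concept (want-01), and role-labeled edges (:ARG0); note how the reentrant variable b encodes that the boy is both the wanter and the goer.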