The field of natural language processing is shifting towards large language models (LLMs) that can reason over annotated corpora and produce interpretable analyses. Recent studies have demonstrated the potential of LLMs to streamline grammatical analysis, automate corpus-based inquiry, and shed light on the sequential nature of computations in biological and artificial neural networks. Integrating LLMs with structured linguistic data has shown promising results, offering a first step towards scalable automation of grammatical inquiry. Further research has examined whether different syntactic phenomena recruit shared or distinct components in LLMs, finding that syntactic agreement constitutes a meaningful functional category within their representational spaces. Noteworthy papers in this area include:
- A study that introduced an agentic framework for corpus-grounded grammatical analysis, demonstrating the feasibility of combining LLM reasoning with structured linguistic data (a minimal sketch of this pattern follows the list).
- A study that explored the sequential nature of computations in LLMs and the human brain, finding that the two generate representations in a similar order.
- A study that presented a knowledge-based language model, demonstrating that a child agent can acquire discrete grammatical categories in a multi-agent language acquisition simulation.
- A study that investigated whether different syntactic phenomena recruit shared or distinct components in LLMs, revealing that syntactic agreement constitutes a meaningful category within their representational spaces.
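
To make the corpus-grounded pattern concrete, below is a minimal sketch of the retrieve-then-reason loop such an agentic framework might use: dependency-annotated sentences are queried for a syntactic configuration (here, subject-verb agreement), and the extracted evidence is packed into a prompt for an LLM. The toy corpus, the `query_llm` placeholder, and the agreement query are all illustrative assumptions, not the pipeline of any cited paper.

```python
"""Sketch of an agentic loop for corpus-grounded grammatical analysis.

Everything here is illustrative: the toy corpus, the query_llm placeholder,
and the agreement query are assumptions, not a cited paper's pipeline.
"""

# Toy annotated corpus; field names mirror Universal Dependencies conventions.
CORPUS = [
    {
        "text": "The dogs bark loudly.",
        "tokens": [
            {"form": "The", "head": 2, "deprel": "det", "feats": {}},
            {"form": "dogs", "head": 3, "deprel": "nsubj", "feats": {"Number": "Plur"}},
            {"form": "bark", "head": 0, "deprel": "root", "feats": {"Number": "Plur"}},
            {"form": "loudly", "head": 3, "deprel": "advmod", "feats": {}},
        ],
    },
    {
        "text": "The dog barks.",
        "tokens": [
            {"form": "The", "head": 2, "deprel": "det", "feats": {}},
            {"form": "dog", "head": 3, "deprel": "nsubj", "feats": {"Number": "Sing"}},
            {"form": "barks", "head": 0, "deprel": "root", "feats": {"Number": "Sing"}},
        ],
    },
]


def retrieve_agreement_evidence(corpus):
    """Collect (subject, verb) pairs for every nsubj dependency in the corpus."""
    evidence = []
    for sentence in corpus:
        tokens = sentence["tokens"]
        for token in tokens:
            if token["deprel"] == "nsubj":
                verb = tokens[token["head"] - 1]  # heads are 1-indexed; root is 0
                evidence.append(
                    f"'{token['form']}' ({token['feats'].get('Number', '?')}) "
                    f"-> '{verb['form']}' ({verb['feats'].get('Number', '?')}) "
                    f"in: {sentence['text']}"
                )
    return evidence


def query_llm(prompt: str) -> str:
    """Placeholder: wire this to any chat-completion API of your choice."""
    raise NotImplementedError


if __name__ == "__main__":
    evidence = retrieve_agreement_evidence(CORPUS)
    prompt = (
        "These subject-verb pairs were extracted from an annotated corpus.\n"
        "State the agreement generalization they support, citing each example:\n"
        + "\n".join(f"- {line}" for line in evidence)
    )
    print(prompt)  # in a full agent, the answer comes from query_llm(prompt)
```

The design point is the division of labor: the structured corpus supplies verifiable evidence, while the LLM is asked only to generalize over it, which keeps the resulting grammatical analysis grounded in citable examples.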