The field of AI-powered education is moving toward more interactive and adaptive learning environments. Researchers are exploring the potential of Large Language Models (LLMs) to support teachers and students across a range of educational settings. One key direction is the development of pedagogical paradigms that leverage LLMs to facilitate active learning, improve student engagement, and deepen mastery of complex subjects. Another area of focus is the evaluation of LLMs' instructional guidance capabilities, with an emphasis on their ability to adapt to learners' cognitive states and provide effective scaffolding. Noteworthy papers in this area include:
- Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education, which presents an approach in which students take on the role of instructors teaching LLMs.
- Discerning minds or generic tutors, which introduces a benchmark for evaluating the instructional guidance capabilities of Socratic LLMs.
- CoDAE: Adapting Large Language Models for Education via Chain-of-Thought Data Augmentation, which proposes a framework for adapting LLMs to educational use by augmenting training data with Chain-of-Thought reasoning.