The field of large language models (LLMs) and process mining is evolving rapidly, with a focus on improving instruction tuning and semantics-aware process mining. Recent research has explored seed-free instruction tuning, which eliminates the need for costly human-annotated seed data or a powerful external teacher model; the approach promises fully automated instruction tuning, fewer inherited biases, and more efficient use of unlabeled corpora. There is also growing interest in applying LLMs to process mining tasks such as anomaly detection, next-activity prediction, and process discovery. Instruction tuning for semantics-aware process mining helps on some of these tasks but not on others, underscoring the importance of task selection. Furthermore, resource-centric next-activity prediction, which forecasts the next activity a resource will perform rather than the next activity of a case, has emerged as a promising approach, with benefits for work organization, workload balancing, and capacity forecasting.

Noteworthy papers in this area include:

- CYCLE-INSTRUCT, which proposes a framework for fully seed-free instruction tuning via dual self-training and cycle consistency (a sketch of the idea follows below).
- LLMs that Understand Processes, which investigates the potential of instruction tuning for semantics-aware process mining (see the prompt-formatting sketch below).
- Working My Way Back to You, which evaluates the effectiveness of a resource-centric approach to next-activity prediction (see the baseline sketch below).
- Reflective Agreement, which proposes a hybrid approach combining a self-mixture of agents with a sequence tagger for robust event extraction (see the agreement sketch below).
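To make the dual self-training idea concrete, here is a minimal sketch of one round of cycle-consistency filtering over an unlabeled corpus. It is written under our own assumptions: the `text_to_instruction` and `instruction_to_text` callables, the token-overlap similarity, and the threshold are hypothetical stand-ins, not the CYCLE-INSTRUCT implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

Generator = Callable[[str], str]  # any text-to-text model wrapped as a callable

@dataclass
class PseudoPair:
    instruction: str
    response: str
    score: float  # cycle-consistency score in [0, 1]

def jaccard(a: str, b: str) -> float:
    """Toy similarity: token overlap. A real system would use a stronger metric."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cycle_filter(
    raw_texts: List[str],
    text_to_instruction: Generator,  # "backward" model: document -> instruction
    instruction_to_text: Generator,  # "forward" model: instruction -> response
    threshold: float = 0.8,
) -> List[PseudoPair]:
    """One round of seed-free pseudo-pair mining via cycle consistency.

    Each unlabeled document is treated as a candidate response: the backward
    model invents an instruction for it, the forward model answers that
    instruction, and the pair survives only if the answer reconstructs the
    original document closely enough.
    """
    kept = []
    for doc in raw_texts:
        instruction = text_to_instruction(doc)
        reconstruction = instruction_to_text(instruction)
        score = jaccard(doc, reconstruction)
        if score >= threshold:
            kept.append(PseudoPair(instruction, doc, score))
    return kept
```

In a full loop, the surviving pairs would be used to fine-tune both models and the mining round would repeat with the improved models, which is what makes the procedure self-training rather than one-shot filtering.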
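As an illustration of how event-log data might be turned into instruction-tuning examples for a semantics-aware task such as anomaly detection, here is a minimal sketch; the prompt template, field names, and labels are our own assumptions rather than the format used in the cited work.

```python
import json
from typing import List

def trace_to_instruction_example(activities: List[str], is_anomalous: bool) -> dict:
    """Serialize one trace into an (instruction, input, output) record.

    The template below is illustrative only; the cited papers may use a
    different prompt format and label set.
    """
    return {
        "instruction": (
            "You are given a trace from a business process event log. "
            "Decide whether the order of activities is semantically anomalous."
        ),
        "input": " -> ".join(activities),
        "output": "anomalous" if is_anomalous else "normal",
    }

if __name__ == "__main__":
    # 'Reject' before 'Review' violates the expected process semantics.
    example = trace_to_instruction_example(
        ["Submit Application", "Reject Application", "Review Application"],
        is_anomalous=True,
    )
    print(json.dumps(example, indent=2))
```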
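For resource-centric next-activity prediction, a simple frequency baseline already conveys the shift in perspective: events are ordered by what each resource did over time rather than grouped by case. The toy log and column names below are assumptions for illustration, not a standard schema.

```python
from collections import Counter, defaultdict
import pandas as pd

# Toy event log; column names are illustrative assumptions.
log = pd.DataFrame({
    "case":      ["c1", "c1", "c2", "c2", "c1", "c2"],
    "activity":  ["Register", "Check", "Register", "Check", "Approve", "Approve"],
    "resource":  ["alice", "bob", "alice", "alice", "bob", "bob"],
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 10:00", "2024-01-01 09:30",
        "2024-01-01 11:00", "2024-01-01 12:00", "2024-01-01 13:00",
    ]),
})

# Resource-centric view: one activity sequence per resource, ordered in time,
# instead of the usual one sequence per case.
transitions = defaultdict(lambda: defaultdict(Counter))
for resource, events in log.sort_values("timestamp").groupby("resource"):
    acts = events["activity"].tolist()
    for current, nxt in zip(acts, acts[1:]):
        transitions[resource][current][nxt] += 1

def predict_next(resource: str, last_activity: str):
    """Most frequent activity this resource performed after `last_activity`."""
    counter = transitions[resource].get(last_activity)
    return counter.most_common(1)[0][0] if counter else None

print(predict_next("bob", "Check"))  # -> 'Approve'
```

Per-resource transition counts of this kind directly support the workload-balancing and capacity-forecasting uses mentioned above, since they estimate what each resource is likely to do next.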
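Finally, for the agreement mechanism in Reflective Agreement, the sketch below shows one plausible way to reconcile several LLM-agent extractions with a sequence tagger's output; the voting rule and the (span, type) representation are our assumptions, not the paper's exact scheme.

```python
from collections import Counter
from typing import List, Set, Tuple

# An extracted event as (trigger span, event type); a simplification.
Extraction = Tuple[str, str]

def agree(
    agent_outputs: List[Set[Extraction]],  # one set per LLM-agent sample
    tagger_output: Set[Extraction],        # spans from the sequence tagger
    min_votes: int = 2,
) -> Set[Extraction]:
    """Keep extractions the agents agree on, or that the tagger confirms.

    Illustrative rule (an assumption): an extraction survives if at least
    `min_votes` agents propose it, or if any agent proposes it and the
    tagger independently found it too.
    """
    votes = Counter(e for output in agent_outputs for e in output)
    result = {e for e, n in votes.items() if n >= min_votes}
    result |= {e for e in votes if e in tagger_output}
    return result

if __name__ == "__main__":
    agents = [
        {("earthquake struck", "Disaster"), ("rescued", "Rescue")},
        {("earthquake struck", "Disaster")},
        {("evacuated", "Movement")},
    ]
    tagger = {("evacuated", "Movement")}
    # Keeps the majority-voted and tagger-confirmed extractions;
    # the singleton ("rescued", "Rescue") is dropped.
    print(agree(agents, tagger))
```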