The field of model generation and conformance checking is moving towards leveraging large language models (LLMs) to improve accuracy and automation. Recent research has focused on frameworks and tools that use LLMs to generate high-quality models and to evaluate their correctness, with several approaches targeting recurring problems such as syntax violations, constraint inconsistencies, and semantic inaccuracies in generated models. There is also growing interest in benchmarks and evaluation metrics for assessing LLM performance on model generation tasks, and tool-assisted conformance checking is gaining traction as a way to verify generated process models against reference models more efficiently and accurately. Noteworthy papers include MCeT, which proposes a fully automated tool for evaluating the correctness of behavioral models, and SysMBench, which introduces a benchmark for evaluating the ability of LLMs to generate system models from natural language requirements. Together, these advances point towards more efficient and effective model generation and conformance checking.
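As a rough illustration of what automated conformance checking involves, the sketch below compares a generated model against a reference model, both represented as simple labelled transition systems, and reports missing transitions, extra transitions, and undeclared states. The data structures and function names are hypothetical and are not taken from MCeT, SysMBench, or any specific tool; real checkers typically operate on richer formalisms and behavioural equivalences.

```python
# Minimal, illustrative conformance check between a generated model and a
# reference model, each given as a labelled transition system. All names here
# are hypothetical examples, not the API of any published tool.

from dataclasses import dataclass


@dataclass(frozen=True)
class Transition:
    source: str   # source state
    label: str    # event / action label
    target: str   # target state


@dataclass
class Model:
    states: set[str]
    transitions: set[Transition]


def check_conformance(generated: Model, reference: Model) -> dict:
    """Report where a generated model deviates from a reference model."""
    missing = reference.transitions - generated.transitions   # behaviour the generator omitted
    extra = generated.transitions - reference.transitions     # behaviour the generator invented
    used_states = {t.source for t in generated.transitions} | \
                  {t.target for t in generated.transitions}
    undeclared = used_states - generated.states               # simple syntactic consistency check
    return {
        "missing_transitions": missing,
        "extra_transitions": extra,
        "undeclared_states": undeclared,
        "conforms": not missing and not extra and not undeclared,
    }


if __name__ == "__main__":
    reference = Model(
        states={"Idle", "Running"},
        transitions={Transition("Idle", "start", "Running"),
                     Transition("Running", "stop", "Idle")},
    )
    generated = Model(
        states={"Idle", "Running"},
        transitions={Transition("Idle", "start", "Running")},  # generated model omits "stop"
    )
    print(check_conformance(generated, reference))
```

In this toy setting, conformance reduces to set differences over transitions; the LLM-oriented tools surveyed above additionally handle constraint checking and natural-language traceability, which this sketch deliberately leaves out.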