Advances in Model Generation and Conformance Checking

Research on model generation and conformance checking is increasingly built around large language models (LLMs). Recent work develops frameworks and tools that use LLMs to generate models from natural-language requirements and to evaluate the correctness of the results, with particular attention to syntax violations, constraint inconsistencies, and semantic inaccuracies in generated models. A second thread develops benchmarks and evaluation metrics for measuring how well LLMs perform on model generation tasks. Tool-assisted conformance checking is also gaining traction, making verification of process models against reference models more efficient and more accurate. Noteworthy papers include MCeT, a fully automated tool for evaluating the correctness of behavioral models, and SysMBench, a benchmark for assessing how well LLMs generate system models from natural-language requirements.
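To make the generate-and-validate pattern described above concrete, the sketch below shows one minimal way such a loop could look in Python. It is purely illustrative: `call_llm` is a stand-in for any text-completion API, and the line-based "node"/"edge" model syntax is a toy format, not one taken from the papers cited here.

```python
"""Minimal sketch of an LLM generate-and-validate loop for model generation.

Hypothetical illustration only: `call_llm` stands in for any completion
API, and the "node"/"edge" grammar is a toy model format.
"""

from typing import Callable, List, Tuple


def validate_model(text: str) -> List[str]:
    """Return a list of error messages; an empty list means the model passes."""
    errors: List[str] = []
    nodes: set = set()
    edges: List[Tuple[str, str]] = []
    for lineno, line in enumerate(text.strip().splitlines(), start=1):
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "node" and len(parts) == 2:
            nodes.add(parts[1])
        elif parts[0] == "edge" and len(parts) == 3:
            edges.append((parts[1], parts[2]))
        else:  # syntax violation: line does not match the toy grammar
            errors.append(f"line {lineno}: expected 'node N' or 'edge A B'")
    # Constraint check: every edge must reference declared nodes.
    for src, dst in edges:
        for endpoint in (src, dst):
            if endpoint not in nodes:
                errors.append(f"edge references undeclared node '{endpoint}'")
    return errors


def generate_model(requirement: str,
                   call_llm: Callable[[str], str],
                   max_attempts: int = 3) -> str:
    """Prompt the LLM, validate the output, and feed errors back on failure."""
    prompt = (
        "Translate the requirement into a model using only lines of the form\n"
        "'node <name>' or 'edge <src> <dst>'.\n\nRequirement: " + requirement
    )
    errors: List[str] = []
    for _ in range(max_attempts):
        candidate = call_llm(prompt)
        errors = validate_model(candidate)
        if not errors:
            return candidate
        # Append the validator's feedback so the next attempt can self-repair.
        prompt += "\n\nYour previous answer had errors:\n" + "\n".join(errors)
    raise ValueError("no valid model after retries; last errors: " + "; ".join(errors))


if __name__ == "__main__":
    # Stub LLM: first answers with a dangling edge, then fixes it on retry.
    answers = iter(["node A\nedge A B", "node A\nnode B\nedge A B"])
    print(generate_model("A connects to B", lambda _prompt: next(answers)))
```

The key design point the sketch tries to capture is that validation errors are fed back into the prompt, turning one-shot generation into an iterative repair loop; the cited tools apply the same idea to far richer model languages and constraint systems.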

Sources

Accurate and Consistent Graph Model Generation from Text with Large Language Models

MCeT: Behavioral Model Correctness Evaluation using Large Language Models

Tool-Assisted Conformance Checking to Reference Process Models

A System Model Generation Benchmark from Natural Language Requirements

Data Dependency Inference for Industrial Code Generation Based on UML Sequence Diagrams

Vanilla-Converter: A Tool for Converting Camunda 7 BPMN Models into Camunda 8 Models
