Using Large Language Models for Process Analysis: Identifying Deviations by Comparing Process Models - A Test Report


Abstract

Large Language Models (LLMs) are emerging as a promising tool in Business Process Management for analyzing and validating process models. In this study, LLMs were evaluated by comparing correct reference models with systematically modified variants that contained either typical modeling mistakes or harmless variations such as layout or wording changes. The results show that LLMs can reliably distinguish structural or semantic errors from acceptable variations, demonstrating strong potential for automated model validation. Their performance is shaped by factors such as model complexity, the type and encoding of deviations, and the structure of the comparison setting. Reliability improves when models are provided in clear formats and guided by precise instructions, while extended or continuous interactions across multiple comparisons reduce consistency. At the same time, language differences have little impact, and even multiple models can be processed effectively when context is provided in a controlled way. Taken together, these findings suggest that LLMs can provide meaningful support in process model validation, offering efficiency and abstraction beyond manual comparison, while still requiring careful setup and interaction design to ensure dependable outcomes.
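To make the comparison setting concrete, the Python sketch below shows one plausible way to frame such a pairwise model comparison as an LLM prompt. The textual process notation, the example models, and the function name build_comparison_prompt are illustrative assumptions, not the authors' actual pipeline; the sketch only reflects the abstract's finding that clear formats and precise instructions improve reliability.

```python
# Illustrative sketch (assumed setup, not the study's actual pipeline):
# frame a pairwise process-model comparison as a single LLM prompt.

REFERENCE_MODEL = """\
StartEvent -> Task: Check order -> Gateway: Order valid?
Gateway(yes) -> Task: Ship goods -> EndEvent
Gateway(no)  -> Task: Reject order -> EndEvent
"""

# Variant with one wording change (harmless) and one missing task (an error).
VARIANT_MODEL = """\
StartEvent -> Task: Verify order -> Gateway: Order valid?
Gateway(yes) -> Task: Ship goods -> EndEvent
Gateway(no)  -> EndEvent
"""

def build_comparison_prompt(reference: str, variant: str) -> str:
    """Compose an explicit instruction, since the study reports that
    clear model formats and precise guidance improve reliability."""
    return (
        "You are comparing two business process models.\n"
        "Model A is the correct reference; Model B is a variant.\n\n"
        f"Model A:\n{reference}\n"
        f"Model B:\n{variant}\n"
        "Classify every difference as either a structural/semantic ERROR "
        "(e.g., a missing task or changed control flow) or a harmless "
        "VARIATION (e.g., relabeling or layout changes), and briefly "
        "justify each classification."
    )

if __name__ == "__main__":
    # In the evaluated setting this prompt would be sent to an LLM;
    # here we only print it to show the comparison framing.
    print(build_comparison_prompt(REFERENCE_MODEL, VARIANT_MODEL))
```

Keeping each comparison in its own self-contained prompt, rather than carrying many comparisons through one long conversation, also matches the abstract's observation that extended interactions reduce consistency.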
