pith. machine review for the scientific record.

arxiv: 2604.27311 · v1 · submitted 2026-04-30 · 💻 cs.SE · cs.AI

Recognition: unknown

Pragmos: A Process Agentic Modeling System

Luciano García-Bañuelos, Pedro-Aarón Hernández-Ávalos

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 10:14 UTC · model grok-4.3

classification 💻 cs.SE cs.AI
keywords business process modeling · large language models · hybrid AI systems · explainable workflows · agentic modeling · process management · software engineering · conversational AI

The pith

A hybrid system of LLMs and specialized tools generates sound and comprehensible process models via transparent incremental steps.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that business process modeling from textual descriptions is too complex for black-box LLM solutions. It proposes instead an interactive, step-by-step approach where modeling decisions are broken down, rationales are documented, and simple behavioral relations are uncovered to guide the process. Specialized tools then complement the LLMs by structuring the models based on these relations, resulting in sound and explainable outputs. Pragmos is introduced as a prototype that puts this hybrid human-LLM-tool collaboration into practice for co-creating evolving process models. This matters for software engineering because it offers a way to leverage AI while maintaining user control and model quality.

Core claim

The authors claim that two ingredients make it possible to generate sound yet comprehensible process models that evolve through transparent and explainable steps: decomposing the modeling task into smaller, manageable steps that produce intermediate artifacts and explicitly document the rationale for each decision, and incrementally uncovering simple behavioral relations that, with the help of specialized tools, guide model construction. The Pragmos prototype demonstrates both.

What carries the argument

The Pragmos prototype system, which decomposes process modeling into an open-ended conversational workflow using LLMs for collaboration and specialized tools for handling behavioral relations to ensure model soundness.

Load-bearing premise

Current limitations of LLMs with complex dependencies can be overcome by complementing them with specialized tools that structure models from incrementally uncovered behavioral relations.
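To make the premise concrete: a minimal sketch, assuming toy execution paths and the classical relation definitions from workflow mining [3] rather than Pragmos' actual tooling, of how causal, concurrency, and conflict relations can be read off a directly-follows relation:

```python
# Toy execution paths, as an LLM might enumerate them from a textual
# description. Activity names are illustrative, not from the paper.
paths = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],   # b and c appear in both orders -> concurrency
    ["a", "e", "d"],        # e never co-occurs with b or c -> conflict
]

# Directly-follows relation: (x, y) iff y immediately follows x in some path.
df = {(x, y) for path in paths for x, y in zip(path, path[1:])}

acts = {a for path in paths for a in path}
causal, concurrent, conflict = set(), set(), set()
for x in acts:
    for y in acts:
        if x == y:
            continue
        if (x, y) in df and (y, x) not in df:
            causal.add((x, y))       # x directly precedes y, never the reverse
        elif (x, y) in df and (y, x) in df:
            concurrent.add((x, y))   # observed directly adjacent in both orders
        elif (x, y) not in df and (y, x) not in df:
            conflict.add((x, y))     # never directly adjacent in either order

print(sorted(concurrent))  # -> [('b', 'c'), ('c', 'b')]
```

These are only the basic adjacency-derived relations; the paper's specialized tools work on the resulting ordering relations graph rather than this raw classification.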

What would settle it

Observing whether models produced by Pragmos on realistic process descriptions contain undetected soundness issues or lack clear rationales for key constructs when reviewed by domain experts.
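Part of such a review could be automated. The sketch below checks one necessary condition for soundness, namely that every node lies on some path from the start event to the end event; the dictionary encoding of the flow graph and all node names are illustrative assumptions, not Pragmos' representation:

```python
from collections import deque

def reachable(graph, src):
    """Nodes reachable from src by BFS over an adjacency dict."""
    seen, todo = {src}, deque([src])
    while todo:
        for nxt in graph.get(todo.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

def dead_nodes(graph, start, end):
    """Nodes lying on no start->end path (a necessary-condition check only)."""
    fwd = reachable(graph, start)
    rev = {}
    for u, vs in graph.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    bwd = reachable(rev, end)  # nodes from which end is reachable
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    return {n for n in nodes if n not in fwd or n not in bwd}

# Toy flow graph: activity "c" is a dead end that never reaches the end event.
flow = {"start": ["a"], "a": ["b", "c"], "b": ["end"], "c": []}
print(dead_nodes(flow, "start", "end"))  # -> {'c'}
```

Full BPMN soundness also requires proper gateway synchronization, which this reachability test does not cover.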

Figures

Figures reproduced from arXiv: 2604.27311 by Luciano García-Bañuelos, Pedro-Aarón Hernández-Ávalos.

Figure 1
Figure 1: [17] and [18] present a method for synthesizing a BPMN model from an ordering relation graph. The method proceeds by incrementally decomposing the ORG into subgraphs, which can be classified in one of the following types: i) a trivial, corresponding with a single node, ii) a complete whenever the subgraph corresponds to a complete (sub)graph (viz., a clique) or, symmetrically, a fully disconnected… view at source ↗
Figure 2
Figure 2: Pragmos' overall approach. view at source ↗
Figure 3
Figure 3: Discovery of execution paths. The list of execution paths generated by the LLM serves as the source for uncovering the causal, conflict, and concurrency relations. To explain the underlying procedure, we build the so-called directly-follows graph (DFG) [2] used in process mining. A DFG is a directed graph where each node represents an activity and each edge captures the fact that the activity at the source of… view at source ↗
Figure 4
Figure 4: Directly-follows graph. view at source ↗
Figure 5
Figure 5: Ordering relations graph. view at source ↗
Figure 7
Figure 7: Resulting BPMN process model. view at source ↗
Figure 8
Figure 8: Discovery of concurrency relation. Activities: a "Receive Order", b "Reject Order", c "Accept Order", d "Inform Storehouse", e "Inform Engineering", f "Process Part List", g "Reserve Parts", h "Backorder Parts", i "Complete Preparation", j "Assemble Bicycle", k "Ship Bicycle". view at source ↗
Figure 9
Figure 9: BPMN process model produced in step 1 from process description in Fig. 8. view at source ↗
Figure 10
Figure 10: Fragments of modular decomposition trees and ordering relations graph of the running example. view at source ↗
Figure 11
Figure 11: BPMN model produced in step 2. …concurrency in the ordering relations graph is straightforward, and generating the BPMN out of the ORG is properly handled by Algorithm ??. Discovering Loops. The third step in the procedure corresponds with discovering fragments of the process incurring in repetitive behavior. As stated before, the presence of loops may induce some difficulties in the discovery of the proces… view at source ↗
Figure 12
Figure 12: Discovery of loops. It is worth recalling that an ordering relations graph and its modular decomposition can only be associated with an acyclic process model. To overcome this limitation, here we annotate modules in a modular decomposition tree to indicate that the underlying activities are enclosed within a repeat-like loop. Let… view at source ↗
Figure 12
Figure 12: We can see that the process description contains a single loop, involving… view at source ↗
Figure 13
Figure 13: MDT annotated with a loop. view at source ↗
Figure 14
Figure 14: Final BPMN process model. With this recently added information, we can rewrite the BPMN model including the loop as shown in… view at source ↗
Figure 15
Figure 15: Process description: Computer repair. …hardware" and "Repair hardware", and two other activities to deal with software-related problems, i.e., "Check software" and "Configure software". There is another activity, i.e., "Test system functionality", which is performed after any hardware or software interventions. As a consequence, the LLM adds "Test system functionality" twice in an execution path, which will… view at source ↗
Figure 16
Figure 16: Process description: Online exam. Log into university website, Complete online exam, Grade exam, Register grade. view at source ↗
Figure 17
Figure 17: BPMN model, manually derived from description in Fig. 16. view at source ↗
Figure 18
Figure 18: BPMN model generated by Pragmos. …the LLM uses to avoid loops. However, this decision challenges Pragmos' capabilities so far. We use this example to make evident two "control flow topologies" which could emerge in other contexts, posing challenges to Pragmos, and present some extensions to the procedure so far to cope with such challenges. Execution path 1: Log into university website, Complete online exa… view at source ↗
Figure 19
Figure 19: Directly-follows graph. view at source ↗
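Several captions above (Figures 12 to 15) concern loop discovery triggered by an activity appearing twice in an execution path. A minimal sketch of that trigger, assuming illustrative activity names and a simple first-repeat heuristic that is not the paper's actual algorithm:

```python
def loop_evidence(path):
    """Return the segment between the first repeated activity's two
    occurrences: crude evidence of a repeat-like loop body, or None."""
    seen = {}
    for i, act in enumerate(path):
        if act in seen:
            return path[seen[act]:i]  # candidate loop body
        seen[act] = i
    return None

# A path where the repair/test block repeats once (illustrative names,
# echoing the computer-repair example in Fig. 15).
path = ["Receive device", "Check hardware", "Repair hardware",
        "Test system functionality", "Check hardware", "Repair hardware",
        "Test system functionality", "Return device"]
print(loop_evidence(path))
# -> ['Check hardware', 'Repair hardware', 'Test system functionality']
```

The paper instead annotates modules of the modular decomposition tree as looping, which also handles repeats that span concurrent branches; this sketch only surfaces the raw repetition signal.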
read the original abstract

The advent of Large Language Models (LLMs) has significantly transformed tasks across Software Engineering. In the context of Business Process Management, LLMs are now being explored as tools to derive process models directly from textual descriptions. Existing approaches range from chatbot-driven systems that assist with iterative, text-based modeling to fully automated end-to-end modeling assistants. However, we argue that process modeling is inherently complex and cannot be effectively addressed through black-box solutions. Instead, we envision modeling as an open-ended conversational activity, best supported by an interactive, iterative process involving both humans and LLM. In our approach, the modeling task is decomposed into smaller, manageable steps. Each step results in intermediate artifacts and explicitly documents the rationale behind each modeling decision. During this process, we incrementally uncover simple behavioral relations that guide the construction of the model. Given the current limitations of LLMs in reasoning about complex dependencies, we complement them with specialized tools developed in the field to structure process models based on behavioral relations. This hybrid approach enables the generation of sound, yet comprehensible models that evolve through transparent and explainable steps. In this paper, we present our research agenda and introduce Pragmos, a prototype system that operationalizes this vision. Pragmos demonstrates how LLMs can collaborate with human users as both domain and modeling experts to co-create evolving process models through a structured and explainable workflow.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper presents a research agenda and prototype (Pragmos) for an interactive, hybrid LLM-plus-specialized-tool workflow in business process modeling. It decomposes modeling into incremental steps that produce intermediate artifacts, document rationales, and uncover behavioral relations, arguing that this yields sound yet comprehensible and explainable process models superior to black-box LLM approaches.

Significance. If the proposed workflow can be shown to produce verifiable models, it would address a genuine gap in LLM-assisted BPM by emphasizing transparency and incremental structure; the emphasis on complementing LLM limitations with established behavioral-relation tools is a constructive direction.

major comments (2)
  1. [Abstract] Abstract: the statement that the hybrid approach 'enables the generation of sound, yet comprehensible models' is presented as an achieved outcome rather than a hypothesis; no evaluation protocol, soundness criteria, or even illustrative case study is supplied to support this central claim.
  2. [Pragmos prototype section] Prototype description: the manuscript introduces Pragmos as operationalizing the vision but provides no concrete specification of the specialized tools, the exact interface for incremental behavioral-relation extraction, or how soundness is checked at each step, rendering the prototype non-reproducible from the text alone.
minor comments (1)
  1. [Introduction / Related Work] The related-work discussion could more explicitly contrast the proposed incremental approach with existing chatbot-driven and end-to-end LLM modeling systems cited in the introduction.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our research agenda and prototype paper. We address each major comment below and outline planned revisions to improve clarity and completeness.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the statement that the hybrid approach 'enables the generation of sound, yet comprehensible models' is presented as an achieved outcome rather than a hypothesis; no evaluation protocol, soundness criteria, or even illustrative case study is supplied to support this central claim.

    Authors: We agree that the abstract phrasing presents the benefit as an achieved result. The manuscript is a research agenda paper that introduces the vision and an initial prototype rather than reporting completed empirical validation. We will revise the abstract to frame the hybrid approach as enabling sound and comprehensible models as a proposed outcome and hypothesis to be tested in future work, while clarifying that the current contribution focuses on the structured workflow and prototype design without supplying a full evaluation protocol or case study. revision: yes

  2. Referee: [Pragmos prototype section] Prototype description: the manuscript introduces Pragmos as operationalizing the vision but provides no concrete specification of the specialized tools, the exact interface for incremental behavioral-relation extraction, or how soundness is checked at each step, rendering the prototype non-reproducible from the text alone.

    Authors: We acknowledge that the prototype section provides only a high-level description. As this is an early-stage prototype for a research agenda, the manuscript does not include exhaustive implementation details. In revision we will expand the section with additional concrete information on the behavioral-relation tools drawn from the process mining literature, the incremental extraction interface, and the step-wise soundness mechanisms based on the documented behavioral relations. We will also add a link to a public repository containing the current prototype code to support reproducibility where textual description alone is insufficient. revision: yes

Circularity Check

0 steps flagged

No significant circularity; conceptual proposal without derivations

full rationale

The paper is framed explicitly as a research agenda and prototype description rather than a completed study with quantitative derivations or predictions. No equations, fitted parameters, or formal derivation chains exist that could reduce to self-definitions, renamed inputs, or self-citation load-bearing premises. The central vision—that a hybrid LLM-plus-tool workflow yields sound and explainable models—is presented as an intended outcome of the proposed system, not as a result already demonstrated or derived within the manuscript. General observations about LLM limitations and BPM practices are invoked without reliance on unverified prior results by the same authors, rendering the argument self-contained and non-circular.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that process modeling cannot be solved by black-box LLMs alone and that incremental behavioral relations plus external tools suffice to produce sound models; Pragmos itself is the main invented entity.

axioms (1)
  • domain assumption Process modeling is inherently complex and cannot be effectively addressed through black-box solutions.
    Explicitly stated as the motivation for the hybrid interactive approach.
invented entities (1)
  • Pragmos no independent evidence
    purpose: Prototype system that operationalizes the interactive, step-wise, tool-augmented modeling workflow.
    New named system presented as the concrete realization of the research agenda.

pith-pipeline@v0.9.0 · 5550 in / 1274 out tokens · 68975 ms · 2026-05-07T10:14:13.472589+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

22 extracted references · 3 canonical work pages · 3 internal anchors

  1. [1] OpenAI: Introducing ChatGPT (November 2022), https://openai.com/index/chatgpt/, last accessed Feb 2026

  2. [2] van der Aalst, W.M.P.: Process discovery from event data: Relating models and logs through abstractions. WIREs Data Mining Knowl. Discov. 8(3) (2018)

  3. [3] van der Aalst, W.M.P., Weijters, T., Maruster, L.: Workflow Mining: Discovering Process Models from Event Logs. IEEE Trans. Knowl. Data Eng. 16(9), 1128–1142 (2004)

  4. [4] Bellan, P., van der Aa, H., Dragoni, M., Ghidini, C., Ponzetto, S.P.: PET: an annotated dataset for process extraction from natural language text tasks. In: Proc. of BPM 2022 Workshops. LNBIP, vol. 460, pp. 315–321. Springer (2022)

  5. [5] Bock, J.D., Claes, J.: The Origin and Evolution of Syntax Errors in Simple Sequence Flow Models in BPMN. In: Matulevicius, R., Dijkman, R.M. (eds.) Proc. of CAiSE Workshops 2018. LNBIP, vol. 316, pp. 155–166. Springer (2018)

  6. [6] Brown, P.F., Pietra, V.J.D., de Souza, P.V., Lai, J.C., Mercer, R.L.: Class-Based n-gram Models of Natural Language. Comput. Linguistics 18(4), 467–479 (1992)

  7. [7] Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford…: Language Models are Few-Shot Learners

  8. [8] Chen, Q., Qin, L., Liu, J., Peng, D., Guan, J., Wang, P., Hu, M., Zhou, Y., Gao, T., Che, W.: Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models. CoRR abs/2503.09567 (2025), https://doi.org/10.48550/arXiv.2503.09567

  9. [9] Dumas, M., Rosa, M.L., Mendling, J., Reijers, H.A.: Fundamentals of Business Process Management, Second Edition. Springer (2018)

  10. [10] Friedrich, F., Mendling, J., Puhlmann, F.: Process Model Generation from Natural Language Text. In: Proc. of CAiSE 2011. LNCS, vol. 6741, pp. 482–496. Springer (2011)

  11. [11] Pinggera, J., Soffer, P., Fahland, D., Weidlich, M., Zugal, S., Weber, B., Reijers, H.A., Mendling, J.: Styles in business process modeling: an exploration and a model. Softw. Syst. Model. 14(3), 1055–1080 (2015)

  12. [12] Klievtsova, N., Benzin, J., Kampik, T., Mangler, J., Rinderle-Ma, S.: Conversational Process Modelling: State of the Art, Applications, and Implications in Practice. In: Proc. of Business Process Management Forum - BPM 2023. LNBIP, vol. 490, pp. 319–336. Springer (2023)

  13. [13] Leopold, H.: Natural Language in Business Process Models - Theoretical Foundations, Techniques, and Applications, LNBIP, vol. 168. Springer (2013)

  14. [14] McConnell, R.M., de Montgolfier, F.: Linear-time modular decomposition of directed graphs. Discret. Appl. Math. 145(2), 198–209 (2005)

  15. [15] OpenAI: Introducing ChatGPT (2022), https://openai.com/index/chatgpt/, accessed: 2023-10-31

  16. [16] Oulsnam, G.: Unravelling Unstructured Programs. Comput. J. 25(3), 379–387 (1982)

  17. [17] Polyvyanyy, A., García-Bañuelos, L., Dumas, M.: Structuring Acyclic Process Models. In: Proc. of BPM. LNCS, vol. 6336, pp. 276–293. Springer (2010)

  18. [18] Polyvyanyy, A., García-Bañuelos, L., Dumas, M.: Structuring acyclic process models. Inf. Syst. 37(6), 518–538 (2012)

  19. [19] Polyvyanyy, A., García-Bañuelos, L., Fahland, D., Weske, M.: Maximal Structuring of Acyclic Process Models. Comput. J. 57(1), 12–35 (2014)

  20. [20] Rosenfeld, R.: Two decades of statistical language modeling: where do we go from here? Proc. IEEE 88(8), 1270–1278 (2000)

  21. [21] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention Is All You Need. CoRR abs/1706.03762 (2017), http://arxiv.org/abs/1706.03762

  22. [22] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E.H., Le, Q.V., Zhou, D.: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In: Proc. of NeurIPS 2022 (2022)