---
license: apache-2.0
language:
- en
tags:
- benchmark
- business-process-automation
- workflow
- bpmn
- dmn
- nl-to-workflow
- python-ir
- api-grounding
- incremental-editing
- enterprise-automation
- structured-generation
- tool-use
task_categories:
- text-generation
language_creators:
- expert-generated
annotations_creators:
- expert-generated
size_categories:
- n<1K
---
# Flow-Bench
A benchmark for translating natural language into enterprise business process workflows (BPMN/DMN).
Quick links: Paper | Approach | Videos | How to Cite | Contributors
## Problem Statement: The NL‑to‑BPMN Gap in Enterprise Automation
Enterprises rely on business process automation (BPA) to standardize work and reduce operational risk, but even low‑code tools still demand engineers for custom integrations and data transformations, and authoring in Business Process Model and Notation (BPMN) or Decision Model and Notation (DMN) remains verbose and brittle. Large language models promise natural‑language authoring, yet they struggle to produce correct, standards‑compliant workflows due to BPMN verbosity, scarce BPA‑specific training data, and the lack of a widely accepted benchmark for NL‑to‑workflow quality. In practice, robust automations must ground steps in real API catalogs to prevent non‑existent actions, represent human “user tasks” when no API exists, and support incremental, multi‑turn edits (add/delete/replace, conditionals and loops)—capabilities that current resources and evaluations underserve.
## Why FLOW‑BENCH?
- Benchmarking the right thing: Evaluates NL→workflow generation with API grounding, user tasks, and incremental edits.
- Standards‑aware: Targets BPMN/DMN outputs used in real enterprise tooling.
- Research & practical: Useful for model comparison and for practitioners validating NL‑authored automations.
## Dataset
The dataset supports our approach of using LLMs to translate natural language into an intermediate representation in Python syntax, which is then converted into widely adopted business process definition languages.
The approach and the methodology used to create and validate the dataset are described in the arXiv paper.
The dataset consists of 101 incremental build test cases designed to support and evaluate approaches and research in natural-language-driven business process automation.
To ensure compact and clear representations of prior context and expected workflows, FLOW-BENCH adopts a constrained subset of Python syntax. This subset includes assignment statements, conditional statements (if-statements), loops (for and while), and function calls.
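For illustration, the sketch below exercises exactly that subset (assignment, a conditional, a loop, and function calls). The stub function bodies are hypothetical stand-ins; in the benchmark these names refer to grounded catalog APIs:

```python
# Hypothetical workflow in FLOW-BENCH's constrained Python subset:
# assignments, if-statements, for/while loops, and function calls only.

def GitHub_Repository__3_0_0__retrievewithwhere_Repository():
    # Stub: in the benchmark this name is grounded in the API catalog.
    return ["repo-a", "repo-b"]

def GitHub_Issue__3_0_0__create_Issue():
    # Stub for the issue-creation API.
    return "new-issue"

# The workflow itself: retrieve repositories, then create an issue in each.
repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()
for repo in repositories:
    if repo:  # conditional step
        new_issue = GitHub_Issue__3_0_0__create_Issue()
```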
The `conditional_ootb.yaml` file contains the 101 tests. An example test is shown below:
```yaml
- _metadata:
    tags:
    - "97"
    - conditional_update
    - conditional_update_replace
    uid: 97
  input:
    utterance: |-
      Instead of retrieving all the issues just create a new issue in each repo
    prior_sequence:
    - |-
      repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()
      for repo in repositories:
          new_issue = GitHub_Issue__3_0_0__retrievewithwhere_Issue()
    prior_context: []
    bpmn:
      $ref: "context/uid_97_context.bpmn"
  expected_output:
    sequence:
    - |-
      repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()
      for repo in repositories:
          updated_issue = GitHub_Issue__3_0_0__create_Issue()
    bpmn:
      $ref: "output/uid_97_output.bpmn"
```
The example contains metadata with tags that indicate whether the test is conditional or linear, and whether it is an update, a delete, or (implicitly) a creation.
The `prior_sequence` contains the Python-syntax representation of the previously created BPMN. `bpmn` points to the corresponding BPMN representation, available in the `context` folder.
The `expected_output` contains the ground-truth Python-syntax representation as well as a reference to the BPMN representation, which can be found in the `output` folder.
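As a minimal sketch (not part of the benchmark tooling), a loaded test case can be compared against a model's output with a whitespace-insensitive exact match. The dict below mirrors the YAML example above, as a YAML loader such as `yaml.safe_load` would produce it:

```python
# One test case from conditional_ootb.yaml, represented as the Python
# structure a YAML loader would produce.
case = {
    "_metadata": {"uid": 97, "tags": ["97", "conditional_update", "conditional_update_replace"]},
    "input": {
        "utterance": "Instead of retrieving all the issues just create a new issue in each repo",
        "prior_sequence": [
            "repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()\n"
            "for repo in repositories:\n"
            "    new_issue = GitHub_Issue__3_0_0__retrievewithwhere_Issue()"
        ],
        "prior_context": [],
        "bpmn": {"$ref": "context/uid_97_context.bpmn"},
    },
    "expected_output": {
        "sequence": [
            "repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()\n"
            "for repo in repositories:\n"
            "    updated_issue = GitHub_Issue__3_0_0__create_Issue()"
        ],
        "bpmn": {"$ref": "output/uid_97_output.bpmn"},
    },
}

def normalize(seq):
    """Strip whitespace line by line so formatting differences don't matter."""
    return [line.strip() for block in seq for line in block.splitlines()]

# A model's generated sequence would go here; we reuse the ground truth
# purely to demonstrate the comparison.
generated = case["expected_output"]["sequence"]
exact_match = normalize(generated) == normalize(case["expected_output"]["sequence"])
```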
The `ootb_catalog.json` file contains the unique identifier (`id`) and the description of each API. An example is shown below:
```json
{
  "id": "bambooHR_benefits__2_0_0__retrievewithwhere_benefits",
  "description": "Retrieve all the benefit deduction types"
}
```
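One intended use of the catalog is API grounding: checking that every API-style call in a generated workflow actually exists. A minimal sketch, assuming a small inline catalog (the first entry is from the card; the second follows the same naming pattern, and its description is hypothetical):

```python
import json
import re

# Two illustrative catalog entries in the ootb_catalog.json format.
catalog_json = """
[
  {"id": "bambooHR_benefits__2_0_0__retrievewithwhere_benefits",
   "description": "Retrieve all the benefit deduction types"},
  {"id": "GitHub_Repository__3_0_0__retrievewithwhere_Repository",
   "description": "Retrieve repositories matching a filter"}
]
"""
catalog_ids = {entry["id"] for entry in json.loads(catalog_json)}

def ungrounded_calls(sequence):
    """Return call names in a workflow that are not grounded in the catalog."""
    calls = re.findall(r"([A-Za-z_]\w*)\s*\(", sequence)
    return [c for c in calls if c not in catalog_ids]

workflow = "repositories = GitHub_Repository__3_0_0__retrievewithwhere_Repository()"
print(ungrounded_calls(workflow))  # []
```

A call such as `Fake_API__1_0_0__do()` would be flagged, since its name does not appear in the catalog.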
## Approach
For details on the approach to generate flows and the evaluation results on the test suite, refer to Sections 3 and 4 of the arXiv paper, respectively.
## Videos
Here are some videos showcasing our approach for multiple use cases.
https://github.com/user-attachments/assets/f0c21b29-b6cd-43c4-ae5b-44fd89faf945
https://github.com/user-attachments/assets/f74b33d4-246f-407d-a5d6-5b22efb434df
https://github.com/user-attachments/assets/e2c6b1b1-3c08-481f-a4e1-0876870d555c
## How to Cite
```bibtex
@inproceedings{duesterwald2025flow,
  title={FLOW-BENCH: Towards Conversational Generation of Enterprise Workflows},
  author={Duesterwald, Evelyn and Huo, Siyu and Isahagian, Vatche and Jayaram, KR and Kumar, Ritesh and Muthusamy, Vinod and Oum, Punleuk and Saha, Debashish and Thomas, Gegi and Venkateswaran, Praveen},
  booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track},
  pages={1426--1436},
  year={2025}
}
```
## Contributors
In alphabetical order:
- Evelyn Duesterwald
- Siyu Huo
- Vatche Isahagian
- K.R. Jayaram
- Ritesh Kumar
- Vinod Muthusamy
- Punleuk Oum
- Debashish Saha
- Gegi Thomas
- Praveen Venkateswaran
## IBM Public Repository Disclosure
All content in this repository including code has been provided by IBM under the associated open source software license and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality nor security, and will not be maintaining this code going forward.