Modalities: Text · Formats: parquet
jishnunair committed on
Commit fd04ab7 · 1 Parent(s): 4994a97

Added Readme.md

Files changed (3):
  1. README.md +112 -0
  2. assets/csmgym.png +3 -0
  3. assets/teaser.png +3 -0
README.md CHANGED
@@ -279,3 +279,115 @@ configs:
   - split: teams
     path: plus_5_tools/teams-*
 ---
<div align="center">

<h1><img src="assets/csmgym.png" alt="Logo" width="48" style="vertical-align:middle; margin-right:8px;" /> EnterpriseOps-Gym: Environments and Evaluations for Stateful Agentic Planning and Tool Use in Enterprise Settings</h1>

<p>
<a href="#"><img src="https://img.shields.io/badge/Website-blue?logo=google-chrome&logoColor=white" /></a>
<a href="#"><img src="https://img.shields.io/badge/Paper-red?logo=arxiv&logoColor=white" /></a>
<a href="https://github.com/ServiceNow/EnterpriseOps-Gym"><img src="https://img.shields.io/badge/GitHub-black?logo=github" /></a>
</p>

<p><i>EnterpriseOps-Gym is a containerized, resettable enterprise simulation benchmark for evaluating LLM agents on stateful, multi-step planning and tool use across realistic enterprise workflows.</i></p>

</div>

![EnterpriseOps-Gym Overview](assets/teaser.png)

## About

**EnterpriseOps-Gym** is a large-scale benchmark for evaluating the agentic planning and tool-use capabilities of LLM agents across enterprise operations. It comprises **1,115 expert-curated tasks** spanning **8 enterprise domains**, each running against live containerized MCP servers backed by realistic, fully synthetic databases.

Unlike static QA benchmarks, EnterpriseOps-Gym evaluates agents on **final environment state** using SQL verifiers: agents are rewarded for achieving the correct outcome, not for following a rigid action sequence. Tasks require long-horizon, multi-step reasoning, strict policy compliance, and precise tool invocation under complex data dependencies.
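
To make the outcome-based idea concrete, here is a minimal sketch using an in-memory SQLite database. The `tickets` table, the update, and the verifier spec are all invented for illustration; the benchmark's actual schemas and verifier format will differ.

```python
import sqlite3

# Hypothetical illustration (not the benchmark's actual schema): an
# outcome-based verifier is a SQL query over the final database state,
# checked against an expected result, regardless of which action
# sequence produced that state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, status TEXT, assignee TEXT)")
conn.execute("INSERT INTO tickets VALUES (1, 'open', NULL)")

# The "agent" resolves the ticket via some tool-call trajectory.
conn.execute("UPDATE tickets SET status = 'resolved', assignee = 'alice' WHERE id = 1")

# The verifier inspects the outcome, not the steps taken.
verifier = {
    "sql": "SELECT status, assignee FROM tickets WHERE id = 1",
    "expected": ["resolved", "alice"],
}
row = conn.execute(verifier["sql"]).fetchone()
passed = list(row) == verifier["expected"]
print(passed)  # True
```

Any trajectory that leaves the ticket resolved and assigned passes; a verbatim-correct action sequence that leaves the wrong final state does not.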

> **Best model performance: 34.1% success rate** - leaving significant headroom for future research.

## Key Features

- 🛠️ **512 tools** across 8 enterprise domains
- 🗄️ **164 database tables** with an average of 1.7 foreign-key dependencies per table
- 🔢 **9.15 avg steps** per task (up to 34), with **5.3 avg verification conditions**
- 📏 **89k avg context length** per task
- 🔒 Tasks enforce **access control, policy compliance, and referential integrity**
- ✅ Evaluation is **outcome-based** via executable SQL verifiers, not action-sequence matching
- 🐳 Fully **containerized** sandbox: reproducible and isolated per task run

## Evaluation Framework

The evaluation code is available at [ServiceNow/EnterpriseOps-Gym](https://github.com/ServiceNow/EnterpriseOps-Gym).

The framework supports:
- **Multiple orchestrators**: ReAct, Planner-ReAct, Decomposing Planner
- **Multiple LLM providers**: Anthropic, OpenAI, Azure OpenAI, Google Gemini, DeepSeek, vLLM, and more
- **Parallel execution** via [Ray](https://www.ray.io/) for large-scale runs
- **Automatic scoring** with per-task and per-mode breakdowns
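
The per-task and per-mode breakdowns can be sketched as a simple aggregation. The result records and field names below are invented for illustration, not the framework's actual output format.

```python
from collections import defaultdict

# Hypothetical per-task pass/fail records; the real framework's result
# schema may differ.
results = [
    {"task_id": "teams_001", "mode": "oracle", "passed": True},
    {"task_id": "teams_001", "mode": "plus_5_tools", "passed": False},
    {"task_id": "hr_042", "mode": "oracle", "passed": True},
    {"task_id": "hr_042", "mode": "plus_5_tools", "passed": True},
]

by_mode = defaultdict(lambda: [0, 0])  # mode -> [passed, total]
for r in results:
    by_mode[r["mode"]][1] += 1
    by_mode[r["mode"]][0] += int(r["passed"])

# Success rate per mode: fraction of tasks whose verifiers all passed.
breakdown = {mode: p / t for mode, (p, t) in by_mode.items()}
print(breakdown)  # {'oracle': 1.0, 'plus_5_tools': 0.5}
```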

```python
from datasets import load_dataset

ds = load_dataset("ServiceNow-AI/EnterpriseOps-Gym", "oracle", split="teams")
```

## Domain Information

The dataset is organized by **domain** (split) and **mode** (configuration subset).

### Domains

| Domain | Tasks | Avg Steps | Max Steps | Tools |
|--------|------:|----------:|----------:|------:|
| Calendar | 100 | 7.05 | 17 | 37 |
| CSM | 186 | 12.10 | 27 | 89 |
| Drive | 105 | 8.68 | 29 | 55 |
| Email | 104 | 6.25 | 22 | 79 |
| HR | 184 | 10.54 | 34 | 89 |
| ITSM | 181 | 9.00 | 31 | 93 |
| Teams | 100 | 9.41 | 18 | 70 |
| Hybrid | 155 | 7.79 | 19 | Multi-domain |
| **Total** | **1,115** | **9.15** | **34** | **512** |
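
As a quick sanity check, the per-domain figures in the table above are internally consistent: the task counts sum to the stated total, and the task-weighted mean of the per-domain step averages matches the overall average up to per-domain rounding.

```python
# Per-domain (task count, avg steps) pairs copied from the table above.
tasks = {
    "Calendar": (100, 7.05), "CSM": (186, 12.10), "Drive": (105, 8.68),
    "Email": (104, 6.25), "HR": (184, 10.54), "ITSM": (181, 9.00),
    "Teams": (100, 9.41), "Hybrid": (155, 7.79),
}

total = sum(n for n, _ in tasks.values())
weighted_avg = sum(n * avg for n, avg in tasks.values()) / total
print(total)                   # 1115
print(round(weighted_avg, 2))  # 9.18, close to the stated 9.15 (per-domain averages are rounded)
```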

### Modes (Tool-Set Configurations)

Each mode controls the set of tools exposed to the agent, simulating realistic tool-retrieval scenarios:

| Mode | Description |
|------|-------------|
| `oracle` | Only the exact tools needed for the task |
| `plus_5_tools` | Oracle tools + 5 randomly sampled distractor tools |
| `plus_10_tools` | Oracle tools + 10 randomly sampled distractor tools |
| `plus_15_tools` | Oracle tools + 15 randomly sampled distractor tools |
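
A distractor-augmented mode can be pictured as follows. This is an illustrative sketch only: the tool names and `build_mode` helper are invented, and the dataset ships the actual tool sets precomputed in its `selected_tools` field.

```python
import random

oracle_tools = ["create_channel", "add_member", "post_message"]  # invented names
tool_pool = [f"tool_{i}" for i in range(512)]  # stand-in for the full 512-tool pool

def build_mode(oracle, pool, n_distractors, seed=0):
    """Oracle tools plus n randomly sampled distractors, shuffled together."""
    rng = random.Random(seed)
    candidates = [t for t in pool if t not in oracle]
    tools = oracle + rng.sample(candidates, n_distractors)
    rng.shuffle(tools)  # don't leak which tools are the oracle set
    return tools

plus_5 = build_mode(oracle_tools, tool_pool, 5)
print(len(plus_5))  # 8
```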

## Field Descriptions

Each row in the dataset corresponds to one task instance and contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `task_id` | `string` | Unique identifier for the task |
| `domain` | `string` | Domain name (e.g., `teams`, `csm`, `hr`) |
| `system_prompt` | `string` | Agent role definition and domain-specific policies |
| `user_prompt` | `string` | Natural language task instruction |
| `verifiers` | `string` (JSON) | Array of SQL-based outcome verification scripts that check final environment state |
| `gym_servers_config` | `string` (JSON) | MCP server configuration(s) specifying which containerized gym server(s) to connect to |
| `selected_tools` | `list[string]` | Names of tools available to the agent in this mode |
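
Since `verifiers` and `gym_servers_config` are JSON-encoded strings, they need a `json.loads` before use. The mock row below only reuses the documented field names; its values are invented and real rows will differ.

```python
import json

# Mock row with the documented field names; values are placeholders.
row = {
    "task_id": "teams_001",
    "domain": "teams",
    "system_prompt": "You are a Teams operations agent.",
    "user_prompt": "Create a channel and add the new hires.",
    "verifiers": json.dumps([{"sql": "SELECT 1"}]),
    "gym_servers_config": json.dumps({"server": "teams-gym"}),
    "selected_tools": ["create_channel", "add_member"],
}

verifiers = json.loads(row["verifiers"])         # list of verifier specs
servers = json.loads(row["gym_servers_config"])  # MCP server config
print(type(verifiers).__name__, type(servers).__name__)  # list dict
```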

## Example Use Cases

**EnterpriseOps-Gym** can be used for:

- **Benchmarking LLM agents** on realistic enterprise workflows across IT, HR, CRM, and collaboration domains
- **Evaluating tool-use and planning** under long-horizon, multi-step, policy-constrained settings
- **Studying tool retrieval robustness** by comparing oracle vs. distractor-augmented tool modes
- **Developing new orchestration strategies**: the framework natively supports ReAct, Planner-ReAct, and Decomposing Planner
- **Studying failure modes** of state-of-the-art models on high-complexity enterprise tasks (best model: 34.1%)
- **Extending the benchmark** with new domains, tasks, or verifiers using the released Docker sandbox infrastructure

## Citation

```bibtex
@misc{enterpriseopsgym2026,
  title  = {EnterpriseOps-Gym: Environments and Evaluations for Stateful Agentic Planning and Tool Use in Enterprise Settings},
  author = {},
  year   = {2026}
}
```
assets/csmgym.png ADDED

Git LFS Details

  • SHA256: 88ae96640d90963d383b33c4df8efb72243036017c90dd68b2732dd33ffb1e8a
  • Pointer size: 131 Bytes
  • Size of remote file: 209 kB
assets/teaser.png ADDED

Git LFS Details

  • SHA256: a3c6e3581eb3364721be30da7eb841b911b30ec39838fb8005936cf49f731f38
  • Pointer size: 131 Bytes
  • Size of remote file: 855 kB