---
license: mit
task_categories:
- text-generation
tags:
- agents
- tool-use
- benchmark
- enterprise-api
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: test_id
    dtype: string
  - name: test_name
    dtype: string
  - name: service
    dtype: string
  - name: task_horizon
    dtype: int64
  - name: operation_type
    dtype: string
  - name: entity_scope
    dtype: string
  - name: information_availability
    dtype: string
  - name: prompt_ambiguity
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 256049
    num_examples: 179
  - name: test
    num_bytes: 74705
    num_examples: 45
  download_size: 124036
  dataset_size: 330754
---

# Agent-Diff Bench

[**Website**](https://agentdiff.dev) | [**Paper**](https://huggingface.co/papers/2602.11224) | [**GitHub**](https://github.com/agent-diff-bench/agent-diff)

Agent-Diff is a benchmarking framework for evaluating agentic Large Language Models (LLMs) on real-world tasks in which agents execute code against external APIs. The benchmark exposes real API interfaces (Slack, Box, Linear, Google Calendar) while sandboxing the environment in which calls are made and evaluated.

## Dataset Summary

The dataset contains 224 tasks covering enterprise software workflows, with an 80/20 train/test split (179 train, 45 test). It introduces a **state-diff contract**, which separates process from outcome: task success is defined by whether the expected change in environment state was achieved, rather than by fuzzy trace or parameter matching.
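Each record follows the schema declared in the YAML header above (`question`, `answer`, `test_id`, `service`, `task_horizon`, and so on). As a minimal sketch of working with that schema, here is how one might filter records along the annotated dimensions; the record values below are invented for illustration and are not taken from the benchmark:

```python
# Hypothetical records matching the dataset's declared feature schema;
# the field values are illustrative only.
records = [
    {"question": "Post a summary in the general channel", "answer": "...",
     "test_id": "slack-001", "test_name": "post_summary",
     "service": "slack", "task_horizon": 2, "operation_type": "write",
     "entity_scope": "channel", "information_availability": "full",
     "prompt_ambiguity": "low", "info": "{}"},
    {"question": "Create an issue for the login bug", "answer": "...",
     "test_id": "linear-001", "test_name": "create_issue",
     "service": "linear", "task_horizon": 1, "operation_type": "write",
     "entity_scope": "issue", "information_availability": "full",
     "prompt_ambiguity": "low", "info": "{}"},
]

# Select short-horizon Slack tasks.
slack_short = [r for r in records
               if r["service"] == "slack" and r["task_horizon"] <= 2]
print(len(slack_short))  # 1
```

The same filter expressions work on the actual splits once they are loaded, since the column names come from the dataset config.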

- **Services**: Slack, Linear, Box, Google Calendar.
- **Evaluation**: State-diff based (comparing "before" and "after" snapshots of the sandboxed environment).
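The state-diff idea can be illustrated with a small sketch. This is a simplification for intuition, not the benchmark's actual diff logic: snapshot the environment state before and after the agent run, collect every key that was added, removed, or changed, and compare that diff against the expected one.

```python
def state_diff(before: dict, after: dict) -> dict:
    """Return {key: (old, new)} for every key added, removed,
    or changed between two state snapshots."""
    keys = set(before) | set(after)
    return {
        k: (before.get(k), after.get(k))
        for k in keys
        if before.get(k) != after.get(k)
    }

# Toy Slack-like environment state before and after an agent run.
before = {"channels/general/topic": "old topic", "messages/count": 10}
after = {"channels/general/topic": "release day", "messages/count": 11}

# A task passes when the observed diff matches the expected diff,
# regardless of which API calls the agent made to get there.
expected = {
    "channels/general/topic": ("old topic", "release day"),
    "messages/count": (10, 11),
}
print(state_diff(before, after) == expected)  # True
```

Because only the resulting diff is checked, two agents that reach the same end state by different call sequences both pass, which is the point of separating process from outcome.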

## Sample Usage

The following example, adapted from the [GitHub repository](https://github.com/agent-diff-bench/agent-diff), shows how to run evaluations with the `agent-diff` SDK:

```python
import asyncio

from agent_diff import AgentDiff, PythonExecutorProxy, create_openai_tool
from agents import Agent, Runner

client = AgentDiff()


async def main():
    # List test suites (e.g., "Slack Bench")
    suite_list = client.list_test_suites(name="Slack Bench")
    slack_suite = suite_list.testSuites[0]
    suite = client.get_test_suite(slack_suite.id, expand=True)

    for test in suite.tests:
        prompt = test.prompt
        test_id = test.id

        # Initialise an isolated environment
        env = client.init_env(testId=test_id)

        # Start the run (takes a snapshot before execution)
        run = client.start_run(envId=env.environmentId, testId=test_id)

        # Set up the agent with a proxied code-execution tool
        python_executor = PythonExecutorProxy(env.environmentId)
        python_tool = create_openai_tool(python_executor)

        agent = Agent(
            name="Slack Assistant",
            instructions="Use the execute_python tool to interact with the Slack API. Authentication is handled automatically.",
            tools=[python_tool],
        )

        # Run the agent on the task
        response = await Runner.run(agent, prompt)

        # Compute the evaluation based on the state diff
        client.evaluate_run(runId=run.runId)
        run_result = client.get_results_for_run(runId=run.runId)

        print(f"Test: {test_id}, Score: {run_result.score}")

        # Clean up
        client.delete_env(envId=env.environmentId)


asyncio.run(main())
```

## Citation

```bibtex
@article{pysklo2025agentdiff,
  title={Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation},
  author={Hubert Marek Pysklo and others},
  journal={arXiv preprint arXiv:2602.11224},
  year={2025}
}
```