Add paper link, GitHub repository, and improve dataset card description

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +122 -43
README.md CHANGED
@@ -1,43 +1,122 @@
- ---
- license: mit
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: test
-     path: data/test-*
- dataset_info:
-   features:
-   - name: question
-     dtype: string
-   - name: answer
-     dtype: string
-   - name: test_id
-     dtype: string
-   - name: test_name
-     dtype: string
-   - name: service
-     dtype: string
-   - name: task_horizon
-     dtype: int64
-   - name: operation_type
-     dtype: string
-   - name: entity_scope
-     dtype: string
-   - name: information_availability
-     dtype: string
-   - name: prompt_ambiguity
-     dtype: string
-   - name: info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 256049
-     num_examples: 179
-   - name: test
-     num_bytes: 74705
-     num_examples: 45
-   download_size: 124036
-   dataset_size: 330754
- ---
+ ---
+ license: mit
+ task_categories:
+ - text-generation
+ tags:
+ - agents
+ - tool-use
+ - benchmark
+ - enterprise-api
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: test
+     path: data/test-*
+ dataset_info:
+   features:
+   - name: question
+     dtype: string
+   - name: answer
+     dtype: string
+   - name: test_id
+     dtype: string
+   - name: test_name
+     dtype: string
+   - name: service
+     dtype: string
+   - name: task_horizon
+     dtype: int64
+   - name: operation_type
+     dtype: string
+   - name: entity_scope
+     dtype: string
+   - name: information_availability
+     dtype: string
+   - name: prompt_ambiguity
+     dtype: string
+   - name: info
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 256049
+     num_examples: 179
+   - name: test
+     num_bytes: 74705
+     num_examples: 45
+   download_size: 124036
+   dataset_size: 330754
+ ---
+
+ # Agent-Diff Bench
+
+ [**Website**](https://agentdiff.dev) | [**Paper**](https://huggingface.co/papers/2602.11224) | [**GitHub**](https://github.com/agent-diff-bench/agent-diff)
+
+ Agent-Diff is a benchmarking framework for evaluating agentic large language models (LLMs) on real-world tasks that are completed by executing code against external APIs. The benchmark exposes real API interfaces (Slack, Box, Linear, Google Calendar) while sandboxing the environment in which calls are made and evaluated.
+
+ ## Dataset Summary
+
+ The dataset contains 224 tasks covering enterprise software workflows, provided with an 80/20 train/test split (179 train, 45 test). It introduces a **state-diff contract**, which separates process from outcome: task success is defined by whether the expected change in environment state was achieved, rather than by fuzzy trace or parameter matching.
+
+ - **Services**: Slack, Linear, Box, Google Calendar.
+ - **Evaluation**: State-diff based, comparing "before" and "after" snapshots of the sandboxed environment.
+
+ ## Sample Usage
+
+ The following example demonstrates how to run evaluations using the `agent-diff` SDK from the [GitHub repository](https://github.com/agent-diff-bench/agent-diff). Note that `Runner.run` is a coroutine, so this loop must execute inside an async function:
+
+ ```python
+ from agent_diff import AgentDiff, PythonExecutorProxy, create_openai_tool
+ from agents import Agent, Runner
+
+ client = AgentDiff()
+
+ # List test suites (e.g., "Slack Bench")
+ suite_list = client.list_test_suites(name="Slack Bench")
+ slack_suite = suite_list.testSuites[0]
+ suite = client.get_test_suite(slack_suite.id, expand=True)
+
+ for test in suite.tests:
+     prompt = test.prompt
+     test_id = test.id
+
+     # Initialise an isolated environment
+     env = client.init_env(testId=test_id)
+
+     # Start the run (takes a snapshot before execution)
+     run = client.start_run(envId=env.environmentId, testId=test_id)
+
+     # Set up the agent with a proxied code-execution tool
+     python_executor = PythonExecutorProxy(env.environmentId)
+     python_tool = create_openai_tool(python_executor)
+
+     agent = Agent(
+         name="Slack Assistant",
+         instructions="Use the execute_python tool to interact with the Slack API. Authentication is handled automatically.",
+         tools=[python_tool],
+     )
+
+     # Run the agent on the task
+     response = await Runner.run(agent, prompt)
+
+     # Compute the evaluation based on the state diff
+     client.evaluate_run(runId=run.runId)
+     run_result = client.get_results_for_run(runId=run.runId)
+
+     print(f"Test: {test_id}, Score: {run_result.score}")
+
+     # Clean up
+     client.delete_env(envId=env.environmentId)
+ ```
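
Since `Runner.run` above is awaited, the loop must live inside a coroutine driven by an event loop. A minimal, self-contained sketch of that pattern (with a hypothetical `run_one` standing in for the agent call and scoring):

```python
import asyncio

async def run_one(task_id: str) -> float:
    # Placeholder for `await Runner.run(agent, prompt)` plus evaluation.
    await asyncio.sleep(0)
    return 1.0

async def main() -> dict:
    scores = {}
    for task_id in ["test-1", "test-2"]:
        scores[task_id] = await run_one(task_id)
    return scores

# asyncio.run drives the coroutine from synchronous script code.
results = asyncio.run(main())
print(results)  # {'test-1': 1.0, 'test-2': 1.0}
```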
+
+ ## Citation
+
+ ```bibtex
+ @article{pysklo2025agentdiff,
+   title={Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation},
+   author={Hubert Marek Pysklo and others},
+   journal={arXiv preprint arXiv:2602.11224},
+   year={2025}
+ }
+ ```