Add paper link, GitHub repository, and improve dataset card description
Hi, I'm Niels from the community science team at Hugging Face. I've updated the dataset card to include:
- A link to the research paper and project website.
- The official GitHub repository link.
- The `text-generation` task category and relevant tags.
- A summary of the Agent-Diff benchmark and the enterprise APIs it covers.
- A sample usage code snippet found in the GitHub repository for running evaluations.
- A BibTeX citation for researchers to cite the work.
README.md (changed):
---
license: mit
task_categories:
- text-generation
tags:
- agents
- tool-use
- benchmark
- enterprise-api
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: test_id
    dtype: string
  - name: test_name
    dtype: string
  - name: service
    dtype: string
  - name: task_horizon
    dtype: int64
  - name: operation_type
    dtype: string
  - name: entity_scope
    dtype: string
  - name: information_availability
    dtype: string
  - name: prompt_ambiguity
    dtype: string
  - name: info
    dtype: string
  splits:
  - name: train
    num_bytes: 256049
    num_examples: 179
  - name: test
    num_bytes: 74705
    num_examples: 45
  download_size: 124036
  dataset_size: 330754
---

# Agent-Diff Bench

[**Website**](https://agentdiff.dev) | [**Paper**](https://huggingface.co/papers/2602.11224) | [**GitHub**](https://github.com/agent-diff-bench/agent-diff)

Agent-Diff is a benchmarking framework for evaluating agentic Large Language Models (LLMs) on real-world tasks in which agents execute code against external APIs. The benchmark provides access to real API interfaces (Slack, Box, Linear, Google Calendar) while sandboxing the environment in which calls are made and evaluated.

## Dataset Summary

The dataset contains 224 tasks covering enterprise software workflows, provided with an 80/20 train/test split. It introduces a **state-diff contract**, which separates process from outcome: task success is defined by whether the expected change in environment state was achieved, rather than by fuzzy matching of traces or call parameters.

- **Services**: Slack, Linear, Box, Google Calendar.
- **Evaluation**: State-diff based (comparing "before" and "after" snapshots of the sandboxed environment).
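To make the state-diff contract concrete, here is a minimal, self-contained sketch (not the benchmark's actual implementation, and using invented snapshot keys) of how success can be judged from environment snapshots rather than from the agent's call trace:

```python
def state_diff(before: dict, after: dict) -> dict:
    """Map each changed key to its (before, after) values."""
    return {
        key: (before.get(key), after.get(key))
        for key in before.keys() | after.keys()
        if before.get(key) != after.get(key)
    }

def task_passed(before: dict, after: dict, expected_diff: dict) -> bool:
    """A task succeeds iff the observed state change matches the expected one."""
    return state_diff(before, after) == expected_diff

# Hypothetical snapshots of a sandboxed Slack workspace, before and after the agent acts
before = {"channels": ("general",), "pinned": ()}
after = {"channels": ("general", "launch"), "pinned": ()}

# Expected outcome: a "launch" channel was created; nothing else changed
expected = {"channels": (("general",), ("general", "launch"))}

print(task_passed(before, after, expected))  # True
```

Any agent trajectory that produces the expected final state passes, regardless of which sequence of API calls got there.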

## Sample Usage

The following example demonstrates how to run evaluations using the `agent-diff` SDK, as found in the [GitHub repository](https://github.com/agent-diff-bench/agent-diff) (the agent loop is wrapped in an `async` function here so that `await Runner.run(...)` is valid):

```python
import asyncio

from agent_diff import AgentDiff, PythonExecutorProxy, create_openai_tool
from agents import Agent, Runner

client = AgentDiff()

async def main():
    # List test suites (e.g., "Slack Bench")
    suite_list = client.list_test_suites(name="Slack Bench")
    slack_suite = suite_list.testSuites[0]
    suite = client.get_test_suite(slack_suite.id, expand=True)

    for test in suite.tests:
        prompt = test.prompt
        test_id = test.id

        # Initialise an isolated environment
        env = client.init_env(testId=test_id)

        # Start the run (takes a snapshot before execution)
        run = client.start_run(envId=env.environmentId, testId=test_id)

        # Set up the agent with a proxied code-execution tool
        python_executor = PythonExecutorProxy(env.environmentId)
        python_tool = create_openai_tool(python_executor)

        agent = Agent(
            name="Slack Assistant",
            instructions="Use execute_python tool to interact with Slack API. Authentication is handled automatically.",
            tools=[python_tool],
        )

        # Run the agent on the task
        response = await Runner.run(agent, prompt)

        # Compute the evaluation based on the state diff
        client.evaluate_run(runId=run.runId)
        run_result = client.get_results_for_run(runId=run.runId)

        print(f"Test: {test_id}, Score: {run_result.score}")

        # Clean up
        client.delete_env(envId=env.environmentId)

asyncio.run(main())
```
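One caveat when adapting the snippet above: if the agent run raises, the sandboxed environment is never deleted. A `try`/`finally` guard avoids leaking environments. The sketch below is self-contained; `FakeClient` is a hypothetical stand-in for the real `AgentDiff` client, used only so the pattern is runnable here:

```python
from types import SimpleNamespace

class FakeClient:
    """Hypothetical stand-in for the AgentDiff client, illustration only."""
    def __init__(self):
        self.deleted = []

    def init_env(self, testId):
        return SimpleNamespace(environmentId=f"env-{testId}")

    def delete_env(self, envId):
        self.deleted.append(envId)

def run_one_test(client, test_id):
    env = client.init_env(testId=test_id)
    try:
        # ... run the agent and evaluate here; this may raise ...
        raise RuntimeError("simulated agent failure")
    finally:
        # The sandbox is deleted whether or not the run succeeded
        client.delete_env(envId=env.environmentId)

client = FakeClient()
try:
    run_one_test(client, "t-1")
except RuntimeError:
    pass

print(client.deleted)  # ['env-t-1']
```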

## Citation

```bibtex
@article{pysklo2025agentdiff,
  title={Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation},
  author={Hubert Marek Pysklo and others},
  journal={arXiv preprint arXiv:2602.11224},
  year={2025}
}
```