Upload folder using huggingface_hub

- DATASHEET.md +1 -1
- README.md +4 -4
- croissant.json +1 -1
DATASHEET.md CHANGED

```diff
@@ -8,7 +8,7 @@ _Following the [Datasheets for Datasets](https://arxiv.org/abs/1803.09010) frame
 
 ### For what purpose was the dataset created?
 
-AgentDisruptBench was created to
+AgentDisruptBench was created to study **evaluation as a scientific object** in the context of AI agent resilience under runtime tool-call disruptions. Existing benchmarks measure *whether* agents can use tools correctly, but assume tools behave perfectly, an unrealistic assumption in production environments. This project provides an evaluation methodology, consisting of structured tasks paired with a Disruption Engine, to rigorously measure agent reliability, recovery strategies, and graceful degradation via the $R(k, \epsilon, \lambda)$ surface.
 
 ### Who created the dataset and on behalf of which entity?
 
```
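The $R(k, \epsilon, \lambda)$ surface is named but not defined anywhere in this diff. As an illustration only, here is a grid-sweep sketch under purely hypothetical readings of the symbols (assuming $k$ counts injected disruptions, $\epsilon$ their severity, $\lambda$ their rate, and $R$ the mean task-success rate at each grid point; the repo's actual definitions may differ):

```python
from itertools import product

def resilience_surface(run_success, ks, epsilons, lams, trials=50):
    """Estimate R(k, eps, lam) as the mean success rate over repeated
    episodes at each grid point. `run_success(k, eps, lam)` is a
    user-supplied callable that runs one disrupted episode and returns
    True if the agent still completed the task."""
    surface = {}
    for k, eps, lam in product(ks, epsilons, lams):
        wins = sum(run_success(k, eps, lam) for _ in range(trials))
        surface[(k, eps, lam)] = wins / trials
    return surface
```

Plotting the resulting dict as a heatmap per fixed $\lambda$ would give one slice of the surface; the episode runner and the symbol semantics above are assumptions, not part of the released artifact.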
README.md CHANGED

````diff
@@ -6,7 +6,7 @@ task_categories:
 - text-generation
 tags:
 - agents
--
+- evaluation-methodology
 - tool-calling
 - resilience
 - disruption
@@ -20,11 +20,11 @@ size_categories:
 
 # AgentDisruptBench
 
-
+An evaluation methodology for measuring AI agent resilience under runtime tool-call disruptions.
 
 ## Overview
 
-AgentDisruptBench provides **100
+AgentDisruptBench provides a systematic methodology, backed by **100 base tasks and variants** across **4 domains**, a **20-type disruption taxonomy**, and **9 disruption severity profiles**, to study and measure how well LLM-based agents handle real-world tool failures.
 
 ## Task Statistics
 
@@ -77,7 +77,7 @@ tasks = registry.filter(domain="retail", max_difficulty=3)
 
 ```bibtex
 @inproceedings{agentdisruptbench2026,
-title={AgentDisruptBench:
+title={AgentDisruptBench: An Evaluation Methodology for AI Agent Resilience Under Runtime Tool-Call Disruptions},
 author={AgentDisruptBench Contributors},
 year={2026}
 }
````
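The README hunk context shows the call `tasks = registry.filter(domain="retail", max_difficulty=3)`, but the registry API is not otherwise visible in this diff. A minimal hypothetical stand-in that makes that call work (the real class in the repo may be structured differently):

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    domain: str      # e.g. one of the 4 domains: retail, travel, finance, devops
    difficulty: int

class TaskRegistry:
    """Sketch of a registry supporting the filter call shown in the README."""

    def __init__(self, tasks):
        self.tasks = list(tasks)

    def filter(self, domain=None, max_difficulty=None):
        # Apply only the criteria the caller supplied.
        out = self.tasks
        if domain is not None:
            out = [t for t in out if t.domain == domain]
        if max_difficulty is not None:
            out = [t for t in out if t.difficulty <= max_difficulty]
        return out

registry = TaskRegistry([
    Task("retail_001", "retail", 2),
    Task("retail_002", "retail", 5),
    Task("travel_001", "travel", 1),
])
tasks = registry.filter(domain="retail", max_difficulty=3)  # the README's call
```

The `Task` fields and `TaskRegistry` constructor are assumptions for illustration; only the `filter(domain=..., max_difficulty=...)` signature is taken from the diff itself.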
croissant.json CHANGED

```diff
@@ -43,7 +43,7 @@
 },
 "@type": "sc:Dataset",
 "name": "AgentDisruptBench",
-"description": "
+"description": "An evaluation methodology for assessing AI agent resilience under runtime tool-call disruptions, designed for the NeurIPS Evaluations & Datasets Track. Contains 100 base tasks and variants across 4 domains (Retail, Travel, Finance, DevOps) with 20 disruption types and 9 disruption profiles.",
 "conformsTo": "http://mlcommons.org/croissant/1.0",
 "url": "https://github.com/Kavirubc/AgentDisruptBench",
 "license": "https://opensource.org/licenses/MIT",
```