Croissant Metadata

The machine-readable Croissant record for this dataset (partial extract: the `distribution` and `recordSet` arrays are truncated in the source, so only their first entries appear):

```json
{
  "@context": {
    "@language": "en",
    "@vocab": "https://schema.org/",
    "citeAs": "cr:citeAs",
    "column": "cr:column",
    "conformsTo": "dct:conformsTo",
    "cr": "http://mlcommons.org/croissant/",
    "data": { "@id": "cr:data", "@type": "@json" },
    "dataBiases": "cr:dataBiases",
    "dataCollection": "cr:dataCollection",
    "dataType": { "@id": "cr:dataType", "@type": "@vocab" },
    "dct": "http://purl.org/dc/terms/",
    "extract": "cr:extract",
    "field": "cr:field",
    "fileProperty": "cr:fileProperty",
    "fileSet": "cr:fileSet",
    "format": "cr:format",
    "includes": "cr:includes",
    "isLiveDataset": "cr:isLiveDataset",
    "jsonPath": "cr:jsonPath",
    "key": "cr:key",
    "md5": "cr:md5",
    "parentField": "cr:parentField",
    "path": "cr:path",
    "personalSensitiveInformation": "cr:personalSensitiveInformation",
    "recordSet": "cr:recordSet",
    "references": "cr:references",
    "regex": "cr:regex",
    "repeated": "cr:repeated",
    "replace": "cr:replace",
    "sc": "https://schema.org/",
    "separator": "cr:separator",
    "source": "cr:source",
    "subField": "cr:subField",
    "transform": "cr:transform"
  },
  "@type": "sc:Dataset",
  "name": "AgentDisruptBench",
  "description": "An evaluation methodology for measuring AI agent resilience under runtime tool-call disruptions, designed for the NeurIPS Evaluations & Datasets Track. Contains 100 base tasks and variants across 4 domains (Retail, Travel, Finance, DevOps) with 20 disruption types and 9 disruption profiles.",
  "conformsTo": "http://mlcommons.org/croissant/1.0",
  "url": "https://github.com/Kavirubc/AgentDisruptBench",
  "license": "https://opensource.org/licenses/MIT",
  "version": "0.1.0",
  "datePublished": "2026-04-03T00:00:00",
  "creator": {
    "@type": "sc:Organization",
    "name": "AgentDisruptBench Contributors"
  },
  "keywords": [
    "AI agents",
    "benchmarks",
    "resilience",
    "tool-calling",
    "disruptions",
    "evaluation",
    "LLM",
    "fault-injection",
    "reliability"
  ],
  "isLiveDataset": false,
  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "retail-tasks",
      "name": "retail.yaml",
      "description": "20 standard retail domain tasks (difficulty 1-5)",
      "contentUrl": "python/agentdisruptbench/tasks/builtin/retail.yaml",
      "encodingFormat": "application/x-yaml",
      "sha256": "5c00f518f411c868ab8ee81c35c6..."
    }
  ],
  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "@id": "tasks",
      "name": "Benchmark Tasks",
      "description": "The 100 benchmark tasks across 4 domains and 3 task types.",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "tasks/task_id",
          "name": "task_id",
          "description": "Unique task identi..."
        }
      ]
    }
  ]
}
```
AgentDisruptBench
An evaluation methodology for measuring AI agent resilience under runtime tool-call disruptions.
Overview
AgentDisruptBench provides a systematic methodology for studying and measuring how well LLM-based agents handle real-world tool failures, backed by 100 base tasks and variants across 4 domains, a 20-type disruption taxonomy, and 9 disruption severity profiles.
Task Statistics
| Domain | Standard | Adversarial | Impossible | Handover | Total |
|---|---|---|---|---|---|
| Retail | 20 | 2 | 2 | 1 | 25 |
| Travel | 20 | 2 | 2 | 1 | 25 |
| Finance | 20 | 2 | 2 | 1 | 25 |
| DevOps | 20 | 2 | 2 | 1 | 25 |
| Total | 80 | 8 | 8 | 4 | 100 |
Disruption Taxonomy (20 Types)
| Category | Types |
|---|---|
| Timing | timeout, latency |
| HTTP Status | http_429, http_401, http_403, http_500, http_502, http_503 |
| Response Content | malformed_json, truncated, null_response, missing_fields, type_mismatch, schema_drift, wrong_data |
| Behavioral | intermittent, flapping, quota_exhausted, auth_expiry, cascading |
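To make the taxonomy concrete, here is a minimal sketch of how a few of these disruption types could be injected at the tool-call boundary. The `inject` wrapper, the handler names, and the stand-in `lookup_order` tool are all illustrative assumptions, not the AgentDisruptBench API.

```python
import random

class ToolDisruption(Exception):
    """Raised when an injected fault replaces the real tool response."""

def http_500(call, *args, **kwargs):
    # Simulate a server-side failure instead of invoking the tool.
    raise ToolDisruption("HTTP 500: internal server error")

def null_response(call, *args, **kwargs):
    # The tool "succeeds" but returns nothing useful.
    return None

def intermittent(call, *args, **kwargs):
    # Fail roughly half the time, pass through otherwise.
    if random.random() < 0.5:
        raise ToolDisruption("intermittent failure")
    return call(*args, **kwargs)

DISRUPTIONS = {
    "http_500": http_500,
    "null_response": null_response,
    "intermittent": intermittent,
}

def inject(disruption_type, call):
    """Wrap a tool call so it exhibits the named disruption."""
    handler = DISRUPTIONS[disruption_type]
    def wrapped(*args, **kwargs):
        return handler(call, *args, **kwargs)
    return wrapped

def lookup_order(order_id):
    # Hypothetical retail tool used only for this example.
    return {"order_id": order_id, "status": "shipped"}

flaky_lookup = inject("null_response", lookup_order)
```

A resilient agent is then expected to detect the bad response (the `None`, the raised error) and recover, for example by retrying or escalating.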
Key Metric: R(k, ε, λ) Reliability Surface
- k-consistency: Pass rate across repeated seeds (same task, same profile)
- ε-robustness: Pass rate across task-wording variants
- λ-fault-tolerance: Pass rate across disruption profiles
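The three axes above can be aggregated from recorded trial outcomes; a toy sketch follows. The `(seed, variant, profile)` keying and the `"base"`/`"none"` baselines are assumptions made for illustration, not the benchmark's exact definition of R(k, ε, λ).

```python
# Hedged sketch: derive the three axes of the reliability surface from
# trial results keyed by (seed, variant, profile) -> bool (task passed).
def reliability_surface(results):
    seeds = {s for s, _, _ in results}
    variants = {v for _, v, _ in results}
    profiles = {p for _, _, p in results}

    def pass_rate(keys):
        outcomes = [results[k] for k in keys]
        return sum(outcomes) / len(outcomes)

    return {
        # k-consistency: same wording ("base"), no disruption ("none"),
        # varying only the random seed.
        "k_consistency": pass_rate([(s, "base", "none") for s in seeds]),
        # eps-robustness: vary the task wording at a fixed seed/profile.
        "eps_robustness": pass_rate([(0, v, "none") for v in variants]),
        # lambda-fault-tolerance: vary the disruption profile.
        "lambda_fault_tolerance": pass_rate([(0, "base", p) for p in profiles]),
    }
```

Each entry is a plain pass rate over the slice of trials that varies exactly one axis while holding the other two fixed.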
Files
- `tasks/retail.yaml` — 20 retail domain tasks
- `tasks/travel.yaml` — 20 travel domain tasks
- `tasks/finance.yaml` — 20 finance domain tasks
- `tasks/devops.yaml` — 20 DevOps domain tasks
- `tasks/adversarial.yaml` — 8 adversarial trap tasks
- `tasks/impossible.yaml` — 8 impossible tasks
- `tasks/handover.yaml` — 4 handover tasks
- `tasks/variants.yaml` — 18 ε-robustness task variants
- `profiles/` — 9 disruption profile definitions
Usage
```bash
pip install agentdisruptbench
```

```python
from agentdisruptbench import TaskRegistry, DisruptionEngine

registry = TaskRegistry.from_builtin()
tasks = registry.filter(domain="retail", max_difficulty=3)
```
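For intuition, here is a self-contained sketch of the kind of attribute filtering `registry.filter` performs. The task dicts and field names below are illustrative assumptions; the real tasks live in the builtin YAML files and may carry more fields.

```python
# Illustrative task records, not the real schema.
TASKS = [
    {"id": "retail-001", "domain": "retail", "difficulty": 1},
    {"id": "retail-014", "domain": "retail", "difficulty": 4},
    {"id": "travel-003", "domain": "travel", "difficulty": 2},
]

def filter_tasks(tasks, domain=None, max_difficulty=None):
    """Keep tasks matching the domain and the difficulty ceiling."""
    selected = []
    for task in tasks:
        if domain is not None and task["domain"] != domain:
            continue
        if max_difficulty is not None and task["difficulty"] > max_difficulty:
            continue
        selected.append(task)
    return selected

easy_retail = filter_tasks(TASKS, domain="retail", max_difficulty=3)
```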
Citation
```bibtex
@inproceedings{agentdisruptbench2026,
  title={AgentDisruptBench: An Evaluation Methodology for AI Agent Resilience Under Runtime Tool-Call Disruptions},
  author={AgentDisruptBench Contributors},
  year={2026}
}
```
License
MIT