kavirubc committed on
Commit 89f1d71 · verified · 1 Parent(s): b374b66

Upload folder using huggingface_hub

Files changed (3):
  1. DATASHEET.md +1 -1
  2. README.md +4 -4
  3. croissant.json +1 -1
DATASHEET.md CHANGED
@@ -8,7 +8,7 @@ _Following the [Datasheets for Datasets](https://arxiv.org/abs/1803.09010) frame
 
 ### For what purpose was the dataset created?
 
-AgentDisruptBench was created to evaluate AI agent resilience under runtime tool-call disruptions. Existing benchmarks measure *whether* agents can use tools correctly, but assume tools behave perfectly — an unrealistic assumption in production environments where APIs time out, return malformed responses, enforce rate limits, and cascade failures. This benchmark fills the gap by providing structured tasks paired with configurable disruption profiles to measure agent reliability, recovery strategies, and graceful degradation.
+AgentDisruptBench was created to study **evaluation as a scientific object** in the context of AI agent resilience under runtime tool-call disruptions. Existing benchmarks measure *whether* agents can use tools correctly, but assume tools behave perfectly — an unrealistic assumption in production environments. This project provides an evaluation methodology, consisting of structured tasks paired with a Disruption Engine, to rigorously measure how we assess agent reliability, recovery strategies, and graceful degradation using the $R(k, \epsilon, \lambda)$ surface.
 
 ### Who created the dataset and on behalf of which entity?
 
README.md CHANGED
@@ -6,7 +6,7 @@ task_categories:
 - text-generation
 tags:
 - agents
-- benchmarks
+- evaluation-methodology
 - tool-calling
 - resilience
 - disruption
@@ -20,11 +20,11 @@ size_categories:
 
 # AgentDisruptBench
 
-A benchmark for evaluating AI agent resilience under runtime tool-call disruptions.
+An evaluation methodology for measuring AI agent resilience under runtime tool-call disruptions.
 
 ## Overview
 
-AgentDisruptBench provides **100 benchmark tasks** across **4 domains** with a systematic **20-type disruption taxonomy** and **9 disruption severity profiles** to measure how well LLM-based agents handle real-world tool failures.
+AgentDisruptBench provides a systematic methodology, backed by **100 base tasks and variants** across **4 domains**, a **20-type disruption taxonomy**, and **9 disruption severity profiles**, to study and measure how well LLM-based agents handle real-world tool failures.
 
 ## Task Statistics
 
@@ -77,7 +77,7 @@ tasks = registry.filter(domain="retail", max_difficulty=3)
 
 ```bibtex
 @inproceedings{agentdisruptbench2026,
-  title={AgentDisruptBench: Evaluating AI Agent Resilience Under Runtime Tool-Call Disruptions},
+  title={AgentDisruptBench: An Evaluation Methodology for AI Agent Resilience Under Runtime Tool-Call Disruptions},
   author={AgentDisruptBench Contributors},
   year={2026}
 }
croissant.json CHANGED
@@ -43,7 +43,7 @@
   },
   "@type": "sc:Dataset",
   "name": "AgentDisruptBench",
-  "description": "A benchmark for evaluating AI agent resilience under runtime tool-call disruptions. Contains 100 tasks across 4 domains (Retail, Travel, Finance, DevOps) with 20 disruption types and 9 disruption profiles.",
+  "description": "An evaluation methodology for measuring AI agent resilience under runtime tool-call disruptions, designed for the NeurIPS Evaluations & Datasets Track. Contains 100 base tasks and variants across 4 domains (Retail, Travel, Finance, DevOps) with 20 disruption types and 9 disruption profiles.",
  "conformsTo": "http://mlcommons.org/croissant/1.0",
  "url": "https://github.com/Kavirubc/AgentDisruptBench",
  "license": "https://opensource.org/licenses/MIT",
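
The README hunk above anchors at a usage snippet, `tasks = registry.filter(domain="retail", max_difficulty=3)`. A minimal sketch of what such a task registry could look like — the `TaskRegistry` class, `Task` fields, and sample task IDs here are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass


@dataclass
class Task:
    task_id: str
    domain: str      # e.g. one of the 4 domains: retail, travel, finance, devops
    difficulty: int  # assumed integer difficulty scale


class TaskRegistry:
    """Hypothetical registry; the real AgentDisruptBench API may differ."""

    def __init__(self, tasks):
        self._tasks = list(tasks)

    def filter(self, domain=None, max_difficulty=None):
        # Apply each criterion only when the caller supplies it.
        out = self._tasks
        if domain is not None:
            out = [t for t in out if t.domain == domain]
        if max_difficulty is not None:
            out = [t for t in out if t.difficulty <= max_difficulty]
        return out


registry = TaskRegistry([
    Task("retail-001", "retail", 2),
    Task("retail-002", "retail", 4),
    Task("travel-001", "travel", 1),
])
tasks = registry.filter(domain="retail", max_difficulty=3)
print([t.task_id for t in tasks])  # → ['retail-001']
```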