---
language:
- en
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
pretty_name: ToolMisuseBench
size_categories:
- 1K<n<10K
tags:
- benchmark
- tool-use
- agent-evaluation
- robustness
- recovery
- deterministic
---
# ToolMisuseBench
ToolMisuseBench is a deterministic, offline benchmark dataset for evaluating tool-using agents under realistic failure conditions (schema misuse, execution failures, interface drift) and for measuring recovery under budget constraints.
This dataset is intended for reproducible evaluation of agent tool-use behavior, not for training a general-purpose language model.
## Dataset Summary
ToolMisuseBench evaluates whether an agent can:
- make valid tool calls under schema constraints
- recover after failures (timeouts, rate limits, authz, drift, adversarial errors)
- satisfy task goals under bounded tool-call/step/retry budgets
- minimize policy violations and invalid tool invocations
All tasks are synthetic and generated with deterministic seeds to ensure reproducibility.
## Repository and Evaluator
- Project repository (code + evaluator + baselines):
https://github.com/akgitrepos/toolmisusebench
- Recommended evaluation flow uses the project CLI and harness.
## Supported Evaluation Use Cases
- baseline benchmarking for tool-using agents
- robustness testing under controlled tool failures
- recovery-quality analysis after failure injection
- budgeted success tradeoff analysis (success vs tool-call cap)
## Data Structure
Dataset layout:
- `train/tasks.jsonl`
- `dev/tasks.jsonl`
- `test_public/tasks.jsonl`
- `manifest.json`
- `v0_1_freeze.json`
Each row in `tasks.jsonl` is a single benchmark task containing:
- `task_id`
- `split` (`train | dev | test_public`)
- `difficulty` (`easy | medium | hard`)
- `domain` (`crud | retrieval | files | scheduling | mixed`)
- `instruction`
- `toolset_id`
- `tool_schemas`
- `initial_state`
- `success_criteria`
- `budget` (`max_steps`, `max_tool_calls`, `max_retries`, `timeout_ms`)
- `fault_plan`
- `gold_summary` (optional)
- `seed`
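The per-task fields above can be read with a few lines of Python. Below is a minimal sketch using a hypothetical example row; the field names follow the list above, but the concrete values are illustrative and not drawn from the released files.

```python
import json

# Hypothetical task row mirroring the documented schema. Values are
# illustrative only; they are not taken from the released dataset.
sample_line = json.dumps({
    "task_id": "crud-easy-0001",
    "split": "train",
    "difficulty": "easy",
    "domain": "crud",
    "instruction": "Create a record, then update its status field.",
    "toolset_id": "crud_v1",
    "tool_schemas": [{"name": "create_record", "parameters": {"type": "object"}}],
    "initial_state": {"records": []},
    "success_criteria": {"type": "state_match"},
    "budget": {"max_steps": 10, "max_tool_calls": 8, "max_retries": 2, "timeout_ms": 5000},
    "fault_plan": [],
    "gold_summary": None,
    "seed": 42,
})

def load_tasks(lines):
    """Parse one benchmark task per non-empty JSONL line."""
    return [json.loads(line) for line in lines if line.strip()]

tasks = load_tasks([sample_line])
```

In practice you would pass `open("train/tasks.jsonl")` (which iterates lines) instead of the in-memory list.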
## Dataset Size (v0.1 Release)
- Train: 5000
- Dev: 800
- Test Public: 1000
- Total: 6800
## Domains
- CRUD
- Retrieval
- Files
- Scheduling
- Mixed
## Fault Model
Supported fault types:
- `schema_drift`
- `rate_limit`
- `timeout`
- `authz`
- `adversarial_error`
Faults are declaratively specified per task and replayed deterministically.
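Deterministic replay of a declarative fault plan can be sketched as a lookup from tool-call index to fault payload. The trigger shape (`on_call_index`) and the payload strings below are assumptions for illustration; the released fault plans may use different field names.

```python
# Assumed payloads per fault type, for illustration only.
FAULT_PAYLOADS = {
    "rate_limit": {"error": "429 Too Many Requests"},
    "timeout": {"error": "timeout"},
    "authz": {"error": "403 Forbidden"},
}

def apply_fault_plan(fault_plan, call_index):
    """Return the fault payload scheduled for this tool-call index, if any.

    Because the plan is data, replaying the same plan against the same
    call sequence always injects the same faults.
    """
    for fault in fault_plan:
        if fault["trigger"].get("on_call_index") == call_index:
            return FAULT_PAYLOADS.get(fault["type"], {"error": fault["type"]})
    return None

plan = [{"type": "rate_limit", "trigger": {"on_call_index": 2}}]
```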
## Viewer Note on Null Values
In the Hugging Face table viewer, nested fields inside `fault_plan.trigger` and `fault_plan.payload`
may appear as `null` for some rows.
This is expected: different fault types use different subsets of fields, and the viewer displays a
unified schema across all rows. A `null` value in this context typically means "not applicable for
this fault type," not missing or corrupted data.
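When processing rows programmatically, the "not applicable" nulls can simply be dropped. A small sketch, using a hypothetical trigger record as a unified viewer schema might render it:

```python
def active_fields(record):
    """Keep only the fields that apply to this fault type (drop nulls)."""
    return {k: v for k, v in record.items() if v is not None}

# Hypothetical rate_limit trigger with viewer-padded null fields:
trigger = {"on_call_index": 2, "after_ms": None, "tool_name": None}
```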
## Data Generation
Generated synthetically using deterministic templates, seeded randomization, and task-level coherence checks.
Generation reference command:
```bash
toolmisusebench generate \
  --version v0.1 \
  --out data/toolmisusebench_v0_1 \
  --seed 42 \
  --size-profile large
```
Coherence and quality audit reference command:
```bash
python -m generator.quality_report \
  --dataset data/toolmisusebench_v0_1 \
  --splits train,dev,test_public
```
## Scoring and Evaluation
Use the official evaluator in the project repo.
Example:
```bash
toolmisusebench eval \
  --dataset data/toolmisusebench_v0_1 \
  --split test_public \
  --agent heuristic \
  --report out/report.json
```
For detailed metric definitions, see `SCORING.md` in this dataset repository.
## Reproducibility Notes
- deterministic generation and replay under fixed seeds
- per-task fault plans are deterministic
- checksums included in `manifest.json`
- freeze metadata included in `v0_1_freeze.json`
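Verifying the downloaded files against the recorded checksums can be sketched as below. The manifest layout assumed here (`{"files": {relative_path: sha256_hex}}`) is an assumption for illustration; adapt the key names to the actual `manifest.json`.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(dataset_dir):
    """Compare each listed file's digest to the manifest (assumed layout)."""
    root = Path(dataset_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    return {p: sha256_of(root / p) == digest
            for p, digest in manifest["files"].items()}

# Self-contained demo against a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "train").mkdir()
    data = b'{"task_id": "t1"}\n'
    (root / "train" / "tasks.jsonl").write_bytes(data)
    (root / "manifest.json").write_text(json.dumps(
        {"files": {"train/tasks.jsonl": hashlib.sha256(data).hexdigest()}}))
    results = verify(root)
```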
## Limitations
- synthetic tasks do not capture all real-world API/tool semantics
- benchmark is focused on controlled robustness comparisons, not full production realism
## Ethics and Privacy
- no personal data
- no proprietary user logs
- no sensitive external data sources used
## License
Dataset: CC-BY-4.0
Code/evaluator: MIT (see project repository)
## Citation
If you use ToolMisuseBench, please cite the project.
```bibtex
@misc{toolmisusebench2026,
  title={ToolMisuseBench: A Deterministic Benchmark for Tool Misuse and Recovery in Agentic Systems},
  author={ToolMisuseBench Authors},
  year={2026},
  howpublished={\url{https://github.com/akgitrepos/toolmisusebench}}
}
```