ToolMisuseBench Scoring Protocol
This document defines the official scoring protocol for ToolMisuseBench.
Evaluation Setup
Each task is executed as an episode:
- environment reset with task initial state
- agent receives observation (instruction, tools, transcript, remaining budget, last error)
- agent proposes tool action
- environment validates action, applies faults, executes tool call
- process repeats until success or termination (budget/retry/agent stop)
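The episode loop above can be sketched roughly as follows. This is an illustrative sketch, not the benchmark's actual code: the class and method names (`env.observe`, `env.step`, `result.done`, `env.task_success`) are assumptions about the harness API.

```python
# Hypothetical sketch of the episode loop; names are illustrative, not the real API.
def run_episode(env, agent, budget):
    env.reset()                    # environment reset with task initial state
    agent.reset()
    for _ in range(budget):
        obs = env.observe()        # instruction, tools, transcript, budget, last error
        action = agent.act(obs)    # agent proposes a tool action
        if action is None:         # agent stop
            break
        result = env.step(action)  # validate action, apply faults, execute tool call
        if result.done:            # success or terminal failure
            break
    return env.task_success()
```

Termination on budget exhaustion falls out of the bounded loop; retry limits and fault injection live inside `env.step` in this sketch.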
Core Per-Task Metrics
For each task:
- TaskSuccess (0/1): 1 if the final state satisfies `success_criteria`, else 0.
- PolicyViolations (count): number of policy-related and schema-invalid behaviors (e.g., invalid arguments, authz-denied calls).
- InvalidCallRate (float): `invalid_calls / total_tool_calls`.
- RecoverySuccess (0/1): 1 if the task succeeds after at least one injected fault is encountered.
- TimeToRecovery (int or null): steps from the first observed injected fault to the first subsequent successful tool call.
- ToolCallsUsed (count): number of tool calls made in the episode.
- BudgetExceeded (0/1): 1 if the episode terminates due to `budget_exceeded` or `retry_exceeded`.
- CatastrophicFailure (0/1): 1 if the budget is exceeded, the invalid-call threshold is exceeded, or the task fails terminally.
- PrimaryFault (string): first fault type in the task's fault plan, or `clean` if no faults were injected.
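Two of these metrics can be derived mechanically from an episode trace. The sketch below is illustrative only; the trace is assumed to be a list of per-step dicts with `"fault"`, `"success"`, and `"valid"` keys, which is not necessarily the benchmark's trace schema.

```python
# Illustrative derivations of InvalidCallRate and TimeToRecovery from a trace
# of step dicts with assumed keys "fault", "success", and "valid".
def invalid_call_rate(trace):
    calls = len(trace)
    invalid = sum(1 for step in trace if not step["valid"])
    return invalid / calls if calls else 0.0

def time_to_recovery(trace):
    fault_idx = next((i for i, s in enumerate(trace) if s["fault"]), None)
    if fault_idx is None:
        return None                  # no injected fault observed -> null
    for dist, step in enumerate(trace[fault_idx + 1:], start=1):
        if step["success"]:
            return dist              # steps from fault to first successful call
    return None                      # fault observed but never recovered
```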
Aggregate Metrics
Per-agent aggregates are the mean over tasks for each scalar metric, with one exception:
- TimeToRecovery: mean over non-null values only
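A minimal sketch of this aggregation rule, assuming per-task metrics arrive as a list of dicts (the dict layout is an assumption, not the benchmark's report format):

```python
from statistics import mean

# Sketch: mean over tasks for each scalar metric, with TimeToRecovery
# averaged over non-null values only, per the rule above.
def aggregate(per_task):
    agg = {}
    for key in per_task[0]:
        values = [t[key] for t in per_task]
        if key == "TimeToRecovery":
            values = [v for v in values if v is not None]
        agg[key] = mean(values) if values else None
    return agg
```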
Budgeted Success Curve and AUC
Compute success under tool-call caps k ∈ {4, 8, 16, 32}. Success at cap k is the fraction of tasks where:
- TaskSuccess = 1, and
- ToolCallsUsed <= k

AUC is the normalized trapezoidal area under the success-vs-budget curve.
Official CLI Commands
Evaluate one agent
toolmisusebench eval \
--dataset data/toolmisusebench_v0_1 \
--split test_public \
--agent heuristic \
--report out/report.json
Reproduce experiment artifacts
toolmisusebench reproduce-paper \
--config experiments/configs/v0_1.yaml
Output Artifacts
Typical outputs:
- `report.json` (aggregate + per-task metrics + budgeted success)
- `report.traces.jsonl` (episode traces)
- experiment outputs: `results.json`, `table1_overall.csv`, `table2_fault_breakdown.csv`, `figure1_budgeted_success.csv`, `figure2_time_to_recovery.csv`
Quality and Coherence Validation
Dataset generation includes coherence checks ensuring each task is self-consistent:
- domain-state consistency
- valid tool references in criteria/fault plans
- structurally valid success criteria
- schema-drift argument validity checks
Dataset generation fails if any coherence check reports an issue.
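One of the checks above, valid tool references in criteria and fault plans, could look roughly like this. The field names (`"tools"`, `"success_criteria"`, `"fault_plan"`) are assumptions about the task schema, not the real one.

```python
# Hypothetical coherence check: every tool referenced by a success criterion
# or fault-plan entry must exist in the task's tool list. Field names assumed.
def check_task_coherence(task):
    issues = []
    tool_names = {t["name"] for t in task["tools"]}
    for crit in task["success_criteria"]:
        if crit.get("tool") and crit["tool"] not in tool_names:
            issues.append(f"criterion references unknown tool {crit['tool']!r}")
    for fault in task["fault_plan"]:
        if fault.get("tool") and fault["tool"] not in tool_names:
            issues.append(f"fault plan references unknown tool {fault['tool']!r}")
    return issues  # generation fails if this list is non-empty
```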
Notes for Custom Agents
Custom agents can be evaluated via:
toolmisusebench eval \
--dataset data/toolmisusebench_v0_1 \
--split test_public \
--agent-module your_pkg.your_agent:YourAgent \
--agent-kwargs '{"param":"value"}' \
--report out/custom_agent_report.json
Required interface:
- `reset() -> None`
- `act(observation) -> Action | None`
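A minimal agent skeleton satisfying this interface might look like the following. The observation keys and action shape shown here are illustrative assumptions; only the `reset`/`act` signatures come from the interface above.

```python
# Minimal sketch of a custom agent. Returning None from act() signals the
# agent is stopping; observation/action shapes are assumed, not the real schema.
class YourAgent:
    def __init__(self, param=None):  # kwargs arrive via --agent-kwargs
        self.param = param
        self.steps = 0

    def reset(self) -> None:
        self.steps = 0

    def act(self, observation):
        self.steps += 1
        if observation.get("remaining_budget", 0) <= 0:
            return None              # stop when the budget is exhausted
        # Placeholder policy: a real agent would inspect the transcript and
        # last error, then choose a tool and arguments.
        return {"tool": "noop", "arguments": {}}
```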