# ToolMisuseBench Scoring Protocol

This document defines the official scoring protocol for ToolMisuseBench.

## Evaluation Setup

Each task is executed as an episode:

1. environment reset with task initial state
2. agent receives observation (instruction, tools, transcript, remaining budget, last error)
3. agent proposes tool action
4. environment validates action, applies faults, executes tool call
5. process repeats until success or termination (budget/retry/agent stop)
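
The episode loop above can be sketched as follows. The `ToyEnv`/`ToyAgent` classes are illustrative stand-ins, not the benchmark's actual API:

```python
# Sketch of the episode loop; ToyEnv/ToyAgent are hypothetical stand-ins,
# not ToolMisuseBench's actual classes.

class ToyEnv:
    """Minimal environment: the task succeeds once the agent calls 'fix'."""
    def reset(self):
        return {"done": False, "last_error": None}

    def step(self, action):
        done = action == "fix"
        return {"done": done, "last_error": None if done else "bad tool"}

class ToyAgent:
    """Tries a wrong tool first, then recovers after seeing an error."""
    def reset(self):
        self.calls = 0

    def act(self, observation):
        self.calls += 1
        return "fix" if observation.get("last_error") else "wrong_tool"

def run_episode(env, agent, max_steps=64):
    obs = env.reset()                # 1. reset to the task's initial state
    agent.reset()
    for _ in range(max_steps):
        action = agent.act(obs)      # 3. agent proposes a tool action
        if action is None:           # agent may stop voluntarily
            break
        obs = env.step(action)       # 4. validate, apply faults, execute
        if obs["done"]:              # 5. success or terminal failure
            break
    return obs
```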

## Core Per-Task Metrics

For each task:

- **TaskSuccess (0/1)**
  1 if final state satisfies `success_criteria`, else 0.

- **PolicyViolations (count)**
  Count of policy-related and schema-invalid behaviors (e.g., invalid arguments, authz-denied calls).

- **InvalidCallRate (float)**
  `invalid_calls / total_tool_calls`.

- **RecoverySuccess (0/1)**
  1 if task succeeds after at least one injected fault is encountered.

- **TimeToRecovery (int or null)**
  Steps from first observed injected fault to first subsequent successful tool call.

- **ToolCallsUsed (count)**
  Number of tool calls made in the episode.

- **BudgetExceeded (0/1)**
  1 if episode terminates due to `budget_exceeded` or `retry_exceeded`.

- **CatastrophicFailure (0/1)**
  1 if the budget is exceeded, the invalid-call threshold is exceeded, or the task fails terminally.

- **PrimaryFault (string)**
  First fault type in task fault plan, or `clean` if no faults.
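
The recovery metrics can be derived from an episode trace roughly as below. The per-step trace schema (`fault_injected`, `ok`) is a simplification for illustration, not the benchmark's actual trace format:

```python
# Illustrative derivation of RecoverySuccess and TimeToRecovery from a trace.
# The step schema here ({"fault_injected": bool, "ok": bool}) is assumed.

def recovery_metrics(trace, task_success):
    fault_steps = [i for i, s in enumerate(trace) if s["fault_injected"]]
    if not fault_steps:                  # clean episode: not a recovery case
        return {"RecoverySuccess": 0, "TimeToRecovery": None}
    recovery = int(task_success)         # succeeded despite >= 1 fault
    first_fault = fault_steps[0]
    # steps from the first injected fault to the first later successful call
    ttr = next((i - first_fault
                for i in range(first_fault + 1, len(trace))
                if trace[i]["ok"]), None)
    return {"RecoverySuccess": recovery, "TimeToRecovery": ttr}

trace = [
    {"fault_injected": False, "ok": True},
    {"fault_injected": True,  "ok": False},   # fault hits at step 1
    {"fault_injected": False, "ok": False},   # retry fails
    {"fault_injected": False, "ok": True},    # recovery at step 3
]
```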

## Aggregate Metrics

Per-agent aggregates are computed as the mean over tasks for every scalar metric, with one exception:

- `TimeToRecovery`: averaged over non-null values only
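
A minimal aggregation sketch, applying a plain mean except for the non-null-only `TimeToRecovery` rule:

```python
# Per-agent aggregation sketch: plain mean for scalar metrics,
# non-null-only mean for TimeToRecovery.

def aggregate(per_task):
    out = {}
    for key in per_task[0].keys():
        vals = [t[key] for t in per_task]
        if key == "TimeToRecovery":
            vals = [v for v in vals if v is not None]
        out[key] = sum(vals) / len(vals) if vals else None
    return out

rows = [
    {"TaskSuccess": 1, "TimeToRecovery": 2},
    {"TaskSuccess": 0, "TimeToRecovery": None},
]
```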

## Budgeted Success Curve and AUC

Compute success under tool-call caps:

- `k ∈ {4, 8, 16, 32}`
- success at cap `k` is the fraction of tasks where:
  - `TaskSuccess = 1`, and
  - `ToolCallsUsed <= k`

AUC is the normalized trapezoidal area under the success-vs-budget curve.
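
A sketch of the curve and its AUC. "Normalized" is assumed here to mean dividing the trapezoidal area by the budget span so the AUC lies in [0, 1]:

```python
# Budgeted success curve and normalized trapezoidal AUC (sketch).
# Normalization by the cap span (32 - 4) is an assumption.

CAPS = [4, 8, 16, 32]

def success_at_cap(tasks, k):
    # fraction of tasks that succeeded using at most k tool calls
    return sum(t["TaskSuccess"] == 1 and t["ToolCallsUsed"] <= k
               for t in tasks) / len(tasks)

def budgeted_auc(tasks, caps=CAPS):
    ys = [success_at_cap(tasks, k) for k in caps]
    area = sum((ys[i] + ys[i + 1]) / 2 * (caps[i + 1] - caps[i])
               for i in range(len(caps) - 1))
    return area / (caps[-1] - caps[0])   # normalize to [0, 1]
```

An agent that solves every task within 4 calls scores an AUC of 1.0 under this convention.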

## Official CLI Commands

### Evaluate one agent

```bash
toolmisusebench eval \
  --dataset data/toolmisusebench_v0_1 \
  --split test_public \
  --agent heuristic \
  --report out/report.json
```

### Reproduce experiment artifacts

```bash
toolmisusebench reproduce-paper \
  --config experiments/configs/v0_1.yaml
```

## Output Artifacts

Typical outputs:

- `report.json` (aggregate + per-task metrics + budgeted success)
- `report.traces.jsonl` (episode traces)
- experiment outputs (from `reproduce-paper`):
  - `results.json`
  - `table1_overall.csv`
  - `table2_fault_breakdown.csv`
  - `figure1_budgeted_success.csv`
  - `figure2_time_to_recovery.csv`

## Quality and Coherence Validation

Dataset generation includes coherence checks ensuring each task is self-consistent:

- domain-state consistency
- valid tool references in criteria/fault plans
- structurally valid success criteria
- schema-drift argument validity checks

Generation fails if coherence checks report issues.
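
As a rough illustration of the tool-reference checks, a validator might look like the sketch below. The task schema fields (`tools`, `success_criteria_tools`, `fault_plan`) are assumptions, not the benchmark's actual dataset schema:

```python
# Hedged sketch of a coherence check for valid tool references in
# criteria and fault plans; the task schema fields are assumptions.

def coherence_issues(task):
    issues = []
    tool_names = {t["name"] for t in task["tools"]}
    for ref in task.get("success_criteria_tools", []):
        if ref not in tool_names:
            issues.append(f"success criteria reference unknown tool: {ref}")
    for fault in task.get("fault_plan", []):
        if fault["tool"] not in tool_names:
            issues.append(f"fault plan targets unknown tool: {fault['tool']}")
    return issues   # generation would fail if this list is non-empty
```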

## Notes for Custom Agents

Custom agents can be evaluated via:

```bash
toolmisusebench eval \
  --dataset data/toolmisusebench_v0_1 \
  --split test_public \
  --agent-module your_pkg.your_agent:YourAgent \
  --agent-kwargs '{"param":"value"}' \
  --report out/custom_agent_report.json
```

Required interface:

- `reset() -> None`
- `act(observation) -> Action | None`
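
A minimal skeleton satisfying this interface might look like the following. The observation keys and action shape shown are placeholders; consult the dataset's schemas for the real formats:

```python
# Minimal custom-agent skeleton matching the required interface.
# Observation keys ("tools", "remaining_budget") and the action dict
# shape are illustrative assumptions.

class YourAgent:
    def __init__(self, param: str = "value"):
        self.param = param        # populated from --agent-kwargs

    def reset(self) -> None:
        """Clear per-episode state before each task."""
        self.history = []

    def act(self, observation):
        """Return the next tool action, or None to stop the episode."""
        self.history.append(observation)
        if observation.get("remaining_budget", 0) <= 0:
            return None           # out of budget: stop gracefully
        tools = observation.get("tools", [])
        # placeholder policy: always propose the first available tool
        return {"tool": tools[0], "args": {}} if tools else None
```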