mdarahmanxAI committed
Commit 6631fbc · verified · 1 Parent(s): 649c099

CompToolBench v1.0: 200 tasks, 106 tools, 18 models

Files changed (3)
  1. README.md +217 -0
  2. test.jsonl +0 -0
  3. test.parquet +3 -0
README.md ADDED
@@ -0,0 +1,217 @@
---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: level
    dtype: string
  - name: prompt
    dtype: string
  - name: available_tools
    sequence: string
  - name: expected_trace
    dtype: string
  - name: expected_final_answer
    dtype: string
  - name: num_steps
    dtype: int32
  - name: num_tools_offered
    dtype: int32
  - name: category
    dtype: string
  - name: pattern
    dtype: string
  splits:
  - name: test
    num_examples: 200
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- tool-use
- function-calling
- benchmark
- compositional
- llm-evaluation
- agents
- dag
- composition-gap
pretty_name: CompToolBench
size_categories:
- n<1K
---

# CompToolBench: Measuring Compositional Tool-Use Generalization in LLMs

## Dataset Summary

**CompToolBench** is a benchmark for evaluating how well large language models generalize from simple, single-tool calls to complex, multi-step compositional tool use. It contains **200 tasks** spanning four composition levels of increasing structural complexity, built on top of **106 deterministic tool simulators** covering 9 functional categories.

The key insight behind CompToolBench is the **composition gap**: models that can reliably call individual tools often fail dramatically when those same tools must be composed into chains, parallel fan-outs, or directed acyclic graphs (DAGs). CompToolBench quantifies this gap with fine-grained diagnostic metrics.

### Key Features

- **4 composition levels**: single calls (L0), sequential chains (L1), parallel fan-outs (L2), and full DAGs with branching and merging (L3)
- **106 deterministic tool simulators**: no external API dependencies, fully reproducible
- **Fine-grained scoring**: tool selection accuracy, argument accuracy, data-flow correctness, and completion rate
- **18-model leaderboard**: spanning cloud APIs (Mistral, Cohere, Groq, Cerebras, OpenRouter) and local models (Ollama)
- **Composition gap metric**: directly measures how much accuracy degrades as structural complexity increases

## Dataset Structure

Each example in the dataset contains the following fields:

| Field | Type | Description |
|---|---|---|
| `task_id` | `string` | Unique identifier (e.g., `L0_node_0001`, `L3_dag_0153`) |
| `level` | `string` | Composition level: `L0_node`, `L1_chain`, `L2_parallel`, or `L3_dag` |
| `prompt` | `string` | Natural-language instruction given to the model |
| `available_tools` | `list[string]` | Tool names provided to the model (includes distractors) |
| `expected_trace` | `string` | JSON-serialized ground-truth execution plan with steps, dependencies, and arguments |
| `expected_final_answer` | `string` | JSON-serialized expected output |
| `num_steps` | `int32` | Number of tool calls in the expected trace |
| `num_tools_offered` | `int32` | Number of tools offered (correct + distractors) |
| `category` | `string` | Functional category of the task |
| `pattern` | `string` | Composition pattern (e.g., `retrieve-transform`, `fan-out-compare`) |

### Expected Trace Structure

Each step in `expected_trace.steps` contains:

- `step_id`: Step identifier (e.g., `step_1`)
- `tool_name`: Which tool to call
- `arguments`: JSON-serialized expected arguments
- `depends_on`: List of step IDs this step depends on (defines the DAG structure)
- `output_key`: Variable name for the step's output (used by downstream steps)
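
For illustration, here is a hypothetical L1 chain trace that matches the schema above. The tool names, arguments, and the `$docs` output-reference convention are invented for this sketch, not taken from the dataset:

```python
import json

# Hypothetical two-step chain trace. Tool names, argument values, and the
# "$docs" placeholder are invented for illustration only.
expected_trace = {
    "steps": [
        {
            "step_id": "step_1",
            "tool_name": "search_documents",
            "arguments": json.dumps({"query": "quarterly revenue"}),
            "depends_on": [],
            "output_key": "docs",
        },
        {
            "step_id": "step_2",
            "tool_name": "summarize_text",
            "arguments": json.dumps({"text": "$docs"}),
            "depends_on": ["step_1"],
            "output_key": "summary",
        },
    ]
}

# The depends_on edges define the DAG: step_2 consumes step_1's output,
# so step_1 must execute first.
order = [s["step_id"] for s in expected_trace["steps"]]
print(order)  # ['step_1', 'step_2']
```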

## Composition Levels

| Level | Name | Description | Tasks | Avg Steps | Avg Tools Offered |
|---|---|---|---|---|---|
| **L0** | Single Node | One tool call, no composition | 48 | 1.0 | 4.0 |
| **L1** | Chain | Sequential pipeline (A -> B) | 64 | 2.0 | 5.0 |
| **L2** | Parallel | Independent fan-out (A \|\| B \|\| C) | 40 | 2.8 | 4.2 |
| **L3** | DAG | Full directed acyclic graph with branching and merging | 48 | 4.4 | 6.6 |

### Task Categories

Tasks cover 9 functional categories: `chain`, `communication`, `computation`, `dag`, `external_services`, `information_retrieval`, `parallel`, `text_processing`, and `time_scheduling`.

### Composition Patterns

Over 40 distinct composition patterns are represented, including `retrieve-transform`, `fan-out-compare`, `chain-fanout-merge-chain`, `parallel-merge-chain`, and `true-dag-parallel-reads-merge`. See the paper for full details.

## Leaderboard

Results from evaluating 18 models (10 cloud, 8 local) on all 200 tasks. Models are ranked by overall accuracy. All models achieve 100% tool *selection* accuracy (when they issue a call, they name the correct tool).

### Cloud Models

| Model | Provider | L0 | L1 | L2 | L3 | Overall | Delta |
|---|---|---|---|---|---|---|---|
| Llama 3.1 8B | Groq | 27.1 | **75.8** | 87.1 | **76.0** | **66.4** | **-48.9** |
| Command A | Cohere | **45.8** | 62.7 | 87.8 | 40.8 | 58.4 | 5.1 |
| Mistral Small | Mistral | **45.8** | 59.7 | 87.6 | 40.9 | 57.5 | 4.9 |
| Command R+ | Cohere | 43.8 | 57.5 | **88.0** | 40.3 | 56.2 | 3.4 |
| Llama 3.1 8B | Cerebras | 31.2 | 66.1 | 81.2 | 46.4 | 56.0 | -15.1 |
| Mistral Large | Mistral | 39.6 | 59.5 | 87.9 | 38.5 | 55.4 | 1.1 |
| Mistral Medium | Mistral | 43.8 | 57.5 | 87.9 | 36.3 | 55.2 | 7.4 |
| Gemini 2.0 Flash | OpenRouter | 39.6 | 52.4 | 85.7 | 39.0 | 52.8 | 0.6 |
| GPT-OSS 120B | Cerebras | **45.8** | 56.3 | 56.1 | 29.0 | 47.2 | 16.8 |
| Llama 4 Scout 17B | Groq | 37.5 | 49.6 | 55.8 | 7.0 | 37.7 | 30.5 |

### Local Models (Ollama)

| Model | Provider | L0 | L1 | L2 | L3 | Overall | Delta |
|---|---|---|---|---|---|---|---|
| Granite4 3B | Ollama | **45.8** | 57.3 | 56.1 | 30.2 | 47.8 | 15.6 |
| Granite4 1B | Ollama | 41.7 | 56.3 | 55.9 | 29.9 | 46.4 | 11.8 |
| Mistral 7B | Ollama | 43.8 | 57.7 | 49.2 | 30.5 | 46.1 | 13.3 |
| Llama 3.1 8B | Ollama | 39.6 | 56.7 | 56.1 | 29.5 | 45.9 | 10.1 |
| Mistral Nemo 12B | Ollama | 37.5 | 58.4 | 51.0 | 31.8 | 45.5 | 5.7 |
| Qwen 2.5 7B | Ollama | 39.6 | 56.7 | 53.8 | 25.8 | 44.6 | 13.8 |
| Mistral Small 24B | Ollama | 37.5 | 51.1 | 47.7 | 22.6 | 40.3 | 14.9 |
| Qwen3 8B | Ollama | 35.4 | 52.0 | 36.9 | 21.8 | 37.7 | 13.7 |

### Aggregate Statistics

| Segment | L0 | L1 | L2 | L3 | Overall | Delta |
|---|---|---|---|---|---|---|
| *All models avg.* | 40.0 | 58.0 | 67.3 | 34.2 | 49.8 | 5.8 |
| *Cloud avg.* | 40.0 | 59.7 | 80.5 | 39.4 | 54.3 | 0.6 |
| *Local avg.* | 40.1 | 55.8 | 50.8 | 27.8 | 44.3 | 12.3 |

**Delta** = L0 accuracy minus L3 accuracy (positive means degradation at higher composition levels). Models marked with a dagger in the paper exhibit a *Selection Gap*, where L0 accuracy is lower than the average of L1-L3.

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("mdarahmanxAI/comptoolbench", split="test")

# Browse tasks by composition level
l3_tasks = dataset.filter(lambda x: x["level"] == "L3_dag")
print(f"L3 DAG tasks: {len(l3_tasks)}")
print(l3_tasks[0]["prompt"])
```

### Evaluating a Model

CompToolBench evaluates models by comparing their tool-call traces against the expected trace. The evaluation harness is available in the [GitHub repository](https://github.com/ronyrahmaan/comptoolbench).

```python
import json

for task in dataset:
    # 1. Build the tool-use prompt from task["prompt"] and task["available_tools"]
    # 2. Send it to your model along with the tool schemas
    # 3. Compare the model's tool calls against:
    trace = json.loads(task["expected_trace"])
    answer = json.loads(task["expected_final_answer"])

    # Scoring dimensions:
    # - Tool selection: did the model call the right tools?
    # - Argument accuracy: were the arguments correct?
    # - Data flow: did outputs flow correctly between steps?
    # - Completion: did all required steps execute?
```
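
The official harness implements all four dimensions; as a minimal sketch of just the first one, tool selection for a single task could be scored as a multiset overlap. The `predicted_calls` format (an ordered list of tool names the model emitted) is an assumption of this sketch, not the harness's actual interface:

```python
# Illustrative only: a minimal tool-selection score for one task.
# `predicted_calls` (a flat list of tool names the model called) is an
# assumed format; the official harness may differ.
from collections import Counter

def tool_selection_score(expected_steps, predicted_calls):
    """Fraction of expected tool calls matched by the model's calls
    (multiset overlap, ignoring order)."""
    expected = Counter(step["tool_name"] for step in expected_steps)
    predicted = Counter(predicted_calls)
    matched = sum((expected & predicted).values())  # per-tool min counts
    return matched / max(sum(expected.values()), 1)

steps = [{"tool_name": "search_documents"}, {"tool_name": "summarize_text"}]
print(tool_selection_score(steps, ["search_documents", "summarize_text"]))  # 1.0
print(tool_selection_score(steps, ["search_documents"]))                    # 0.5
```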

### Scoring Metrics

| Metric | Description |
|---|---|
| **Overall Accuracy** | Weighted combination of all sub-metrics |
| **Tool Selection** | Whether the model called the correct tool names |
| **Argument Accuracy** | Whether arguments matched expected values |
| **Data Flow Accuracy** | Whether inter-step data dependencies were satisfied |
| **Completion Rate** | Fraction of expected steps that were executed |
| **Composition Gap** | L0 accuracy minus Lk accuracy (measures degradation) |
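
The Composition Gap is plain arithmetic over per-level accuracies; the sketch below reproduces the Delta column of the leaderboard using the cloud-average row as input:

```python
# Composition gap = L0 accuracy minus Lk accuracy.
# The per-level numbers are the "Cloud avg." row from the leaderboard.
per_level = {"L0": 40.0, "L1": 59.7, "L2": 80.5, "L3": 39.4}

def composition_gap(acc, k):
    """Gap between single-call accuracy (L0) and level-k accuracy;
    positive values mean accuracy degrades at higher composition levels."""
    return round(acc["L0"] - acc[f"L{k}"], 1)

print(composition_gap(per_level, 3))  # 0.6, matching the Delta column
```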

## Citation

If you use CompToolBench in your research, please cite:

```bibtex
@article{rahmaan2026comptoolbench,
  title={CompToolBench: Measuring Compositional Tool-Use Generalization in Large Language Models},
  author={Rahmaan, Rony},
  journal={arXiv preprint},
  year={2026}
}
```

## License

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license. You are free to share and adapt the dataset for any purpose, provided you give appropriate credit.

## Links

- **Paper**: arXiv (coming soon)
- **Code**: [GitHub](https://github.com/ronyrahmaan/comptoolbench)
- **Demo**: [HuggingFace Spaces](https://huggingface.co/spaces/mdarahmanxAI/comptoolbench-demo)
test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
test.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0bf247509cb214467eee1b6ec4a92367727fcfbdfffd0047186eadc342ce1f0d
size 45118