Upload README.md with huggingface_hub

README.md CHANGED
@@ -1,196 +1,237 @@
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- benchmark
- cli
- tool-use
pretty_name: "CLI-Bench"
size_categories:
- n<1K
---

# CLI-Bench: Benchmarking AI Agents on Command-Line Tool Orchestration

## Benchmark Statistics

| Dimension | Value |
|---|---|
| **Tasks** | 40 |
| **Categories** | 6 (devops, project_mgmt, communication, data_ops, custom_cli, composite) |
| **Tool Adapters** | 12 (7 real-world + 5 fictional) |
| **Difficulty** | 20 easy, 10 medium, 10 hard |
| **Format** | YAML task definitions with declarative initial/expected state |
|
| 32 |
-
|
| 33 |
|
| 34 |
-
|
| 35 |
-
- **project_mgmt**: Issue tracking, sprint management, task coordination across platforms
|
| 36 |
-
- **communication**: Messaging, notifications, channel management via Slack and email
|
| 37 |
-
- **data_ops**: Data pipeline construction, ETL operations, report generation
|
| 38 |
-
- **custom_cli**: Tasks using fictional CLIs that cannot be memorized from training data
|
| 39 |
-
- **composite**: Multi-tool workflows requiring coordination across 2-3 tools in sequence
|
| 40 |
-
|
## Tool Adapters

### Real-World Tools (7)

| Tool | Domain |
|---|---|
| `gh` | GitHub CLI (issues, PRs, repos, actions) |
| `slack` | Slack CLI (messages, channels, users) |
| `linear` | Linear CLI (issues, projects, cycles) |
| `notion` | Notion CLI (pages, databases, blocks) |
| `google` | Google Workspace (Gmail, Calendar, Drive) |
| `jira` | Jira CLI (issues, sprints, boards) |
| `microsoft` | Microsoft 365 (Teams, Outlook, OneDrive) |

### Fictional Tools (5) — Memorization-Proof

| Tool | Domain |
|---|---|
| `kforge` | Artifact registry and deployment management |
| `flowctl` | Workflow engine with approval gates |
| `meshctl` | Service mesh topology and traffic control |
| `datapipe` | Declarative ETL pipeline builder |
| `alertmgr` | Alert routing, escalation, and incident management |

Fictional tools are designed so that agents **cannot rely on memorized CLI syntax** from pre-training. Agents must read the provided tool adapter specifications and reason about correct usage from first principles.

## Task Format

Each task is a YAML file containing:

```yaml
id: cb-001
title: "List open issues in a GitHub repo"
difficulty: easy
category: project_mgmt
description: |
  Natural language description of the task objective.
tools_provided:
  - gh
initial_state:
  gh:
    repos:
      acme-corp/web-platform:
        issues:
          - number: 42
            title: "Fix login redirect loop"
            state: open
            assignee: alice
expected_state:
  gh:
    command_history:
      - pattern: "gh issue list.*--repo acme-corp/web-platform.*--state open"
    output_contains:
      - "42"
scoring:
  outcome: 0.6
  efficiency: 0.2
  recovery: 0.2
```
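The `pattern` entries under `command_history` read as regular expressions matched against the commands the agent issued. A minimal sketch of that check; the matching semantics are an assumption inferred from the `.*` syntax in the example:

```python
import re

# Pattern taken verbatim from the task example above
pattern = r"gh issue list.*--repo acme-corp/web-platform.*--state open"

# Hypothetical command history from an agent run
history = [
    "gh auth status",
    "gh issue list --repo acme-corp/web-platform --state open",
]

# The assertion passes if any issued command matches the pattern
matched = any(re.search(pattern, cmd) for cmd in history)
print(matched)  # True
```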

## Scoring

| Component | Weight | Description |
|---|---|---|
| **Outcome** | 0.6 | Did the agent achieve the desired end state? Verified via declarative state assertions. |
| **Efficiency** | 0.2 | Did the agent use a reasonable number of commands? Penalizes excessive retries or unnecessary exploration. |
| **Recovery** | 0.2 | Did the agent handle errors or unexpected states gracefully? Tests resilience to failed commands and ambiguous outputs. |

## Difficulty Levels

- **Hard (10 tasks)**: Multi-tool composite workflows requiring sequential orchestration, error recovery, and cross-tool state propagation.

## Loading Tasks

```python
import yaml
from pathlib import Path

# Collect every task definition
tasks = []
for task_file in sorted(Path("data/tasks").glob("cb-*.yaml")):
    with open(task_file) as f:
        tasks.append(yaml.safe_load(f))

print(f"Categories: {set(t['category'] for t in tasks)}")
```

## Loading Tool Adapters

```python
import yaml
from pathlib import Path

# Load every adapter specification, keyed by file stem
adapters = {}
for adapter_file in Path("tool_adapters").glob("*.yaml"):
    with open(adapter_file) as f:
        adapter = yaml.safe_load(f)
        adapters[adapter_file.stem] = adapter
```

## Repository Structure

```
flowctl.yaml    # Fictional: workflow engine
meshctl.yaml    # Fictional: service mesh
datapipe.yaml   # Fictional: ETL pipelines
alertmgr.yaml   # Fictional: alert management
```

## Citation

```bibtex
@misc{cli-bench-2026,
  title={CLI-Bench: Benchmarking AI Agents on Command-Line Tool Orchestration},
  author={
  year={2026},
  url={https://github.com/minervacap2022/CLI-Bench}
}
```

## License

- **License**: Apache 2.0
---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/tasks.jsonl
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  - name: description
    dtype: string
  - name: tools_provided
    dtype: string
  - name: initial_state
    dtype: string
  - name: expected_state
    dtype: string
  - name: scoring
    dtype: string
  - name: max_turns
    dtype: int64
  - name: optimal_commands
    dtype: int64
  - name: timeout_seconds
    dtype: int64
  splits:
  - name: test
    num_examples: 40
license: apache-2.0
task_categories:
- text-generation
tags:
- benchmark
- agent-evaluation
- cli
- tool-use
pretty_name: CLI-Bench
size_categories:
- n<1K
---
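Because the nested task fields above are declared as `dtype: string`, rows loaded from `data/tasks.jsonl` carry their structured fields as JSON-encoded strings that need decoding. A sketch with an illustrative record (the values are placeholders, not real dataset rows; in practice the rows would come from `datasets.load_dataset`):

```python
import json

# Illustrative row shaped like the `dataset_info` features above;
# not an actual record from data/tasks.jsonl.
row = {
    "id": "cb-001",
    "title": "List open issues in a GitHub repo",
    "difficulty": "easy",
    "category": "project_mgmt",
    "scoring": '{"outcome": 0.6, "efficiency": 0.2, "recovery": 0.2}',
    "max_turns": 5,
    "optimal_commands": 1,
    "timeout_seconds": 60,
}

# Structured fields are stored as strings (dtype: string), so decode them:
scoring = json.loads(row["scoring"])
print(scoring["outcome"])  # 0.6
```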

# CLI-Bench: Benchmarking AI Agents on Command-Line Tool Orchestration

[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0) · [Python](https://www.python.org/downloads/) · [Dataset on Hugging Face](https://huggingface.co/datasets/ChengyiX/CLI-Bench) · [Paper](paper/main.pdf)

## Abstract

CLI-Bench is an evaluation benchmark for measuring AI agents' ability to learn and use command-line interface (CLI) tools to complete real-world tasks. Unlike existing benchmarks that test general coding ability or narrow tool-use scenarios, CLI-Bench evaluates **tool-agnostic CLI orchestration** -- the capacity to read tool documentation, plan multi-step workflows, execute commands, interpret outputs, recover from errors, and achieve desired end states across diverse service domains.

The benchmark comprises 40 tasks spanning six categories (DevOps, project management, communication, data operations, custom CLI, and composite workflows) across 12 CLI tools. Tasks are grounded in stateful mock backends that simulate real services (GitHub, Slack, Linear, Notion, Google Workspace, Jira) with deterministic execution, enabling reproducible evaluation without live API dependencies. Each tool is defined via a declarative YAML adapter specification, making the benchmark trivially extensible to new tools.

A key contribution is the inclusion of **five fictional CLI tools** (kforge, flowctl, meshctl, datapipe, alertmgr) that no language model has encountered during training. These tools follow realistic CLI conventions but implement novel domain semantics, providing a memorization-proof evaluation of genuine tool-learning capability rather than pattern recall. Evaluation uses state-diffing against expected outcomes, efficiency measurement against optimal command counts, error recovery analysis, and a pass^k consistency metric adapted from tau-bench.

## Key Features

- **Tool-agnostic via YAML adapters** -- Any CLI tool can be added by writing a YAML specification and a mock backend. No hardcoded tool knowledge in the harness.
- **Fictional tools for memorization-proof evaluation** -- Five novel CLI tools (kforge, flowctl, meshctl, datapipe, alertmgr) test genuine tool learning, not training data recall.
- **Multi-turn execution** -- Agents operate in a realistic loop: observe task and tool docs, issue commands, receive stdout/stderr, iterate until completion or timeout.
- **State-diffing evaluation** -- Scoring compares actual service state against expected state using deep recursive comparison with partial credit (0.0--1.0).
- **pass^k consistency metric** -- Measures reliability across k independent runs, not just peak performance. An agent must succeed on all k runs to score pass^k = 1.0.
- **Deterministic mock backends** -- All 7 service simulators (GitHub, Slack, Linear, Notion, Google, Jira, plus a generic fictional backend) are fully stateful and deterministic.

## Benchmark Statistics

| Dimension | Value |
|-----------|-------|
| Total tasks | 40 |
| Easy / Medium / Hard | 20 / 10 / 10 |
| Real-world CLI tools | 7 (gh, slack, linear, notion, google, jira, microsoft) |
| Fictional CLI tools | 5 (kforge, flowctl, meshctl, datapipe, alertmgr) |
| Task categories | 6 (devops, project_mgmt, communication, data_ops, custom_cli, composite) |
| Commands per tool | >= 5 |
| Max turns per task | 3--15 |

### Evaluation Metrics

| Metric | Description |
|--------|-------------|
| **Outcome** (default weight: 0.6) | State-diff score: fraction of expected state matched after execution |
| **Efficiency** (default weight: 0.2) | `min(1.0, optimal_commands / actual_commands)` |
| **Recovery** (default weight: 0.2) | 1.0 if errors encountered and recovered; 0.5 if no errors; 0.0 if errors with no recovery |
| **pass^k** | 1.0 if outcome >= 0.5 on all k runs, else 0.0. Measures consistency. |
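The three weighted metrics combine into a single task score. A sketch of the arithmetic using the default weights from the table above (the function name is illustrative, not the harness API):

```python
def task_score(outcome: float, efficiency: float, recovery: float) -> float:
    """Weighted sum of the per-task metrics with the default weights."""
    return 0.6 * outcome + 0.2 * efficiency + 0.2 * recovery

# Perfect end state, twice the optimal command count, no errors encountered:
print(round(task_score(outcome=1.0, efficiency=0.5, recovery=0.5), 3))  # 0.8
```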

## Installation

```bash
pip install git+https://github.com/minervacap2022/CLI-Bench.git

# Or clone and install in development mode
git clone https://github.com/minervacap2022/CLI-Bench.git
cd CLI-Bench
pip install -e ".[dev]"
```

## Quick Start

```python
import asyncio
from pathlib import Path
from cli_bench.agents.dummy import DummyAgent
from cli_bench.harness.benchmark import BenchmarkRunner

async def main():
    agent = DummyAgent()
    runner = BenchmarkRunner(
        tasks_dir=Path("data/tasks"),
        agent=agent,
        k=1,
    )
    report = await runner.run_all()
    print(f"Overall score: {report.overall_score:.3f}")
    print(f"Pass^k: {report.overall_pass_k:.3f}")

asyncio.run(main())
```

Or via the CLI:

```bash
python scripts/run_benchmark.py --agent dummy --k 1
```

## Task Categories

| Category | Description | Example Task |
|----------|-------------|--------------|
| `devops` | CI/CD, deployment, infrastructure management | Trigger a deployment pipeline and verify status |
| `project_mgmt` | Issue tracking, sprint planning, team coordination | Create and assign issues across projects |
| `communication` | Messaging, notifications, search | Send targeted messages based on channel context |
| `data_ops` | ETL pipelines, data transformation, monitoring | Build a data pipeline from source to sink |
| `custom_cli` | Fictional tool operations (memorization-proof) | Manage artifacts in kforge registry |
| `composite` | Multi-tool workflows spanning categories | Create issue in Linear, notify team in Slack, schedule review in Calendar |

## Evaluation Metrics

### Outcome (State Diffing)

The primary metric compares the actual state of mock backends against the task's expected state using deep recursive comparison. Dict keys are checked individually with partial credit; list membership is verified order-independently. The resulting score is a float in [0.0, 1.0].
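A minimal sketch of such a partial-credit state diff, illustrating the idea rather than reproducing the harness's actual implementation:

```python
def state_diff_score(expected, actual) -> float:
    """Fraction of `expected` matched in `actual`: dict keys scored
    individually with partial credit; list membership checked
    order-independently."""
    if isinstance(expected, dict):
        if not isinstance(actual, dict) or not expected:
            return 1.0 if expected == actual else 0.0
        scores = [state_diff_score(v, actual.get(k)) for k, v in expected.items()]
        return sum(scores) / len(scores)
    if isinstance(expected, list):
        if not isinstance(actual, list) or not expected:
            return 1.0 if expected == actual else 0.0
        # Order-independent: each expected item gets credit for its best match
        return sum(max((state_diff_score(item, a) for a in actual), default=0.0)
                   for item in expected) / len(expected)
    return 1.0 if expected == actual else 0.0

# One of two expected keys matches, so the score is 0.5:
print(state_diff_score({"state": "open", "assignee": "alice"},
                       {"state": "open", "assignee": "bob"}))  # 0.5
```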

### Efficiency

Measures command economy: `min(1.0, optimal_commands / actual_commands)`. An agent that uses exactly the optimal number of commands scores 1.0; using twice as many scores 0.5.
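The formula reads directly as code. A sketch with illustrative names:

```python
def efficiency(optimal_commands: int, actual_commands: int) -> float:
    # Capped at 1.0: beating the reference count cannot exceed a perfect score.
    return min(1.0, optimal_commands / actual_commands)

print(efficiency(3, 6))  # 0.5 (twice the optimal count)
print(efficiency(3, 2))  # 1.0 (capped)
```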

### Recovery

Evaluates error handling:

- **1.0**: Errors encountered during execution AND the agent successfully recovered (issued a successful command after the last error)
- **0.5**: No errors encountered (neutral baseline)
- **0.0**: Errors encountered but the agent failed to recover
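The three cases above can be sketched over an ordered command log. The log shape here is illustrative, not the harness's real data structure:

```python
def recovery_score(events: list[dict]) -> float:
    """Score error handling from an ordered command log of {"ok": bool} events."""
    errors = [i for i, e in enumerate(events) if not e["ok"]]
    if not errors:
        return 0.5  # neutral baseline: no errors encountered
    # Recovered if any successful command follows the last error
    recovered = any(e["ok"] for e in events[errors[-1] + 1:])
    return 1.0 if recovered else 0.0

log = [{"ok": True}, {"ok": False}, {"ok": True}]
print(recovery_score(log))  # 1.0 (error followed by a successful command)
```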

### pass^k

Adapted from [tau-bench](https://github.com/sierra-research/tau-bench). Given k independent runs of the same task, pass^k = 1.0 only if **all** k runs achieve outcome >= 0.5. This measures consistency and reliability, not just peak performance.
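The definition above fits in a few lines. A sketch (function name illustrative):

```python
def pass_k(outcomes: list[float], threshold: float = 0.5) -> float:
    """pass^k over k independent runs: 1.0 only if every run clears the bar."""
    return 1.0 if all(o >= threshold for o in outcomes) else 0.0

print(pass_k([0.9, 0.8, 0.6]))  # 1.0 (consistent across all 3 runs)
print(pass_k([1.0, 1.0, 0.4]))  # 0.0 (one failure breaks pass^3)
```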

## Adding Custom Tools

### 1. Write a YAML Tool Adapter

Create `cli_bench/tool_adapters/<tool_name>.yaml`:

```yaml
name: my-tool
description: "Description of the tool"
binary: mytool
auth:
  type: env_var
  key: MYTOOL_API_KEY
commands:
  - name: resource list
    description: "List all resources"
    args:
      - name: filter
        type: string
        required: false
        description: "Filter expression"
    output_format: json
    side_effects: false
  - name: resource create
    description: "Create a new resource"
    args:
      - name: name
        type: string
        required: true
        description: "Resource name"
    output_format: json
    side_effects: true
```
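A quick sanity check that an adapter file parses and carries the fields shown above can be done with PyYAML. This is a sketch: the required-field list is read off the example, not a published schema, and `check_adapter` is an illustrative helper, not part of the harness:

```python
import yaml

# Required fields inferred from the adapter example above (an assumption)
REQUIRED_TOP_LEVEL = {"name", "description", "binary", "commands"}

def check_adapter(text: str) -> dict:
    """Parse an adapter spec and verify the fields used in the example."""
    adapter = yaml.safe_load(text)
    missing = REQUIRED_TOP_LEVEL - adapter.keys()
    if missing:
        raise ValueError(f"adapter missing fields: {sorted(missing)}")
    for cmd in adapter["commands"]:
        assert "name" in cmd and "description" in cmd
    return adapter

spec = """
name: my-tool
description: "Description of the tool"
binary: mytool
commands:
  - name: resource list
    description: "List all resources"
    output_format: json
    side_effects: false
"""
adapter = check_adapter(spec)
print(adapter["binary"], len(adapter["commands"]))  # mytool 1
```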

### 2. Implement a Mock Backend

For real tools, subclass `BaseMockBackend`. For fictional tools, use `FictionalMockBackend`, which provides generic CRUD operations automatically:

```python
from cli_bench.mock_backends.fictional import FictionalMockBackend

backend = FictionalMockBackend(
    initial_state={"resources": [{"id": "res-1", "name": "alpha"}]},
    tool_name="mytool",
)
```

### 3. Write Tasks

Create task YAMLs in `data/tasks/` following the `BenchTask` schema (see existing tasks for examples).

## Leaderboard

Results and model comparisons are hosted on the Hugging Face Hub:

**[https://huggingface.co/datasets/ChengyiX/CLI-Bench](https://huggingface.co/datasets/ChengyiX/CLI-Bench)**
## Citation

If you use CLI-Bench in your research, please cite:

```bibtex
@misc{cli-bench-2026,
  title={CLI-Bench: Benchmarking AI Agents on Command-Line Tool Orchestration},
  author={{KLIK Team}},
  year={2026},
  url={https://github.com/minervacap2022/CLI-Bench},
}
```

## License

This project is licensed under the Apache License 2.0. See [LICENSE](LICENSE) for details.