# CLI-Bench: Benchmarking AI Agents on CLI Tool Orchestration
CLI-Bench is a benchmark for evaluating the ability of AI agents (e.g., LLM-based coding assistants) to use command-line interface tools to accomplish real-world developer tasks. Unlike existing benchmarks that focus on code generation or isolated API calls, CLI-Bench tests whether agents can orchestrate multiple CLI tools end-to-end across realistic workflows spanning project management, DevOps, communication, and data operations.
## Overview
| Property | Value |
|---|---|
| Tasks | 40 |
| Categories | 6 (devops, project_mgmt, communication, data_ops, custom_cli, composite) |
| Tool Adapters | 12 (7 real-world + 5 fictional) |
| Difficulty | 20 easy, 10 medium, 10 hard |
| Format | YAML task definitions with declarative initial/expected state |
## Task Categories

- `devops`: Infrastructure and deployment operations (CI/CD, monitoring, alerts)
- `project_mgmt`: Issue tracking, sprint management, task coordination across platforms
- `communication`: Messaging, notifications, channel management via Slack and email
- `data_ops`: Data pipeline construction, ETL operations, report generation
- `custom_cli`: Tasks using fictional CLIs that cannot be memorized from training data
- `composite`: Multi-tool workflows requiring coordination across 2-3 tools in sequence
## Tool Adapters

### Real-World Tools (7)

| Tool | Domain |
|---|---|
| `gh` | GitHub CLI (issues, PRs, repos, actions) |
| `slack` | Slack CLI (messages, channels, users) |
| `linear` | Linear CLI (issues, projects, cycles) |
| `notion` | Notion CLI (pages, databases, blocks) |
| `google` | Google Workspace (Gmail, Calendar, Drive) |
| `jira` | Jira CLI (issues, sprints, boards) |
| `microsoft` | Microsoft 365 (Teams, Outlook, OneDrive) |
### Fictional Tools (5): Memorization-Proof

| Tool | Domain |
|---|---|
| `kforge` | Artifact registry and deployment management |
| `flowctl` | Workflow engine with approval gates |
| `meshctl` | Service mesh topology and traffic control |
| `datapipe` | Declarative ETL pipeline builder |
| `alertmgr` | Alert routing, escalation, and incident management |
Fictional tools are designed so that agents cannot rely on memorized CLI syntax from pre-training. Agents must read the provided tool adapter specifications and reason about correct usage from first principles.
## Task Format

Each task is a YAML file containing:

```yaml
id: cb-001
title: "List open issues in a GitHub repo"
difficulty: easy
category: project_mgmt
description: |
  Natural language description of the task objective.
tools_provided:
  - gh
initial_state:
  gh:
    repos:
      acme-corp/web-platform:
        issues:
          - number: 42
            title: "Fix login redirect loop"
            state: open
            assignee: alice
expected_state:
  gh:
    command_history:
      - pattern: "gh issue list.*--repo acme-corp/web-platform.*--state open"
    output_contains:
      - "42"
scoring:
  outcome: 0.6
  efficiency: 0.2
  recovery: 0.2
```
- `initial_state`: The simulated environment state before the agent acts.
- `expected_state`: Declarative assertions on command patterns, state mutations, and expected outputs.
- `scoring`: Per-task weight overrides for the three evaluation dimensions.
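The command-pattern assertions in `expected_state` are ordinary regular expressions matched against the agent's executed commands. A minimal sketch of such a checker (the function name and data layout here are illustrative, not the benchmark's actual harness):

```python
import re

def check_command_history(history: list[str], patterns: list[str]) -> bool:
    """Return True if every expected pattern matches at least one executed command."""
    return all(
        any(re.search(pattern, cmd) for cmd in history)
        for pattern in patterns
    )

# Example: the agent's command log vs. the expected_state pattern from the task above.
executed = ["gh issue list --repo acme-corp/web-platform --state open"]
expected = [r"gh issue list.*--repo acme-corp/web-platform.*--state open"]
print(check_command_history(executed, expected))  # → True
```

Using `re.search` (rather than `re.match`) lets a pattern match anywhere in the command line, which is why the task patterns use `.*` between flags instead of pinning their order.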
## Evaluation Metrics
CLI-Bench scores agents along three dimensions:
| Metric | Weight (default) | Description |
|---|---|---|
| Outcome | 0.6 | Did the agent achieve the desired end state? Verified via declarative state assertions. |
| Efficiency | 0.2 | Did the agent use a reasonable number of commands? Penalizes excessive retries or unnecessary exploration. |
| Recovery | 0.2 | Did the agent handle errors or unexpected states gracefully? Tests resilience to failed commands and ambiguous outputs. |
The aggregate score per task is a weighted sum. The benchmark also reports pass^k (the fraction of tasks solved within k attempts), providing a measure of reliability across repeated runs.
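The scoring rule described above can be sketched in a few lines; the function names are illustrative, and pass^k is computed here exactly as the text defines it (a task counts as passed if any of its first k attempts succeeds):

```python
def task_score(outcome: float, efficiency: float, recovery: float,
               weights: tuple[float, float, float] = (0.6, 0.2, 0.2)) -> float:
    """Weighted sum over the three evaluation dimensions (default weights shown)."""
    w_o, w_e, w_r = weights
    return w_o * outcome + w_e * efficiency + w_r * recovery

def pass_k(attempts: list[list[bool]], k: int) -> float:
    """attempts[i] is the success/failure record of task i across repeated runs.
    Returns the fraction of tasks solved at least once within the first k attempts."""
    solved = [any(runs[:k]) for runs in attempts]
    return sum(solved) / len(solved)

print(round(task_score(1.0, 0.5, 1.0), 3))  # → 0.9
attempts = [[False, True, True], [True, True, True], [False, False, False]]
print(round(pass_k(attempts, 2), 3))  # 2 of 3 tasks solved within 2 attempts → 0.667
```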
## Difficulty Levels

- **Easy** (20 tasks): Single-tool, single-command operations with straightforward state assertions.
- **Medium** (10 tasks): Single-tool multi-step workflows or tasks requiring conditional logic.
- **Hard** (10 tasks): Multi-tool composite workflows requiring sequential orchestration, error recovery, and cross-tool state propagation.
## Usage

### With the `datasets` library

```python
from datasets import load_dataset

dataset = load_dataset("ChengyiX/CLI-Bench")
```
### Loading YAMLs directly

```python
import yaml
from pathlib import Path

tasks = []
for task_file in sorted(Path("data/tasks").glob("cb-*.yaml")):
    with open(task_file) as f:
        tasks.append(yaml.safe_load(f))

print(f"Loaded {len(tasks)} tasks")
print(f"Categories: {set(t['category'] for t in tasks)}")
```
### Loading tool adapter specifications

```python
import yaml
from pathlib import Path

adapters = {}
for adapter_file in Path("tool_adapters").glob("*.yaml"):
    with open(adapter_file) as f:
        adapters[adapter_file.stem] = yaml.safe_load(f)

print(f"Loaded {len(adapters)} tool adapters")
```
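Once tasks and adapter specs are both loaded, a simple sanity check is that every tool a task lists under `tools_provided` has a matching adapter spec. A sketch using in-memory stand-ins for the loaded data (the task entries shown are illustrative, not actual benchmark content):

```python
# Stand-ins for the `tasks` list and `adapters` dict loaded above.
tasks = [
    {"id": "cb-001", "tools_provided": ["gh"]},
    {"id": "cb-002", "tools_provided": ["kforge", "alertmgr"]},
]
adapter_names = {"gh", "slack", "kforge", "alertmgr"}

# Map each task id to any tools it requests that have no adapter spec.
missing = {
    task["id"]: [t for t in task["tools_provided"] if t not in adapter_names]
    for task in tasks
    if any(t not in adapter_names for t in task["tools_provided"])
}
print(missing or "All tasks reference known adapters")
```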
## Repository Structure

```
data/
  metadata.yaml      # Benchmark metadata and configuration
  tasks/
    cb-001.yaml      # Individual task definitions
    cb-002.yaml
    ...
    cb-040.yaml
tool_adapters/
  gh.yaml            # GitHub CLI adapter spec
  slack.yaml         # Slack CLI adapter spec
  ...
  kforge.yaml        # Fictional: artifact management
  flowctl.yaml       # Fictional: workflow engine
  meshctl.yaml       # Fictional: service mesh
  datapipe.yaml      # Fictional: ETL pipelines
  alertmgr.yaml      # Fictional: alert management
```
## Citation

```bibtex
@misc{cli-bench-2026,
  title={CLI-Bench: Benchmarking AI Agents on Command-Line Tool Orchestration},
  author={Chengyi Xu},
  year={2026},
  url={https://github.com/minervacap2022/CLI-Bench}
}
```
## Links
- GitHub: https://github.com/minervacap2022/CLI-Bench
- License: Apache 2.0