---
license: apache-2.0
task_categories:
  - text-generation
  - question-answering
  - other
language:
  - en
tags:
  - benchmark
  - leaderboard
  - agent-benchmark
  - llm-benchmark
  - web-agents
  - browser-agent
  - browser-automation
  - ai-agent
  - evaluation
  - real-world-tasks
  - web-navigation
  - task-completion
  - clawbench
  - multimodal
pretty_name: 'ClawBench: Web Agent Benchmark'
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/train-00000-of-00001.parquet
arxiv: "2604.08523"
viewer: true
leaderboard: NAIL-Group/clawbench-leaderboard
---

# ClawBench — A Benchmark for AI Web Agents

**Can AI Agents Complete Everyday Online Tasks?**

|[**💻 Github**](https://github.com/reacher-z/ClawBench) | [**🏆 Leaderboard**](https://huggingface.co/spaces/NAIL-Group/clawbench-leaderboard) | [**📖 Paper**](https://arxiv.org/abs/2604.08523) | [**🌐 Website**](https://claw-bench.com) |

ClawBench is an open **benchmark** for AI web agents — the systems that drive a real browser to complete a user's task end-to-end. It scores agents on real, everyday online tasks (booking flights, ordering groceries, submitting job applications) across live websites. The corpus ships in two slices: **V1 — 153 tasks across 144 websites** (the original frontier-model leaderboard) and **V2 — 130 newer tasks** (expanded coverage). For each run we capture **5 layers of behavioral data** (session replay, screenshots, HTTP traffic, agent reasoning traces, and browser actions), collect human ground-truth annotations, and score with an agentic evaluator that provides step-level, traceable diagnostics.

Install: `pip install clawbench-eval` ([PyPI](https://pypi.org/project/clawbench-eval/)) · Companion raw traces: [`NAIL-Group/ClawBenchV1Trace`](https://huggingface.co/datasets/NAIL-Group/ClawBenchV1Trace)


## 🏆 Leaderboard

Live results — pulled from [`leaderboard/results.csv`](https://huggingface.co/datasets/NAIL-Group/ClawBench/blob/main/leaderboard/results.csv) in this repo. Sort by corpus (v1 / v2 / all) and submit your model in the interactive Space:

[![Open the live ClawBench Leaderboard ↗](https://img.shields.io/badge/%F0%9F%8F%86%20Open%20the%20live%20Leaderboard-NAIL--Group%2Fclawbench--leaderboard-FFD21E?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/spaces/NAIL-Group/clawbench-leaderboard)

**Snapshot — last refreshed 2026-05-10:**

| Rank | Model | Harness | Corpus | Pass | Total | Pass Rate | Wall (h) |
|------|-------|---------|--------|------|-------|-----------|----------|
| 1 | `glm-5.1` | hermes | v2 | 63 | 130 | **48.46%** | 11.35 |
| 2 | `glm-5.1` | hermes | v1 | 25 | 153 | **16.34%** | 15.37 |
| 3 | `openrouter/owl-alpha` | hermes | v2 | 19 | 130 | **14.62%** | 7.58 |
| 4 | `deepseek/deepseek-v4-flash` | hermes | v1 | 14 | 153 | **9.15%** | 13.59 |
| 5 | `glm-5.1` | openclaw | v1 | 13 | 153 | **8.50%** | 6.50 |
| 6 | `deepseek/deepseek-v4-flash` | hermes | v2 | 4 | 130 | **3.08%** | 2.37 |
| 7 | `poolside/laguna-m.1:free` | hermes | v1 | 1 | 153 | **0.65%** | 1.52 |

**Submit a result** → run [`clawbench-eval`](https://pypi.org/project/clawbench-eval/) on your model and open a PR to [`leaderboard/results.csv`](https://huggingface.co/datasets/NAIL-Group/ClawBench/blob/main/leaderboard/results.csv) — one row per (model × harness × corpus).
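Before opening a PR, it can help to sanity-check your row's pass rate against the pass/total counts. A minimal sketch, assuming column names mirror the snapshot table above (the actual `results.csv` headers may differ):

```python
# Recompute the pass rate for candidate leaderboard rows.
# Field names mirror the snapshot table; sample values are
# taken from the snapshot, not from results.csv directly.
rows = [
    {"model": "glm-5.1", "harness": "hermes", "corpus": "v2", "pass": 63, "total": 130},
    {"model": "glm-5.1", "harness": "hermes", "corpus": "v1", "pass": 25, "total": 153},
]

for r in rows:
    # Pass rate as a percentage, rounded the way the table reports it.
    r["pass_rate"] = round(100 * r["pass"] / r["total"], 2)

print(rows[0]["pass_rate"])  # 48.46
print(rows[1]["pass_rate"])  # 16.34
```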

> **Companion dataset (raw traces):** [`NAIL-Group/ClawBenchV1Trace`](https://huggingface.co/datasets/NAIL-Group/ClawBenchV1Trace) — `recording.mp4`, `requests.jsonl`, `actions.jsonl`, `agent-messages.jsonl`, `interception.json`, `run-meta.json` per V1 model run.

## Dataset Structure

### Columns

| Column | Type | Description |
|--------|------|-------------|
| `task_id` | int | Unique task identifier |
| `instruction` | string | Task prompt sent to the agent |
| `metaclass` | string | High-level category (21 categories) |
| `class` | string | Fine-grained sub-category |
| `platform` | string | Target platform (144 unique platforms) |
| `sites` | list[string] | Domains involved in the task |
| `eval_schema` | string (JSON) | Request interception configuration |
| `time_limit` | int | Maximum time in minutes |
| `extra_info` | string (JSON) | Paths to additional context files |
| `shared_info` | string | Path to shared user profile |

### Additional Files

```
shared/
  alex_green_personal_info.json   # Shared dummy user profile used across all tasks
extra_info/
  004/grocery_list.json           # Task-specific context (32 tasks have extra info)
  007/meal_plan.json
  043/pet_info.json
  ...
```

- **`shared/alex_green_personal_info.json`** — A comprehensive dummy user persona (Alex Green) including personal details, address, work history, education, financial information, and preferences. All tasks share this identity.
- **`extra_info/`** — Task-specific supplementary files referenced by the `extra_info` column. 32 of 153 tasks include additional context such as grocery lists, job links, meeting details, etc.

### eval_schema

The `eval_schema` field configures the **request interceptor** — a mechanism that blocks the final HTTP request matching the specified URL pattern and method, preventing irreversible actions (checkout, form submission, etc.) from reaching the server. This allows safe evaluation on live websites.

```json
{
  "url_pattern": "taskrabbit\\.(com|ca)/(api/v\\d+/jobs|book/\\d+/confirm)",
  "method": "POST"
}
```
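A minimal sketch of how such a schema could be applied, assuming `url_pattern` is a Python-style regex matched against the request URL (the actual interceptor in `clawbench-eval` may implement this differently):

```python
import re

# The example schema from above.
schema = {
    "url_pattern": r"taskrabbit\.(com|ca)/(api/v\d+/jobs|book/\d+/confirm)",
    "method": "POST",
}

def should_block(method: str, url: str) -> bool:
    """Return True if a request matches the interception schema."""
    return method == schema["method"] and re.search(schema["url_pattern"], url) is not None

print(should_block("POST", "https://www.taskrabbit.com/api/v3/jobs"))  # True
print(should_block("GET",  "https://www.taskrabbit.com/api/v3/jobs"))  # False
```

Because only the matching terminal request is blocked, the agent can browse, fill forms, and reach the confirmation step normally; only the irreversible server-side action is suppressed.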

## Task Categories (metaclass)

| Category | Tasks | Example Platforms |
|----------|-------|-------------------|
| daily-life | 21 | Uber Eats, Instacart, Zillow |
| entertainment-hobbies | 15 | Goodreads, Eventbrite, Fandango |
| creation-init | 13 | ClickUp, Typeform, Ghost |
| office-secretary-tasks | 9 | Trello, Calendly, Purelymail |
| rating-voting | 10 | TripAdvisor, Glassdoor, Yelp |
| education-learning | 9 | Coursera, LeetCode, Blinkist |
| travel | 9 | Google Flights, Hipcamp, Airbnb |
| beauty-personal-care | 9 | TaskRabbit, Booksy, Soko Glam |
| pet-animal-care | 8 | Rover, Petfinder, Chewy |
| job-search-hr | 8 | Indeed, Greenhouse, ZipRecruiter |
| academia-research | 5 | Zotero, Overleaf, Google Scholar |
| and 10 more... | | |

## Usage

```python
from datasets import load_dataset

ds = load_dataset("NAIL-Group/ClawBench", split="test")
print(ds[0])
```

## Citation

```bibtex
@article{zhang2026clawbench,
  title={ClawBench: Can AI Agents Complete Everyday Online Tasks?},
  author={Yuxuan Zhang and Yubo Wang and Yipeng Zhu and Penghui Du and Junwen Miao and Xuan Lu and Wendong Xu and Yunzhuo Hao and Songcheng Cai and Xiaochen Wang and Huaisong Zhang and Xian Wu and Yi Lu and Minyi Lei and Kai Zou and Huifeng Yin and Ping Nie and Liang Chen and Dongfu Jiang and Wenhu Chen and Kelsey R. Allen},
  journal={arXiv preprint arXiv:2604.08523},
  year={2026}
}
```