---
license: mit
task_categories:
- visual-question-answering
- image-text-to-text
- question-answering
language:
- en
- zh
tags:
- agents
- benchmark
- evaluation
- openclaw
- multi-modal
size_categories:
- n<1K
---
<p align="center">
<img src="https://huggingface.co/datasets/internlm/WildClawBench/resolve/main/assets/lobster_battle.png" alt="WildClawBench Lobster" width="480">
</p>
<p align="center">
<b>Hard, practical, end-to-end evaluation for AI agents — in the wild.</b>
</p>
<div align="center">

[🌐 Project Page](https://internlm.github.io/WildClawBench/) ·
[💻 GitHub](https://github.com/InternLM/WildClawBench) ·
[🤗 Dataset](https://huggingface.co/datasets/internlm/WildClawBench)

</div>
## 📌 Overview
**WildClawBench** is an agent benchmark that tests what actually matters: can an AI agent do real work, end-to-end, without hand-holding?
We drop agents into a live [OpenClaw](https://github.com/openclaw/openclaw) environment — the same open-source personal AI assistant that real users rely on daily — and throw **60 original tasks** at them: clipping goal highlights from a football match, negotiating meeting times over multi-round emails, hunting down contradictions in search results, writing inference scripts for undocumented codebases, catching privacy leaks before they happen. Useful things. Hard things.
Hard enough that **every frontier model today scores below 0.6**. That makes scores mean something.
## 📂 Repository Contents
This Hugging Face repository hosts the heavy assets required to run the benchmark:
* **`Images/wildclawbench-ubuntu_v1.2.tar`**: The official Docker image containing the isolated Ubuntu environment, OpenClaw instance, and all necessary tools (browser, bash, file system).
* **`workspace/`**: The task data directory containing initial and evaluation files for all 60 tasks.
## 📊 Benchmark Structure
The benchmark covers 6 categories across English and Chinese:
| Category | Tasks | Key Challenges |
|:---------|:---:|:---------------|
| **Productivity Flow** | 10 | Information synthesis, multi-source aggregation, and structured output. |
| **Code Intelligence** | 12 | Undocumented codebase comprehension and pixel-level visual reasoning. |
| **Social Interaction** | 6 | Multi-turn communication, API orchestration, and context tracking. |
| **Search & Retrieval** | 11 | Web search + local data reconciliation and source verification. |
| **Creative Synthesis** | 11 | Video/audio processing and cross-modal generation (e.g., match highlights). |
| **Safety Alignment** | 10 | Adversarial robustness, credential awareness, and harmful content refusal. |
### What Sets Us Apart
- **Real environment, not mocks.** Tasks run inside a live OpenClaw instance with real tools (browser, bash, file system, email, calendar).
- **60 original tasks, built by hand.** Not adapted from existing benchmarks — each task was designed from scratch to stress-test real-world agent capabilities.
- **Reproducible & isolated.** Each task runs in its own Docker container. Same image, same data, same grading code. Ground truth and grading scripts are injected only after the agent finishes — they are never visible during execution, eliminating data leakage. Scores are reproducible across machines.
## Quick Start
### Install Docker
<details>
<summary>macOS</summary>
```bash
brew install --cask docker
```
After installation, launch Docker Desktop from Applications or run:
```bash
open -a Docker
```
</details>
<details>
<summary>Ubuntu</summary>
```bash
# Install dependencies
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add apt repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# Allow current user to run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
```
</details>
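Whichever platform you installed on, it is worth confirming that the daemon is reachable before downloading the large benchmark image. This is a standard Docker check, not specific to WildClawBench:

```shell
# Verify the installation: `docker info` talks to the daemon, so it fails
# if Docker Desktop / dockerd is not running yet.
if docker --version && docker info > /dev/null 2>&1; then
  DOCKER_READY=yes
else
  DOCKER_READY=no
fi
echo "docker ready: $DOCKER_READY"
```

If this prints `docker ready: no`, start Docker Desktop (macOS) or the `docker` service (Ubuntu) before continuing.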
### Download Image
Download the Docker image tarball from [Hugging Face](https://huggingface.co/datasets/internlm/WildClawBench/blob/main/Images/wildclawbench-ubuntu_v1.2.tar):
```bash
pip install -U huggingface_hub
huggingface-cli download internlm/WildClawBench Images/wildclawbench-ubuntu_v1.2.tar --repo-type dataset --local-dir .
```
Then load the image:
```bash
docker load -i Images/wildclawbench-ubuntu_v1.2.tar
```
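On success, `docker load` prints the loaded image tag. To confirm the image is available afterwards (the `wildclawbench` name filter below is an assumption based on the tarball filename; use the exact tag printed by `docker load`):

```shell
# Count locally available images whose name matches the benchmark image.
# The "wildclawbench" filter is an assumption from the tarball filename.
LOADED=$(docker images 2>/dev/null | grep -ci wildclawbench || true)
echo "matching images: $LOADED"
```

A count of `0` means the load did not complete; re-run `docker load -i` and check for errors.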
### Download Task Data
Download the task data from [Hugging Face](https://huggingface.co/datasets/internlm/WildClawBench/tree/main/workspace). Since `workspace` is a directory rather than a single file, pass it as an `--include` pattern:
```bash
huggingface-cli download internlm/WildClawBench --include "workspace/*" --repo-type dataset --local-dir .
```
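A quick sanity check that both downloads landed where the steps above expect them (paths taken from the commands above, which both use `--local-dir .`):

```shell
# Both downloads target the current directory, so the image tarball and
# the task data directory should now exist locally.
[ -f Images/wildclawbench-ubuntu_v1.2.tar ] && echo "image tarball: ok" || echo "image tarball: missing"
[ -d workspace ] && echo "workspace: ok" || echo "workspace: missing"
```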
## Contributors
[Shuangrui Ding](https://mark12ding.github.io/)\* (Project Lead), [Xuanlang Dai](https://github.com/LennoxDai)\*, [Long Xing](https://github.com/Cooperx521)\*, [Shengyuan Ding](https://github.com/SYuan03), [Ziyu Liu](https://liuziyu77.github.io/), [Jingyi Yang](https://yjyddq.github.io/), [Penghui Yang](https://github.com/yph22), [Zhixiong Zhang](https://github.com/rookiexiong7), [Xilin Wei](https://github.com/wiselnn570)
Advisors: [Yubo Ma](https://mayubo2333.github.io/), [Haodong Duan](https://kennymckormick.github.io/), [Jing Shao](https://amandajshao.github.io/), [Jiaqi Wang](https://myownskyw7.github.io/), [Dahua Lin](http://dahualin.org/), [Kai Chen](https://chenkai.site/), [Yuhang Zang](https://yuhangzang.github.io/)
## Acknowledgements
WildClawBench builds on top of the excellent open-source agent ecosystem. We gratefully acknowledge the following projects:
- **[OpenClaw](https://github.com/openclaw/openclaw)**
- **[Claw-Eval](https://github.com/claw-eval/claw-eval)**
- **[PinchBench](https://github.com/pinchbench/skill)**
---