tags:
- multi-modal
---

<p align="center">
  <img src="https://huggingface.co/datasets/internlm/WildClawBench/resolve/main/assets/lobster_battle.png" alt="WildClawBench Lobster" width="480">
</p>
<p align="center">
  <b>Hard, practical, end-to-end evaluation for AI agents — in the wild.</b>
</p>

🏠 <a href="">GitHub</a> | 🥇 <a href="">Leaderboard</a> | 📝 <a href="">Blog</a> | 🤗 <a href="">Image/Data preparation</a>

## 📌 Overview

**WildClawBench** is an agent benchmark designed to test real-world utility: can an AI agent perform complex work end-to-end without hand-holding? Agents are deployed into a live [OpenClaw](https://github.com/openclaw/openclaw) environment—a real-world personal AI assistant—to tackle **60 original tasks**.

These tasks are designed to be significantly more difficult than existing benchmarks.

This Hugging Face repository hosts the heavy assets required to run the benchmark:

* **`Images/wildclawbench-ubuntu_v1.2.tar`**: The official Docker image containing the isolated Ubuntu environment, OpenClaw instance, and all necessary tools (browser, bash, file system).
* **`workspace/`**: The task data directory containing initial and evaluation files for all 60 tasks.
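The image tarball can be imported into a local Docker daemon before running tasks. A minimal sketch, assuming Docker is installed and the tarball has been downloaded from this repository into the current directory (the resulting image name and tag are whatever the tarball was saved with, not something this snippet controls):

```shell
# Path to the tarball as laid out in this repository
IMAGE_TAR="Images/wildclawbench-ubuntu_v1.2.tar"

# Import the image only when both the tarball and Docker are available
if [ -f "$IMAGE_TAR" ] && command -v docker >/dev/null 2>&1; then
  # docker load restores the image (with its saved name:tag) from the archive
  docker load -i "$IMAGE_TAR"
else
  echo "Download $IMAGE_TAR from this repo and install Docker first"
fi
```

After a successful load, `docker images` should list the restored image, which can then be started as the isolated benchmark environment.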

## 📊 Benchmark Structure