yuhangzang committed on
Commit 30cea6f · verified · 1 Parent(s): 58b00c1

Update README.md

Files changed (1):
  1. README.md +5 -2
README.md CHANGED

```diff
@@ -26,9 +26,12 @@ tags:
 
 ## 📌 Overview
 
-**WildClawBench** is an agent benchmark designed to test real-world utility: can an AI agent perform complex work end-to-end without hand-holding? Agents are deployed into a live [OpenClaw](https://github.com/openclaw/openclaw) environment—a real-world personal AI assistant—to tackle **60 original tasks**.
 
-These tasks are designed to be significantly more difficult than existing benchmarks; currently, every frontier model scores below **0.6**, ensuring the evaluation remains meaningful for next-generation agents.
+**WildClawBench** is an agent benchmark that tests what actually matters: can an AI agent do real work, end-to-end, without hand-holding?
+
+We drop agents into a live [OpenClaw](https://github.com/openclaw/openclaw) environment — the same open-source personal AI assistant that real users rely on daily — and throw **60 original tasks** at them: clipping goal highlights from a football match, negotiating meeting times over multi-round emails, hunting down contradictions in search results, writing inference scripts for undocumented codebases, catching privacy leaks before they happen. Useful things. Hard things.
+
+Hard enough that **every frontier model today scores below 0.6**. That makes scores mean something.
 
 ## 📂 Repository Contents
 
```