---
language:
  - en
  - zh
license: mit
size_categories:
  - n<1K
task_categories:
  - other
pretty_name: Claw-Eval
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: query
      dtype: string
    - name: fixture
      list: string
    - name: language
      dtype: string
    - name: category
      dtype: string
  splits:
    - name: general
      num_bytes: 200118
      num_examples: 161
    - name: multimodal
      num_bytes: 72393
      num_examples: 101
    - name: multi_turn
      num_bytes: 50000
      num_examples: 38
  download_size: 155773
  dataset_size: 322511
configs:
  - config_name: default
    data_files:
      - split: general
        path: data/general-*
      - split: multimodal
        path: data/multimodal-*
      - split: multi_turn
        path: data/multi_turn-*
tags:
  - agent-bench
  - evaluation
  - real-world
  - multimodal
---

# Claw-Eval


End-to-end transparent benchmark for AI agents acting in the real world.

Paper | Leaderboard | Code


## Dataset Structure

### Splits

| Split | Examples | Description |
|---|---|---|
| general | 161 | Core agent tasks across 24 categories (communication, finance, ops, productivity, etc.) |
| multimodal | 101 | Multimodal agentic tasks requiring perception and creation (webpage generation, video QA, document extraction, etc.) |
| multi_turn | 38 | Multi-turn conversational tasks in which the agent interacts with a simulated user persona to clarify needs and provide advice |

### Fields

| Field | Type | Description |
|---|---|---|
| task_id | string | Unique task identifier |
| query | string | Task instruction / description |
| fixture | list[string] | Fixture files required for the task (available in data/fixtures.tar.gz) |
| language | string | Task language (en or zh) |
| category | string | Task domain |
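Since each task's `fixture` field only names files, the fixtures archive needs to be unpacked locally before running a task. The sketch below shows the extraction step with the standard-library `tarfile` module; for a self-contained demo it first builds a tiny stand-in archive (the file names and contents here are made up, only the `data/fixtures.tar.gz` name comes from the table above).

```python
import tarfile
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())

# --- stand-in for the downloaded data/fixtures.tar.gz (demo only) ---
(workdir / "fixtures").mkdir()
(workdir / "fixtures" / "invoice.pdf").write_bytes(b"%PDF-1.4 demo")
archive = workdir / "fixtures.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(workdir / "fixtures", arcname="fixtures")

# --- the part you would run on the real archive ---
extract_dir = workdir / "extracted"
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(extract_dir)  # files referenced by each task's `fixture` field

available = sorted(p.name for p in (extract_dir / "fixtures").iterdir())
print(available)  # ['invoice.pdf']
```

After extraction, resolve each entry of a task's `fixture` list against the extracted directory to get the local paths the agent should be given.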

## Usage

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("claw-eval/Claw-Eval")

# Load a specific split
general = load_dataset("claw-eval/Claw-Eval", split="general")
multimodal = load_dataset("claw-eval/Claw-Eval", split="multimodal")
multi_turn = load_dataset("claw-eval/Claw-Eval", split="multi_turn")

# Inspect a sample
print(general[0])
```
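Each example is a flat record with the five fields listed above, so slicing a split by language or category is plain filtering. A minimal sketch on hand-written sample records (the `task_id`, `query`, and `fixture` values below are illustrative, not taken from the dataset):

```python
# Sample records following the schema above (illustrative values only).
records = [
    {"task_id": "demo-001", "query": "Summarize the attached report.",
     "fixture": ["report.pdf"], "language": "en", "category": "productivity"},
    {"task_id": "demo-002", "query": "Help me organize this bill.",
     "fixture": ["bill.csv"], "language": "zh", "category": "finance"},
    {"task_id": "demo-003", "query": "Draft a reply to this email.",
     "fixture": [], "language": "en", "category": "communication"},
]

# Filter by language, as you would with `dataset.filter(...)` on a real split.
english = [r for r in records if r["language"] == "en"]

# Group task ids by category for a quick per-domain breakdown.
by_category = {}
for r in records:
    by_category.setdefault(r["category"], []).append(r["task_id"])

print(len(english))         # 2
print(sorted(by_category))  # ['communication', 'finance', 'productivity']
```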

## Acknowledgements

Our test cases are built on the work of the community. We draw from and adapt tasks contributed by OpenClaw, PinchBench, OfficeQA, OneMillion-Bench, Finance Agent, and Terminal-Bench 2.0.

## Citation

If you use Claw-Eval in your research, please cite:

```bibtex
@misc{claw-eval2026,
  title={Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents},
  author={Ye, Bowen and Li, Rang and Yang, Qibin and Xie, Zhihui and Liu, Yuanxin and Yao, Linli and Lyu, Hanglong and An, Chenxin and Li, Lei and Kong, Lingpeng and Liu, Qi and Sui, Zhifang and Yang, Tong},
  year={2026},
  url={https://github.com/claw-eval/claw-eval}
}
```

## Contributors

Bowen Ye* (PKU), Rang Li* (PKU), Qibin Yang* (PKU), Zhihui Xie (HKU), Yuanxin Liu (PKU), Linli Yao (PKU), Hanglong Lyu (PKU), Lei Li (HKU, Project Lead)

Advisors: Tong Yang (PKU), Zhifang Sui (PKU), Lingpeng Kong (HKU), Qi Liu (HKU)

## Contribution

We welcome contributions of all kinds. Let us know if you have any suggestions!

## License

This dataset is released under the MIT License.