---
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: query
      dtype: string
    - name: fixture
      list: string
    - name: language
      dtype: string
    - name: category
      dtype: string
    - name: rubric
      dtype: large_string
  splits:
    - name: general
      num_bytes: 200118
      num_examples: 104
    - name: multimodal
      num_bytes: 72393
      num_examples: 35
  download_size: 155773
  dataset_size: 272511
configs:
  - config_name: default
    data_files:
      - split: general
        path: data/general-*
      - split: multimodal
        path: data/multimodal-*
language:
  - en
  - zh
license: mit
tags:
  - agent-bench
  - evaluation
  - real-world
  - multimodal
pretty_name: Claw-Eval
size_categories:
  - n<1K
---

# Claw-Eval


End-to-end transparent benchmark for AI agents acting in the real world.

Leaderboard | Code


## Dataset Structure

### Splits

| Split | Examples | Description |
|---|---|---|
| `general` | 104 | Core agent tasks across 24 categories (communication, finance, ops, productivity, etc.) |
| `multimodal` | 35 | Multimodal agentic tasks requiring perception and creation (webpage generation, video QA, document extraction, etc.) |

### Fields

| Field | Type | Description |
|---|---|---|
| `task_id` | string | Unique task identifier |
| `query` | string | Task instruction / description |
| `fixture` | list[string] | Fixture files required for the task (available in `data/fixtures.tar.gz`) |
| `language` | string | Task language (`en` or `zh`) |
| `category` | string | Task domain |
| `rubric` | string | Detailed evaluation criteria with weighted scoring |
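To illustrate what "weighted scoring" in the `rubric` field amounts to, here is a minimal sketch of how per-criterion scores might be combined into a task score. The criteria, weights, and scores below are hypothetical, not taken from the dataset:

```python
# Hypothetical rubric: criterion -> (weight, score in [0, 1]).
# Illustrative values only; actual rubrics are free text in the `rubric` field.
rubric_scores = {
    "correctness": (0.6, 1.0),
    "formatting": (0.4, 0.5),
}

# Final task score is the weighted sum of per-criterion scores.
total = round(sum(weight * score for weight, score in rubric_scores.values()), 2)
print(total)  # 0.8
```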

## Usage

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("claw-eval/Claw-Eval")

# Load a specific split
general = load_dataset("claw-eval/Claw-Eval", split="general")
multimodal = load_dataset("claw-eval/Claw-Eval", split="multimodal")

# Inspect a sample
print(general[0])
```
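Tasks whose `fixture` list is non-empty reference files packed in `data/fixtures.tar.gz`. A minimal sketch for unpacking the archive with the standard library, assuming it has already been downloaded locally (the local paths here are assumptions, not part of the dataset API):

```python
import tarfile
from pathlib import Path

# Assumed local copy of the archive shipped in the dataset repo.
archive = Path("data/fixtures.tar.gz")
if archive.exists():
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path="fixtures")  # extraction target is an arbitrary local dir
```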

## Acknowledgements

Our test cases are built on the work of the community. We draw from and adapt tasks contributed by OpenClaw, PinchBench, OfficeQA, OneMillion-Bench, Finance Agent, and Terminal-Bench 2.0.

## Citation

If you use Claw-Eval in your research, please cite:

```bibtex
@misc{claw-eval2026,
  title={Claw-Eval: End-to-End Transparent Benchmark for AI Agents in the Real World},
  author={Ye, Bowen and Li, Rang and Yang, Qibin and Xie, Zhihui and Liu, Yuanxin and Yao, Linli and Lyu, Hanglong and Li, Lei},
  year={2026},
  url={https://github.com/claw-eval/claw-eval}
}
```

## Contributors

Bowen Ye* (PKU), Rang Li* (PKU), Qibin Yang* (PKU), Zhihui Xie (HKU), Yuanxin Liu (PKU), Linli Yao (PKU), Hanglong Lyu (PKU), Lei Li (HKU, Project Lead)

Advisors: Tong Yang (PKU), Zhifang Sui (PKU), Lingpeng Kong (HKU), Qi Liu (HKU)

## Contributing

We welcome contributions of all kinds. Let us know if you have any suggestions!

## License

This dataset is released under the MIT License.