Add paper link and task category
#2
by nielsr HF Staff - opened

README.md CHANGED

@@ -1,4 +1,13 @@
 ---
+language:
+- en
+- zh
+license: mit
+size_categories:
+- n<1K
+task_categories:
+- other
+pretty_name: Claw-Eval
 dataset_info:
   features:
   - name: task_id
@@ -32,18 +41,11 @@ configs:
       path: data/multimodal-*
     - split: multi_turn
       path: data/multi_turn-*
-language:
-- en
-- zh
-license: mit
 tags:
 - agent-bench
 - evaluation
 - real-world
 - multimodal
-pretty_name: Claw-Eval
-size_categories:
-- n<1K
 ---
 
 <div align="center">
@@ -59,7 +61,7 @@ size_categories:
 
 **End-to-end transparent benchmark for AI agents acting in the real world.**
 
-[Leaderboard](https://claw-eval.github.io) | [Code](https://github.com/claw-eval/claw-eval)
+[Paper](https://huggingface.co/papers/2604.06132) | [Leaderboard](https://claw-eval.github.io) | [Code](https://github.com/claw-eval/claw-eval)
 
 </div>
 
@@ -112,8 +114,8 @@ If you use Claw-Eval in your research, please cite:
 
 ```bibtex
 @misc{claw-eval2026,
-  title={Claw-Eval:
-  author={Ye, Bowen and Li, Rang and Yang, Qibin and Xie, Zhihui and Liu, Yuanxin and Yao, Linli and Lyu, Hanglong and Li, Lei},
+  title={Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents},
+  author={Ye, Bowen and Li, Rang and Yang, Qibin and Xie, Zhihui and Liu, Yuanxin and Yao, Linli and Lyu, Hanglong and An, Chenxin and Li, Lei and Kong, Lingpeng and Liu, Qi and Sui, Zhifang and Yang, Tong},
   year={2026},
   url={https://github.com/claw-eval/claw-eval}
 }
@@ -131,4 +133,4 @@ We welcome any kind of contribution. Let us know if you have any suggestions!
 
 ## License
 
-This dataset is released under the [MIT License](https://github.com/claw-eval/claw-eval/blob/main/LICENSE).
+This dataset is released under the [MIT License](https://github.com/claw-eval/claw-eval/blob/main/LICENSE).
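The change above consolidates the dataset-card metadata into a single YAML frontmatter block at the top of the README. As a minimal sketch of how such a flat frontmatter can be read, assuming only the simple `key: value` and `- item` lines that appear in this diff (the Hub itself uses a full YAML parser, which this is not):

```python
# Minimal sketch: extract a dataset card's flat YAML frontmatter
# (the block this PR edits) from a README string, stdlib only.
# Hypothetical README excerpt; mirrors the fields added in this PR.
README = """\
---
language:
- en
- zh
license: mit
size_categories:
- n<1K
task_categories:
- other
pretty_name: Claw-Eval
---

# Claw-Eval
"""


def parse_frontmatter(text: str) -> dict:
    """Parse the block between the first two '---' fences into a dict.

    Handles only flat 'key: value' pairs and '- item' list entries,
    as shown in the diff; nested YAML is out of scope.
    """
    parts = text.split("---\n")
    if len(parts) < 3:
        return {}
    meta, key = {}, None
    for line in parts[1].splitlines():
        if line.startswith("- ") and key is not None:
            # List item belonging to the most recent key.
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            # Empty value means a list follows on subsequent lines.
            meta[key] = value if value else []
    return meta


meta = parse_frontmatter(README)
print(meta["task_categories"])  # the field this PR adds
```

A real validator would instead load the block with a YAML library and check it against the Hub's dataset-card schema; this sketch only illustrates the shape of the fields being moved and added.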