---
license: cc-by-4.0
task_categories:
- robotics
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: benchmark
data_files:
- split: single_arm
path: 3_generalized_planning/cross_embodiment/single_arm/questions.json
---
<p align="center">
<img src="https://robo-bench.github.io/static/images/log/R1.png" alt="RoboBench Logo" width="120"/>
</p>
<h1 align="center" style="font-size:2.5em;">RoboBench: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models as Embodied Brain</h1>
<div align="center">
[Paper](https://arxiv.org/abs/2510.17801v1)
[Code](https://github.com/lyl750697268/RoboBench)
[Project Page](https://robo-bench.github.io/)
[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
</div>
## Overview
RoboBench is a comprehensive evaluation benchmark designed to assess the capabilities of Multimodal Large Language Models (MLLMs) in embodied intelligence tasks. This benchmark provides a systematic framework for evaluating how well these models can understand and reason about robotic scenarios.
## Key Features
- **Comprehensive Evaluation**: Covers multiple aspects of embodied intelligence
- **Rich Dataset**: 6,092 carefully curated evaluation samples
- **Scientific Rigor**: Designed with research-grade evaluation metrics
- **Multimodal**: Supports text, images, and video data
- **Robotics Focus**: Specifically tailored for robotic applications
## Dataset Statistics
| Category | Count | Description |
|----------|-------|-------------|
| **Total Samples** | 6092 | Comprehensive evaluation dataset |
| **Image Samples** | 1400 | High-quality visual data |
| **Video Samples** | 3142 | Temporal and planning reasoning examples |
## Dataset Structure
```
RoboBench/
├── 1_instruction_comprehension/   # Instruction understanding tasks
├── 2_perception_reasoning/        # Visual perception and reasoning
├── 3_generalized_planning/        # Cross-domain planning tasks
├── 4_affordance_reasoning/        # Object affordance understanding
├── 5_error_analysis/              # Error analysis and debugging
└── system_prompt.json             # System prompts for every task
```
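Each split is distributed as a plain `questions.json` file (see the `configs` section of the card metadata), so it can be inspected without any extra tooling. A minimal loading sketch; the field names in the sample entry below (`question`, `answer`) are illustrative assumptions, not the dataset's actual schema:

```python
import json
from pathlib import Path

def load_questions(path):
    """Load a RoboBench split from its questions.json file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Stand-in for a real questions.json, just to demonstrate the call:
sample = [{"question": "Which arm should grasp the cup?", "answer": "left"}]
tmp = Path("questions_demo.json")
tmp.write_text(json.dumps(sample), encoding="utf-8")

questions = load_questions(tmp)
print(len(questions))

tmp.unlink()
```

In practice, point `load_questions` at a real split path such as `3_generalized_planning/cross_embodiment/single_arm/questions.json`.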
## Research Applications
This benchmark is designed for researchers working on:
- **Multimodal Large Language Models**
- **Embodied AI Systems**
- **Robotic Intelligence**
- **Computer Vision**
- **Natural Language Processing**
## Citation
If you use RoboBench in your research, please cite our paper:
```bibtex
@article{luo2025robobench,
title={Robobench: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models as Embodied Brain},
author={Luo, Yulin and Fan, Chun-Kai and Dong, Menghang and Shi, Jiayu and Zhao, Mengdi and Zhang, Bo-Wen and Chi, Cheng and Liu, Jiaming and Dai, Gaole and Zhang, Rongyu and others},
journal={arXiv preprint arXiv:2510.17801},
year={2025}
}
```
## Contributing
We welcome contributions! Please see our [Contributing Guidelines](https://github.com/lyl750697268/RoboBench) for more details.
## License
This dataset is released under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
## Links
- **Paper**: [arXiv:2510.17801](https://arxiv.org/abs/2510.17801v1)
- **Project Page**: [https://robo-bench.github.io/](https://robo-bench.github.io/)
- **GitHub**: [https://github.com/lyl750697268/RoboBench](https://github.com/lyl750697268/RoboBench)
---
<div align="center">
**Made with ❤️ by the RoboBench Team**
</div>