path: 3_generalized_planning/cross_embodiment/single_arm/questions.json
---

# RoboBench: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models as Embodied Brain

<div align="center">

[Paper](https://arxiv.org/abs/2510.17801v1)
[Code](https://github.com/lyl750697268/RoboBench)
[Project Page](https://robo-bench.github.io/)
[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

</div>

## Overview

RoboBench is a comprehensive evaluation benchmark designed to assess the capabilities of Multimodal Large Language Models (MLLMs) on embodied-intelligence tasks. It provides a systematic framework for evaluating how well these models understand and reason about robotic scenarios.

## Key Features

- **Comprehensive Evaluation**: Covers multiple aspects of embodied intelligence
- **Rich Dataset**: Contains thousands of carefully curated examples
- **Scientific Rigor**: Designed with research-grade evaluation metrics
- **Multimodal**: Supports text, image, and video data
- **Robotics Focus**: Specifically tailored for robotic applications

## Dataset Statistics

| Category | Count | Description |
|----------|-------|-------------|
| **Total Samples** | 6092 | Comprehensive evaluation dataset |
| **Image Samples** | 1400 | High-quality visual data |
| **Video Samples** | 3142 | Temporal and planning reasoning examples |

## Dataset Structure

```
RoboBench/
├── 1_instruction_comprehension/   # Instruction understanding tasks
├── 2_perception_reasoning/        # Visual perception and reasoning
├── 3_generalized_planning/        # Cross-domain planning tasks
├── 4_affordance_reasoning/        # Object affordance understanding
├── 5_error_analysis/              # Error analysis and debugging
└── system_prompt.json             # System prompts for every task
```
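Each task directory above carries its questions in a `questions.json` file. A minimal sketch of reading one split from a local checkout — `load_questions` is an illustrative helper, and the assumption that each file holds a list of sample dicts is not guaranteed by the schema:

```python
import json
from pathlib import Path

def load_questions(path: str) -> list:
    """Read one task split's questions.json and return its samples."""
    with Path(path).open(encoding="utf-8") as f:
        return json.load(f)

# Example (path taken from the dataset config):
# samples = load_questions("3_generalized_planning/cross_embodiment/single_arm/questions.json")
```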

## Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/lyl750697268/RoboBench.git
cd RoboBench

# Install dependencies
pip install -r requirements.txt
```

### Basic Usage

```python
from datasets import load_dataset

# Load the dataset (replace with the actual Hub repository ID)
dataset = load_dataset("your-username/RoboBench")

# Access different splits
train_data = dataset['train']
test_data = dataset['test']

# Example: access a single sample
sample = train_data[0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")
```
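Per-task system prompts ship in `system_prompt.json`. A sketch of pairing a sample with its task's prompt, assuming that file maps task names to prompt strings (the `build_messages` helper, the task key, and the stub prompt below are illustrative, not part of the dataset API):

```python
import json

def build_messages(sample: dict, system_prompts: dict, task: str) -> list:
    """Pair a benchmark question with its task-specific system prompt."""
    return [
        {"role": "system", "content": system_prompts[task]},
        {"role": "user", "content": sample["question"]},
    ]

# The real table would be loaded from the repository checkout:
# system_prompts = json.load(open("system_prompt.json", encoding="utf-8"))
prompts = {"generalized_planning": "You are an embodied planning assistant."}
messages = build_messages({"question": "What is the next step?"}, prompts, "generalized_planning")
```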

## Evaluation Metrics

- **Accuracy**: Overall task-completion rate
- **Perception Score**: Visual understanding capabilities
- **Reasoning Score**: Logical reasoning abilities
- **Planning Score**: Task-planning effectiveness
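For reference, the headline accuracy metric reduces to an exact-match score over predictions and reference answers. This is an illustrative sketch, not the benchmark's official scoring code:

```python
def accuracy(predictions: list, answers: list) -> float:
    """Exact-match accuracy: fraction of predictions equal to the reference answer."""
    if not answers:
        return 0.0
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

print(accuracy(["A", "C", "B"], ["A", "B", "B"]))  # 2 of 3 answers match
```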

## Research Applications

This benchmark is designed for researchers working on:

- **Multimodal Large Language Models**
- **Embodied AI Systems**
- **Robotic Intelligence**
- **Computer Vision**
- **Natural Language Processing**

## Citation

If you use RoboBench in your research, please cite our paper:

```bibtex
@misc{luo2025robobenchcomprehensiveevaluationbenchmark,
  title={RoboBench: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models as Embodied Brain},
  author={Yulin Luo and Chun-Kai Fan and Menghang Dong and Jiayu Shi and Mengdi Zhao and Bo-Wen Zhang and Cheng Chi and Jiaming Liu and Gaole Dai and Rongyu Zhang and Ruichuan An and Kun Wu and Zhengping Che and Shaoxuan Xie and Guocai Yao and Zhongxia Zhao and Pengwei Wang and Guang Liu and Zhongyuan Wang and Tiejun Huang and Shanghang Zhang},
  year={2025},
  eprint={2510.17801},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2510.17801},
}
```

## Contributing

We welcome contributions! Please see our [Contributing Guidelines](https://github.com/lyl750697268/RoboBench) for more details.

## License

This dataset is released under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).

## Links

- **Paper**: [arXiv:2510.17801](https://arxiv.org/abs/2510.17801v1)
- **Project Page**: [https://robo-bench.github.io/](https://robo-bench.github.io/)
- **GitHub**: [https://github.com/lyl750697268/RoboBench](https://github.com/lyl750697268/RoboBench)

---

<div align="center">

**Made with ❤️ by the RoboBench Team**

</div>