The benchmark assesses model performance across **12 distinct embodied reasoning** question types.

### Dataset Statistics

| Statistic | Value |
|-----------|-------|
| Total Samples | 600 |
| Question Types | 12 |
| Total Images | 668 |
| Answer Format | Multiple Choice (single or multiple correct options) |

### Question Type Distribution

| Question Type | Count |
|---------------|-------|
| … | … |
| Direct Influence | 4 |
| Counterfactual | 2 |

### Data Fields

Each sample contains the following fields (an example record follows the list):

- **`id`** (int): Unique identifier for each sample
- **`question`** (str): The question text with multiple-choice options
- **`question_type`** (str): Category of the question (one of 12 types)
- **`answer`** (str): Correct answer letter (A, B, C, or D)
- **`num_images`** (int): Number of images associated with the question
- **`image_paths`** (list[str]): Relative paths to the associated images
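
For concreteness, a record might look like the sketch below; the values and file names are hypothetical illustrations, not actual benchmark data:

```python
# Hypothetical EO-Bench record (field values are illustrative only)
sample = {
    "id": 0,
    "question": "Which trajectory should the end-effector follow? A) ... B) ... C) ... D) ...",
    "question_type": "Trajectory Reasoning",
    "answer": "A",
    "num_images": 2,
    "image_paths": ["images/0_0.png", "images/0_1.png"],  # hypothetical paths
}
```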
### Question Type Descriptions

| Type | Description |
|------|-------------|
| **Trajectory Reasoning** | Predict the optimal path for robot end-effector movement |
| **Visual Grounding** | Locate specific objects or regions in the scene |
| **Process Verification** | Verify the correctness of a robotic action sequence |
| **Multiview Pointing** | Identify corresponding points across multiple camera views |
| **Relation Reasoning** | Understand spatial relationships between objects |
| **Robot Interaction** | Predict outcomes of robot-environment interactions |
| **Object State** | Recognize and reason about object states |
| **Episode Caption** | Describe robotic manipulation episodes |
| **Action Reasoning** | Reason about the effects of robot actions |
| **Task Planning** | Plan sequences of actions to achieve goals |
| **Direct Influence** | Understand direct causal effects in manipulation |
| **Counterfactual** | Reason about hypothetical alternative scenarios |
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Access samples
for sample in dataset['train']:
    print(f"ID: {sample['id']}")
    print(f"Question: {sample['question']}")
    print(f"Type: {sample['question_type']}")
    print(f"Answer: {sample['answer']}")
    print(f"Images: {sample['image_paths']}")
    break
```
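
If you only need one of the 12 categories, the standard `datasets` filtering API applies; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Keep only one question type, e.g. Counterfactual
subset = dataset['train'].filter(lambda s: s['question_type'] == 'Counterfactual')
print(len(subset))
```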

### Loading Images

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from PIL import Image

dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Get the first sample
sample = dataset['train'][0]

# Load associated images
for img_path in sample['image_paths']:
    # Images are stored in the 'images' folder of the dataset repository
    image = Image.open(hf_hub_download(
        repo_id="IPEC-COMMUNITY/EO-Bench",
        filename=img_path,
        repo_type="dataset"
    ))
    image.show()
```
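
Fetching files one at a time is slow when iterating over many samples; as an alternative sketch, `huggingface_hub.snapshot_download` can mirror the image folder locally in one call (the `images/*` pattern is an assumption based on the folder layout noted in the comment above):

```python
import os
from huggingface_hub import snapshot_download
from PIL import Image

# Download all benchmark images in one pass
local_dir = snapshot_download(
    repo_id="IPEC-COMMUNITY/EO-Bench",
    repo_type="dataset",
    allow_patterns=["images/*"],
)

# Open an image via its relative path from `image_paths`
image = Image.open(os.path.join(local_dir, "images/0_0.png"))  # hypothetical path
```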

### Evaluation Example

```python
from datasets import load_dataset

def evaluate_model(model, dataset):
    correct = 0
    total = 0

    for sample in dataset['train']:
        # Load images and prepare input
        # (load_image: any helper that returns a PIL image for a relative path)
        images = [load_image(p) for p in sample['image_paths']]
        question = sample['question']

        # Get model prediction (an option letter such as "A")
        prediction = model.predict(images, question)

        # Check if correct
        if prediction == sample['answer']:
            correct += 1
        total += 1

    accuracy = correct / total * 100
    return accuracy
```
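
The exact string comparison above assumes single-letter answers; since the answer format allows multiple correct options, a set-based check is more robust (a sketch, assuming multi-answer keys are concatenated letters such as "AC"):

```python
def is_correct(prediction: str, answer: str) -> bool:
    # Compare option-letter sets so that "CA" matches "AC"
    letters = lambda s: frozenset(c for c in s.upper() if c in "ABCD")
    return letters(prediction) == letters(answer)
```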
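Using `evaluate_model` from the snippet above, a trivial baseline gives a quick sanity check of the harness; `RandomBaseline` and this `load_image` helper are hypothetical stand-ins, not part of the benchmark:

```python
import random
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from PIL import Image

def load_image(path):
    # Hypothetical helper: fetch a single image from the dataset repository
    return Image.open(hf_hub_download(
        repo_id="IPEC-COMMUNITY/EO-Bench",
        filename=path,
        repo_type="dataset",
    ))

class RandomBaseline:
    # Hypothetical model that ignores its input and guesses a letter
    def predict(self, images, question):
        return random.choice(["A", "B", "C", "D"])

dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")
print(f"Random baseline: {evaluate_model(RandomBaseline(), dataset):.1f}%")
```

For single-answer questions with four options, chance performance sits near 25%, a useful floor when reading reported scores.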
## Related Resources

### EO-1 Model

EO-1 is a unified embodied foundation model that processes interleaved vision-text-action inputs using a single decoder-only transformer architecture. The model achieves state-of-the-art performance on multimodal embodied reasoning tasks.

- **Paper**: [EO-1: Interleaved Vision-Text-Action Pretraining for General Robot Control](https://arxiv.org/abs/2508.21112)
- **GitHub**: [SHAILAB-IPEC/EO1](https://github.com/SHAILAB-IPEC/EO1)
- **Models**: [IPEC-COMMUNITY/EO-1-3B](https://huggingface.co/IPEC-COMMUNITY/EO-1-3B)

## Citation

If you use this benchmark in your research, please cite:

```bibtex
@article{eo1,
  title={EO-1: Interleaved Vision-Text-Action Pretraining for General Robot Control},
  author={Delin Qu and Haoming Song and Qizhi Chen and Zhaoqing Chen and Xianqiang Gao and Xinyi Ye and Qi Lv and Modi Shi and Guanghui Ren and Cheng Ruan and Maoqing Yao and Haoran Yang and Jiacheng Bao and Bin Zhao and Dong Wang},
  journal={arXiv preprint},
  year={2025},
  url={https://arxiv.org/abs/2508.21112}
}
```

## License

This dataset is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

## Contact

For questions or issues, please open an issue on the [GitHub repository](https://github.com/SHAILAB-IPEC/EO1) or contact the EO-1 team.