You can load the dataset directly using the Hugging Face 🤗 `datasets` library:
```python
from datasets import load_dataset

# Load dataset from Hugging Face Hub
dataset = load_dataset(
    "RyanWW/Spatial457",
    name="L5_6d_spatial",
    split="validation",
    trust_remote_code=True,  # Required for custom loading script
)
```

**Important Notes:**
- ✅ Use `trust_remote_code=True` to enable the custom dataset loading script
- ❌ Do NOT pass the `data_dir` parameter when loading from the Hugging Face Hub
- 📦 Images are automatically downloaded and cached
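A loaded split behaves like any other 🤗 `datasets` split, so you can sanity-check its size and schema before iterating. A minimal sketch (the `describe` helper below is ours, not part of any library API):

```python
def describe(dataset):
    """Summarize a loaded split: size and sorted column names."""
    return f"{len(dataset)} examples, columns: {sorted(dataset.column_names)}"

if __name__ == "__main__":
    from datasets import load_dataset

    dataset = load_dataset(
        "RyanWW/Spatial457",
        name="L5_6d_spatial",
        split="validation",
        trust_remote_code=True,
    )
    print(describe(dataset))
```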
Each example is a dictionary like:

```python
{
    ...
}
```
### 🔹 Available Configurations

```python
configs = [
    "L1_single",      # Single object identification
    "L2_objects",     # Multi-object understanding
    "L3_2d_spatial",  # 2D spatial reasoning
    "L4_occ",         # Object occlusion
    "L4_pose",        # 3D pose estimation
    "L5_6d_spatial",  # 6D spatial reasoning
    "L5_collision",   # Collision detection
]
```

You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
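For instance, to compare capabilities across levels you can iterate over all configuration names in one go. A sketch under the assumption that every subset exposes a `validation` split (loading all seven will download all of them; `subset_sizes` is our helper, not a library function):

```python
# The seven Spatial457 configurations, ordered by difficulty level
CONFIGS = [
    "L1_single", "L2_objects", "L3_2d_spatial",
    "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision",
]

def subset_sizes(split="validation"):
    """Load every subset and map its name to the number of examples."""
    from datasets import load_dataset

    return {
        name: len(load_dataset(
            "RyanWW/Spatial457",
            name=name,
            split=split,
            trust_remote_code=True,
        ))
        for name in CONFIGS
    }

if __name__ == "__main__":
    for name, size in subset_sizes().items():
        print(f"{name}: {size} examples")
```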
### 🔹 Example: Load and Use

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset(
    "RyanWW/Spatial457",
    name="L5_6d_spatial",
    split="validation",
    trust_remote_code=True,
)

print(f"Number of examples: {len(dataset)}")

# Access first example
example = dataset[0]
print(f"Question: {example['question']}")
print(f"Answer: {example['answer']}")
print(f"Image size: {example['image'].size}")
```
## 📊 Benchmark

We benchmarked a wide range of state-of-the-art models, including GPT-4o, Gemini, Claude, and several open-source LMMs, across all subsets. The results below have been updated after rerunning the evaluation. While they show minor variance compared to the results in the published paper, the conclusions remain unchanged.
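The per-subset evaluation loop can be sketched roughly as follows; `predict` stands in for whatever model you are benchmarking, and exact-match accuracy is our simplification for illustration, not necessarily the exact metric used in the paper:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the gold answers
    after lowercasing and stripping whitespace."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    hits = sum(
        str(p).strip().lower() == str(r).strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

def evaluate_subset(dataset, predict):
    """Score a model callable (image, question) -> answer on one subset."""
    predictions = [predict(ex["image"], ex["question"]) for ex in dataset]
    references = [ex["answer"] for ex in dataset]
    return exact_match_accuracy(predictions, references)
```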