update readme

README.md CHANGED

@@ -127,6 +127,18 @@ Each example is a dictionary like:
You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
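
A minimal loading sketch, assuming the Hugging Face `datasets` library; the repo ID and split name below are placeholders (substitute this dataset's actual hub path and splits), while the `name=` values correspond to the subsets in the benchmark table that follows:

```python
from datasets import load_dataset

# "user/Spatial457" is a placeholder repo ID -- replace it with this
# dataset's actual Hugging Face hub path. `split="test"` is likewise an
# assumption; adjust it to whatever splits the dataset provides.
# Valid `name=` values match the benchmark subsets below, e.g. L1_single,
# L2_objects, L3_2d_spatial, L4_occ, L4_pose, L5_6d_spatial, L5_collision.
ds = load_dataset("user/Spatial457", name="L5_6d_spatial", split="test")

print(ds[0])  # each example is a dictionary, as shown above
```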

## 📊 Benchmark

We benchmarked a wide range of state-of-the-art models, including GPT-4o, Gemini, Claude, and several open-source LMMs, across all subsets. The results below have been updated after rerunning the evaluation. While they show minor variance compared to the results in the published paper, the conclusions remain unchanged.

### Spatial457 Evaluation Results (Transposed by Model)

| Model                    | L1_single | L2_objects | L3_2d_spatial | L4_occ | L4_pose | L5_6d_spatial | L5_collision |
|--------------------------|-----------|------------|---------------|--------|---------|---------------|--------------|
| **GPT-4o**               | 72.39     | 64.54      | 58.04         | 48.87  | 43.62   | 43.06         | 44.54        |
| **GeminiPro-1.5**        | 69.40     | 66.73      | 55.12         | 51.41  | 44.50   | 43.11         | 44.73        |
| **Claude 3.5 Sonnet**    | 61.04     | 59.20      | 55.20         | 40.49  | 41.38   | 38.81         | 46.27        |
| **Qwen2-VL-7B-Instruct** | 62.84     | 58.90      | 53.73         | 26.85  | 26.83   | 36.20         | 34.84        |

---