[](https://github.com/EmbodiedCity/BasicSpatialAbility.code)
[](https://huggingface.co/datasets/EmbodiedCity/BasicSpatialAbility)
This dataset is a benchmark for evaluating the Basic Spatial Abilities of Multimodal Large Language Models, grounded in established psychometric theory. It is structured specifically to support both **Zero-shot** and **Few-shot** evaluation protocols.
## 📂 Dataset Structure (Important)
The dataset is organized into two distinct splits. **Please read this carefully to ensure valid evaluation results.**
| Split Name | Role | Description |
| :--- | :--- | :--- |
| **`test`** | **Query Set** | Contains the actual benchmark questions (images & queries) to be evaluated. <br>⚠️ **Evaluation Only.** Do not use for training or as few-shot examples. |
| **`validation`** | **Support Set** | Contains high-quality examples intended to be used as **Few-shot Prompts (In-Context Learning)**. <br>These samples should be prepended to the test queries to demonstrate the task to the model. |
---
## ⚙️ Usage & Evaluation Protocol
You can load the dataset using the Hugging Face `datasets` library.
### 1. Zero-Shot Evaluation
**Logic:** Directly evaluate the model on the `test` split without any prior examples.
```python
from datasets import load_dataset

# Load the evaluation queries
test_dataset = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")

for sample in test_dataset:
    image = sample['image']
    question = sample['question']
    # Model inference: P(answer | image, question)...
```
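Once responses are collected, scoring can be as simple as matching the extracted option against the gold label. A minimal sketch, assuming multiple-choice questions with letter answers and an `answer` field holding the gold letter (both the field name and the answer format are assumptions, not guaranteed by the dataset):

```python
import re

def extract_choice(response):
    """Pull the first standalone option letter (A-E) out of a model response."""
    match = re.search(r"\b([A-E])\b", response.upper())
    return match.group(1) if match else None

def score(predictions, gold):
    """Fraction of responses whose extracted choice matches the gold label."""
    hits = sum(
        extract_choice(pred) == ans.strip().upper()
        for pred, ans in zip(predictions, gold)
    )
    return hits / len(gold)

# Illustrative responses, not real model output
preds = ["The answer is B.", "A", "I think (C) is correct."]
labels = ["B", "A", "D"]
print(score(preds, labels))  # 2 of 3 correct
```

The lenient regex avoids penalizing models that answer in a full sentence instead of a bare letter; adapt it if your prompts enforce a stricter output format.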
### 2. Few-Shot Evaluation
**Logic:** Use examples from the `validation` split as in-context demonstrations, followed by the query from the `test` split.
1. Load the `validation` split.
2. Format its examples into the prompt history.
3. Append the target question from the `test` split.
4. Run model inference on the combined prompt.
```python
from datasets import load_dataset

# 1. Load the support set (demonstrations)
support_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="validation")

# 2. Load the query set (evaluation)
test_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")

# Pseudo-code for prompt construction
prompt_context = []
for ex in support_set:
    prompt_context.append(f"User: {ex['question']}\nAssistant: {ex['answer']}")

# 3. Evaluate on Test Set
for sample in test_set:
    # Combine context + current test question
    final_prompt = prompt_context + [f"User: {sample['question']}"]

    # Model inference...
```
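For chat-style VLM APIs, the demonstrations are usually assembled into an interleaved message list rather than one flat string. A minimal, model-agnostic sketch: the `question`/`answer` field names follow the pseudo-code above, while the message schema and the dummy samples are illustrative assumptions to be adapted to your API:

```python
def build_messages(demos, query_question):
    """Interleave few-shot demonstrations as user/assistant turns,
    ending with the unanswered test query."""
    messages = []
    for ex in demos:
        messages.append({"role": "user", "content": ex["question"]})
        messages.append({"role": "assistant", "content": ex["answer"]})
    messages.append({"role": "user", "content": query_question})
    return messages

# Dummy demonstrations standing in for validation-split samples
demos = [
    {"question": "Which arrow points left?", "answer": "A"},
    {"question": "Is the cube rotated 90 degrees?", "answer": "Yes"},
]
msgs = build_messages(demos, "Which shape is the odd one out?")
print(len(msgs))  # 2 demos x 2 turns + 1 query = 5 messages
```

When the API supports images in messages, attach each demonstration's image alongside its question in the same user turn, so the model sees complete image-question-answer triples.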
---
## 🔬 Underlying Theory
The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial Orientation, Mental Rotation, and Spatial Visualization. Benchmarking 13 mainstream VLMs through nine validated psychometric experiments reveals significant gaps versus humans, with three key findings: 1) VLMs mirror human hierarchies (strongest in 2D orientation, weakest in 3D rotation) with independent BSAs; 2) Many smaller models surpass larger counterparts, with Qwen leading and InternVL2 lagging; 3) Interventions like CoT and few-shot training show limits from architectural constraints, while ToT demonstrates the most effective enhancement. Identified barriers include weak geometry encoding and missing dynamic simulation. By linking Psychometrics to VLMs, we provide a comprehensive BSA evaluation benchmark, a methodological perspective for embodied AI development, and a cognitive science-informed roadmap for achieving human-like spatial intelligence.