---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- robotics
- visual-question-answering
- image-to-text
tags:
- spatial-reasoning
- 3d-scenes
- vision-language
- benchmark
---

# Theory of Space: Visual Scene Dataset

This dataset provides pre-rendered 3D multi-room environments for evaluating spatial reasoning in Vision Language Models (VLMs). It is designed to support the **Theory of Space (ToS)** benchmark, which tests whether foundation models can actively construct spatial beliefs through exploration.

**Paper**: [Theory of Space: Can Foundation Models Construct Spatial Beliefs through Active Exploration?](https://huggingface.co/papers/2602.07055)  
**Project Page**: [https://theory-of-space.github.io](https://theory-of-space.github.io)  
**GitHub Repository**: [https://github.com/mll-lab-nu/Theory-of-Space](https://github.com/mll-lab-nu/Theory-of-Space)

## Dataset Overview

| Property | Value |
|----------|-------|
| Rooms | 3 |
| Total Runs | 100 |
| Objects per Room | 4 |
| Includes False-Belief Data | Yes |

## Usage

### Download
Download via Hugging Face CLI:

```bash
# Optionally export a Hugging Face token to avoid 429 rate limits
# export HF_TOKEN=
hf download MLL-Lab/tos-data --repo-type dataset --local-dir room_data
```

Or use the ToS setup script, which downloads the dataset automatically:

```bash
git clone --single-branch --branch release https://github.com/mll-lab-nu/Theory-of-Space.git
cd Theory-of-Space
source setup.sh
```

### Sample Usage
To run a full pipeline evaluation (explore + eval + cogmap) using the provided scripts:

```bash
python scripts/SpatialGym/spatial_run.py \
  --phase all \
  --model-name gpt-5.2 \
  --num 25 \
  --data-dir room_data/3-room/ \
  --output-root result/ \
  --render-mode vision,text \
  --exp-type active,passive \
  --inference-mode batch
```

## File Structure

```
tos-data/
└── runXX/                              # 100 runs (run00 - run99)
    ├── meta_data.json                  # Scene metadata (layout, objects, positions)
    ├── falsebelief_exp.json            # False-belief experiment data
    ├── top_down.png                    # Top-down view of the scene
    ├── top_down_annotated.png          # Annotated top-down view
    ├── top_down_fbexp.png              # Top-down view (false-belief state)
    ├── agent_facing_*.png              # Agent perspective images (north/south/east/west)
    ├── <object_id>_facing_*.png        # Object/door camera views
    ├── *_fbexp.png                     # False-belief experiment images
    └── top_down/
        └── img_0000.png                # Additional top-down renders
```

## File Descriptions

| File | Description |
|------|-------------|
| `meta_data.json` | Complete scene metadata including room layout, object positions, orientations, and connectivity |
| `falsebelief_exp.json` | Specifies object modifications (move/rotate) for belief update evaluation |
| `agent_facing_*.png` | Egocentric views from agent's perspective in 4 cardinal directions |
| `<object_id>_facing_*.png` | Views from each object/door position |
| `*_fbexp.png` | Images rendered after false-belief modifications |
| `top_down*.png` | Bird's-eye view for visualization and debugging |
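As a sketch of how a single run might be consumed in Python, the snippet below loads a run's metadata and false-belief spec and lists its rendered images. The helper name `load_run` and the sample field names (`rooms`, `modifications`) are illustrative assumptions, not the dataset's actual schema; consult `meta_data.json` in a downloaded run for the real fields.

```python
import json
import tempfile
from pathlib import Path


def load_run(run_dir):
    """Load scene metadata and the false-belief spec for one run.

    Assumes the layout described above: meta_data.json and
    falsebelief_exp.json sit at the top of each runXX/ directory.
    """
    run = Path(run_dir)
    meta = json.loads((run / "meta_data.json").read_text())
    fb = json.loads((run / "falsebelief_exp.json").read_text())
    images = sorted(p.name for p in run.glob("*.png"))
    return meta, fb, images


# Demo against a mock run directory; the JSON contents here are
# placeholders, not the dataset's actual schema.
with tempfile.TemporaryDirectory() as tmp:
    run = Path(tmp) / "run00"
    run.mkdir()
    (run / "meta_data.json").write_text(json.dumps({"rooms": 3}))
    (run / "falsebelief_exp.json").write_text(json.dumps({"modifications": []}))
    (run / "top_down.png").write_bytes(b"")
    meta, fb, images = load_run(run)
    print(meta["rooms"], images)  # → 3 ['top_down.png']
```

The same helper works unchanged on a real `room_data/.../runXX` directory once the dataset has been downloaded.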

## Citation

```bibtex
@inproceedings{zhang2026theoryofspace,
  title     = {Theory of Space: Can Foundation Models Construct Spatial Beliefs through Active Exploration?},
  author    = {Zhang, Pingyue and Huang, Zihan and Wang, Yue and Zhang, Jieyu and Xue, Letian and Wang, Zihan and Wang, Qineng and Chandrasegaran, Keshigeyan and Zhang, Ruohan and Choi, Yejin and Krishna, Ranjay and Wu, Jiajun and Li, Fei-Fei and Li, Manling},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026},
}
```