<div align="center">


# LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning?

[Kexian Tang](https://scholar.google.com/citations?user=cXjomd8AAAAJ&hl=zh-CN&oi=ao)<sup>1,2\*</sup>,
[Junyao Gao](https://jeoyal.github.io/home/)<sup>1,2\*</sup>,
[Yanhong Zeng](https://zengyh1900.github.io)<sup>1†</sup>,
[Haodong Duan](https://kennymckormick.github.io/)<sup>1†</sup>,

[Yanan Sun](https://scholar.google.com/citations?user=6TA1oPkAAAAJ&hl=zh-CN&oi=ao)<sup>1</sup>,
[Zhening Xing](https://scholar.google.com/citations?hl=zh-CN&user=sVYO0GYAAAAJ)<sup>1</sup>,
[Wenran Liu](https://scholar.google.com/citations?hl=zh-CN&user=fwKOaD8AAAAJ)<sup>1</sup>,
[Kaifeng Lyu](https://kaifeng.ac/cn/)<sup>3‡</sup>,
[Kai Chen](https://chenkai.site/)<sup>1‡</sup>

<sub><sup>1</sup>Shanghai AI Laboratory <sup>2</sup>Tongji University <sup>3</sup>Tsinghua University</sub>  
<sub><sup>\*</sup>Equal contribution. <sup>†</sup>Project Leads. <sup>‡</sup>Corresponding Authors.</sub>  



<p align="center">
  <a href='https://tangkexian.github.io/LEGO-Puzzles/'>
    <img src='https://img.shields.io/badge/Project-Page-Green'>
  </a>
  <a href='https://arxiv.org/abs/2503.19990'>
    <img src='https://img.shields.io/badge/Paper-2503.19990-brown?style=flat&logo=arXiv' alt='arXiv PDF'>
  </a>
  <a href='https://opencompass.openxlab.space/utils/VLMEval/LEGO.tsv'>
    <img src='https://img.shields.io/badge/Data-tsv-blue?style=flat&logo=liquibase' alt='data img/data'>
  </a>



</p>
</div>

<div align="center">
  <img src='https://tangkexian.github.io/LEGO-Puzzles/static/images/teaser.png' width="100%">
</div>


## 🎉 News

- **\[2025/04/08\]** The benchmark and evaluation code have been released! Have fun 😃 .
- **\[2025/03/25\]** The paper is released.

## 📖 Introduction

In this work, we introduce **LEGO-Puzzles**, a scalable and systematic benchmark designed to evaluate Multi-step Spatial Reasoning in Multimodal Large Language Models (MLLMs). Inspired by how humans develop spatial cognition through construction, LEGO-Puzzles frames spatial understanding as a series of LEGO assembly tasks that challenge both visual perception and sequential reasoning.

To comprehensively assess spatial reasoning capabilities, LEGO-Puzzles is structured into three core task categories: **Spatial Understanding**, **Single-Step Sequential Reasoning**, and **Multi-Step Sequential Reasoning**. Each task requires models to understand visual inputs, perform step-by-step logical deduction, and maintain spatial consistency across sequences.

Furthermore, based on LEGO-Puzzles, we design **image generation tasks** to investigate whether MLLMs can transfer their spatial understanding and reasoning abilities to image generation.

We further introduce **LEGO-Puzzles-Lite**, a distilled subset tailored for human-model comparison, and a fine-grained evaluation suite named **Next-k-Step** to test reasoning scalability under increasing complexity.

Despite recent advances in multimodal modeling, our experiments reveal that current state-of-the-art MLLMs—while impressive—fall significantly short of human-level spatial reasoning, especially in multi-step and generative tasks.

**LEGO-Puzzles aims to establish a rigorous testbed for benchmarking spatial reasoning in MLLMs and to motivate the development of more spatially-aware multimodal systems.**

## 🔍 Dataset & Task Design

LEGO-Puzzles consists of **1,100 curated samples across 11 task types**, covering:

- 🧩 **Spatial Understanding** (36.4%)  
- 🔁 **Single-Step Sequential Reasoning** (36.4%)  
- 🧠 **Multi-Step Sequential Reasoning** (27.3%)

Each task is framed as a visual question-answering problem or a generation prompt grounded in realistic LEGO configurations, enabling precise and interpretable evaluation.
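
To get a quick feel for the data, you can pull the benchmark TSV (the same file linked by the Data badge above) and inspect it from the shell. This is a minimal sketch; the exact column layout should be verified against your downloaded copy:

```bash
# Download the benchmark TSV (same file as the Data badge above)
wget https://opencompass.openxlab.space/utils/VLMEval/LEGO.tsv

# Print the column headers, one per line (names may vary; verify locally)
head -n 1 LEGO.tsv | tr '\t' '\n'

# Count rows: roughly 1,100 samples plus one header line
wc -l LEGO.tsv
```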

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/statistic.png" width="100%">
</div>


## 🧪 Main Evaluation Results

We evaluate **20 cutting-edge MLLMs**, spanning both open-source and proprietary models. While GPT-4o and Gemini-2.0-Flash lead overall, their performance still trails behind human annotators, especially in tasks requiring 3D spatial alignment, rotation handling, and multi-step assembly tracking.

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/main_results.png" width="100%">
</div>


## 👤 Human vs Model Performance

To highlight the human-model performance gap, we compare top MLLMs against human annotators on **LEGO-Puzzles-Lite** (220 samples). Humans consistently outperform MLLMs by a wide margin, reaffirming the challenges of spatial reasoning in current AI systems.

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/LEGO-Puzzles-Lite_results.png" width="100%">
</div>


## 🎨 Image Generation Evaluation

We design **5 LEGO-based image generation tasks** that test a model's ability to simulate spatial transformations. Models must generate intermediate assembly states based on instructions, and human evaluators assess the outputs along two axes:

- 🎯 **Appearance Similarity**
- 🎯 **Instruction Following**

We evaluate GPT-4o, Gemini-2.0-Flash, GPT-4o* (the version released prior to March 6, 2025), Emu2, GILL, and Anole, scoring each model from 0 to 3 on both ***Appearance*** and ***Instruction Following***. Only **GPT-4o** and **Gemini-2.0-Flash** demonstrate partial success, while open-source models generally fail to produce structurally valid or instruction-aligned images.

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/Generation_results.png" width="100%">
</div>


## 🧠 Multi-Step Reasoning with Next-k-Step

We propose **Next-k-Step**, a fine-grained reasoning benchmark that challenges models to predict assembly states after *k* sequential steps. We analyze model performance under varying values of *k*, both with and without **Chain-of-Thought (CoT)** prompting. Results suggest CoT does not robustly enhance multi-step spatial reasoning.

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/next-k-step_results.png" width="100%">
</div>


## 🧷 Qualitative Samples

A few representative examples from LEGO-Puzzles are shown below, illustrating the diversity and complexity of the benchmark.

<div align="center">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/rotation_case.png" width="100%">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/multiview_case.png" width="100%">
  <img src="https://tangkexian.github.io/LEGO-Puzzles/static/images/dependency_case.png" width="100%">
</div>



## 🛠️ Quick Start

We have fully integrated **LEGO-Puzzles** into [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), a unified framework for benchmarking VLMs. You can easily evaluate your favorite multimodal models on LEGO-Puzzles with just a single command!

### Step 0. Installation

```bash
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
```
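
To confirm the install succeeded, you can try importing the package (assuming it exposes a version string, which may differ across releases):

```bash
# Quick sanity check that the vlmeval package is importable
python -c "import vlmeval; print(vlmeval.__version__)"
```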

### Step 1. Setup API Keys (Optional)

If you want to evaluate API-based models such as GPT-4o or Gemini-Pro-V, or use an LLM judge, configure the required keys in a `.env` file or export them as environment variables:

```bash
# Example .env (place it in VLMEvalKit root directory)
OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-api-key
# ...other optional keys
```
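
Alternatively, export the keys directly in your shell session:

```bash
export OPENAI_API_KEY=your-openai-key
export GOOGLE_API_KEY=your-google-api-key
```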

If no key is provided, VLMEvalKit falls back to exact-match scoring, which only works for Yes/No and multiple-choice questions.

### Step 2. Run Evaluation on LEGO-Puzzles

You can now run LEGO-Puzzles by simply setting the dataset name to `LEGO`.

**Inference + Evaluation**

```bash
python run.py --data LEGO --model <your_model_name> --verbose
# Example:
# python run.py --data LEGO --model idefics_80b_instruct --verbose
```

**Inference Only**

```bash
python run.py --data LEGO --model <your_model_name> --verbose --mode infer
# Example:
# python run.py --data LEGO --model idefics_80b_instruct --verbose --mode infer
```

**Multi-GPU Acceleration (Optional)**

```bash
torchrun --nproc-per-node=4 run.py --data LEGO --model <your_model_name> --verbose
# Example:
# torchrun --nproc-per-node=4 run.py --data LEGO --model idefics_80b_instruct --verbose
```
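
Note that `--data` and `--model` accept space-separated lists, so you can evaluate several models (or batch LEGO-Puzzles with other benchmarks) in one invocation. A sketch, where the placeholders stand for any model names registered in VLMEvalKit:

```bash
# Evaluate two models on LEGO-Puzzles in a single run
python run.py --data LEGO --model <model_a> <model_b> --verbose
```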


## Citation

If you find LEGO-Puzzles useful, please cite using this BibTeX:

```bibtex
@article{tang2025lego,
    title={LEGO-Puzzles: How Good Are MLLMs at Multi-Step Spatial Reasoning?},
    author={Tang, Kexian and Gao, Junyao and Zeng, Yanhong and Duan, Haodong and Sun, Yanan and Xing, Zhening and Liu, Wenran and Lyu, Kaifeng and Chen, Kai},
    journal={arXiv preprint arXiv:2503.19990},
    year={2025}
}
```