---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- abstract
- visual
- reasoning
- real-world
size_categories:
- 10K<n<100K
pretty_name: SpaCE-Eval

configs:
- config_name: default
  data_files:
  - split: test
    path: "data/*.parquet"

---
# SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning

Welcome to the official repository of SpaCE-Eval!


The [paper](https://openreview.net/forum?id=VAEkLS9VBr&noteId=QSQY2kkQHy) has been accepted at ICLR 2026.


Code is available at [https://github.com/xuyou-yang/SpaCE-Eval](https://github.com/xuyou-yang/SpaCE-Eval).

## About the Benchmark

This benchmark provides a comprehensive evaluation of multi-modal large language models (MLLMs) across the following categories:
- Spatial Reasoning
- Commonsense Knowledge
- Environment Interaction

The dataset consists of newly created diagrams paired with questions; each image-question pair is carefully curated through a standardized annotation and filtering pipeline.
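
Since the card's `configs` section declares a single `default` config whose `test` split is stored as Parquet files, the dataset should be loadable with the Hugging Face `datasets` library. Below is a minimal sketch; the repository id `xuyou-yang/SpaCE-Eval` is inferred from the GitHub link above and the column layout is not specified by this card, so inspect the schema after loading:

```python
from datasets import load_dataset

# Load the test split declared in this card's `configs` section.
# NOTE: the repository id is an assumption inferred from the GitHub handle;
# replace it with the actual Hugging Face dataset id if it differs.
ds = load_dataset("xuyou-yang/SpaCE-Eval", split="test")

print(ds)               # features and number of rows
print(ds.column_names)  # check the actual schema before indexing fields

example = ds[0]         # one image-question pair from the test split
```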


### Citation

```bibtex
@inproceedings{yang2026spaceeval,
  title     = {SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning},
  author    = {Yang, Xuyou and Zhao, Yucheng and Zhang, Wenxuan and Koh, Immanuel},
  booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
  year      = {2026}
}
```