---
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - abstract
  - visual
  - reasoning
  - real-world
size_categories:
  - 10K<n<100K
pretty_name: SpaCE-Eval
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/*.parquet
---

# SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning

Welcome to the official codebase of SpaCE-Eval!

The paper has been accepted to ICLR 2026.

Code is available at: https://github.com/xuyou-yang/SpaCE-Eval

## About the Benchmark

This benchmark provides a comprehensive evaluation of multi-modal large language models (MLLMs) across the following categories:

- Spatial Reasoning
- Commonsense Knowledge
- Environment Interaction

The dataset consists of newly created diagrams organized as image-question pairs, carefully curated through a standardized annotation and filtering pipeline.
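As a minimal sketch, the test split (stored as Parquet under `data/`, per the config above) can be loaded with the Hugging Face `datasets` library. The repository id `XuyouYang/SpaCE-Eval` below is an assumption inferred from the uploader's name; substitute the actual Hub id of this dataset:

```python
# Sketch: load the SpaCE-Eval test split via the `datasets` library.
# NOTE: the repo id "XuyouYang/SpaCE-Eval" is an assumption, not confirmed
# by this card; replace it with the dataset's actual Hub id.
from datasets import load_dataset

ds = load_dataset("XuyouYang/SpaCE-Eval", split="test")

# Each row is an image-question pair; inspect the schema before iterating.
print(ds.features)
print(ds[0])
```

Requires network access (and `pip install datasets`); the `split="test"` argument matches the single split declared in the YAML config.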

## Citation

```bibtex
@inproceedings{yang2026spaceeval,
  title     = {SpaCE-Eval: A Benchmark for Real-World Multi-Modal Reasoning},
  author    = {Yang, Xuyou and Zhao, Yucheng and Zhang, Wenxuan and Koh, Immanuel},
  booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
  year      = {2026}
}
```