---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: SSI-Bench
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path: SSI_Bench.parquet
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: image
    sequence: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: annotation_color
    dtype: string
  - name: category
    dtype: string
  - name: task
    dtype: string
---
# Thinking in Structures: Evaluating Spatial Intelligence through Reasoning on Constrained Manifolds
SSI-Bench is constructed from complex real-world 3D structures, where feasible configurations are tightly governed by geometric, topological, and physical constraints.
- Project page: https://ssi-bench.github.io/
- Code & evaluation: https://github.com/ccyydd/SSI-Bench
- Paper: https://arxiv.org/abs/2602.07864
## News

- 🔥 [2026-02-10]: We released our paper, benchmark, and evaluation code.
## Load Dataset

```python
from datasets import load_dataset

dataset = load_dataset("cyang203912/SSI-Bench")
print(dataset)
```
Alternatively, after downloading the Parquet file directly, read each record, decode the images from their binary representation, and save them as JPG files:
```python
import os

import pandas as pd

df = pd.read_parquet("SSI_Bench.parquet")

output_dir = "./images"
os.makedirs(output_dir, exist_ok=True)

for _, row in df.iterrows():
    index_val = row["index"]
    images = row["image"]
    question = row["question"]
    answer = row["answer"]
    annotation_color = row["annotation_color"]
    category = row["category"]
    task = row["task"]

    image_paths = []
    if images is not None:
        for n, img_data in enumerate(images):
            # Each element may be raw bytes or a {"bytes": ..., "path": ...}
            # struct, depending on how the Parquet file was written.
            img_bytes = img_data["bytes"] if isinstance(img_data, dict) else img_data
            image_path = os.path.join(output_dir, f"{index_val}_{n}.jpg")
            with open(image_path, "wb") as f:
                f.write(img_bytes)
            image_paths.append(image_path)

    print(f"index: {index_val}")
    print(f"image: {image_paths}")
    print(f"question: {question}")
    print(f"answer: {answer}")
    print(f"annotation_color: {annotation_color}")
    print(f"category: {category}")
    print(f"task: {task}")
    print("-" * 50)
```
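With the records in a DataFrame, it is also easy to slice the benchmark by `category` or `task` using ordinary pandas filtering. A minimal sketch; the tiny frame and its category/task values below are illustrative stand-ins, and in practice you would reuse the frame loaded from `SSI_Bench.parquet`:

```python
import pandas as pd

# Illustrative only: a toy frame with the same column names as
# SSI_Bench.parquet (the category/task values here are made up).
df = pd.DataFrame(
    {
        "index": [0, 1, 2],
        "question": ["q0", "q1", "q2"],
        "answer": ["A", "B", "C"],
        "category": ["topology", "geometry", "topology"],
        "task": ["counting", "routing", "counting"],
    }
)

# Keep only one slice of the benchmark, e.g. a single category.
topology_df = df[df["category"] == "topology"]
print(topology_df["index"].tolist())  # → [0, 2]
```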
## Usage
To evaluate, follow the scripts in the code repository: https://github.com/ccyydd/SSI-Bench.
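The official metrics come from the scripts in that repository. As a rough sketch of what the reported average means, accuracy is the fraction of model predictions that exactly match the gold answers; the `accuracy` helper and the letter-style answers below are assumptions for illustration, not the repository's implementation:

```python
def accuracy(predictions, answers):
    """Fraction of predictions that exactly match the gold answers
    (case- and whitespace-insensitive)."""
    correct = sum(
        p.strip().upper() == a.strip().upper()
        for p, a in zip(predictions, answers)
    )
    return correct / len(answers)

# Two of three hypothetical predictions match their gold answers.
print(accuracy(["A", "b ", "C"], ["A", "B", "D"]))  # → 0.6666666666666666
```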
## Leaderboard
| Model | Avg. (%) | Type |
|---|---|---|
| Human Performance | 91.60 | Baseline |
| Gemini-3-Flash | 33.60 | Proprietary |
| Gemini-3-Pro | 29.50 | Proprietary |
| GPT-5.2 | 29.10 | Proprietary |
| Gemini-2.5-Pro | 26.10 | Proprietary |
| GPT-5 mini | 25.90 | Proprietary |
| Seed-1.8 | 25.90 | Proprietary |
| GPT-4o | 22.60 | Proprietary |
| GPT-4.1 | 22.40 | Proprietary |
| Gemini-2.5-Flash | 22.30 | Proprietary |
| GLM-4.6V | 22.20 | Open-source |
| Qwen3-VL-235B-A22B | 21.90 | Open-source |
| GLM-4.5V | 21.40 | Open-source |
| GLM-4.6V-Flash | 21.10 | Open-source |
| Qwen3-VL-4B | 20.70 | Open-source |
| InternVL3.5-30B-A3B | 20.70 | Open-source |
| Qwen3-VL-30B-A3B | 20.60 | Open-source |
| Llama-4-Scout-17B-16E | 20.60 | Open-source |
| Gemma-3-27B | 20.50 | Open-source |
| InternVL3.5-8B | 20.20 | Open-source |
| Claude-Sonnet-4.5 | 19.90 | Proprietary |
| Gemma-3-4B | 19.70 | Open-source |
| Qwen3-VL-8B | 19.20 | Open-source |
| Qwen3-VL-2B | 19.20 | Open-source |
| InternVL3.5-38B | 19.00 | Open-source |
| InternVL3.5-241B-A28B | 18.30 | Open-source |
| InternVL3.5-14B | 17.90 | Open-source |
| Gemma-3-12B | 17.30 | Open-source |
| LLaVA-Onevision-72B | 17.20 | Open-source |
| InternVL3.5-4B | 16.80 | Open-source |
| LLaVA-Onevision-7B | 16.50 | Open-source |
| Random Guessing | 12.85 | Baseline |
| InternVL3.5-2B | 11.10 | Open-source |
## Citation

```bibtex
@article{yang2026thinking,
  title={Thinking in Structures: Evaluating Spatial Intelligence through Reasoning on Constrained Manifolds},
  author={Chen Yang and Guanxin Lin and Youquan He and Peiyao Chen and Guanghe Liu and Yufan Mo and Zhouyuan Xu and Linhao Wang and Guohui Zhang and Zihang Zhang and Shenxiang Zeng and Chen Wang and Jiansheng Fan},
  journal={arXiv preprint arXiv:2602.07864},
  year={2026}
}
```