---
language:
  - en
license: mit
dataset_info:
  features:
    - name: Category
      dtype: string
    - name: Task
      dtype: string
    - name: Level
      dtype: string
    - name: Image_id
      dtype: string
    - name: Question
      dtype: string
    - name: Choices
      list: string
    - name: Answer
      dtype: string
    - name: Explanation
      dtype: string
  splits:
    - name: test
      num_bytes: 916235
      num_examples: 1180
  download_size: 63865
  dataset_size: 916235
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# Spatial Visualization Benchmark

This repository contains the Spatial Visualization Benchmark (SpatialViz-Bench).

The evaluation code is hosted at https://github.com/Anonymous285714/SpatialViz-Bench.

## Dataset Description

SpatialViz-Bench evaluates the spatial visualization capabilities of multimodal large language models; spatial visualization is a key component of spatial ability. Targeting four sub-abilities of spatial visualization (mental rotation, mental folding, visual penetration, and mental animation), we designed three tasks for each, forming a comprehensive evaluation suite of 12 tasks in total. Each task is divided into two or three difficulty levels, with each level containing 40 or 50 test cases, for a total of 1,180 question-answer pairs.

### Spatial Visualization

- Mental Rotation
  - 2D Rotation: two difficulty levels, based on paper size and pattern complexity.
  - 3D Rotation: two difficulty levels, based on the size of the cube stack.
  - Three-view Projection: two categories, orthographic views of cube stacks and orthographic views of part models.
- Mental Folding
  - Paper Folding: three difficulty levels, based on paper size, number of operations, and number of holes.
  - Cube Unfolding: three difficulty levels, based on pattern complexity (whether the pattern is centrally symmetric).
  - Cube Reconstruction: three difficulty levels, based on pattern complexity.
- Visual Penetration
  - Cross-Section: three difficulty levels, based on the number of combined objects and the cross-section direction.
  - Cube Count Inference: three difficulty levels, based on the number of reference views and the size of the cube stack.
  - Sliding Blocks: two difficulty levels, based on the size of the cube stack and the number of disassembled blocks.
- Mental Animation
  - Arrow Movement: two difficulty levels, based on the number of arrows and the number of operations.
  - Block Movement: two difficulty levels, based on the size of the cube stack and the number of movements.
  - Mechanical System: two difficulty levels, based on the complexity of the system structure.

## Dataset Usage

### Data Downloading

The `test-00000-of-00001.parquet` file contains the complete dataset annotations and can be loaded directly with HF Datasets:

```python
from datasets import load_dataset

SpatialViz_bench = load_dataset("Anonymous285714/SpatialViz-Bench")
```

Additionally, we provide the images in `*.zip`. The folder hierarchy is as follows:

```
./SpatialViz_Bench_images
├── MentalAnimation
│   ├── ArrowMoving
│   │   ├── Level0
│   │   └── Level1
│   ├── BlockMoving
│   │   ├── Level0
│   │   └── Level1
│   └── MechanicalSystem
│       ├── Level0
│       └── Level1
├── MentalFolding
│   ├── PaperFolding
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   ├── CubeReconstruction
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   └── CubeUnfolding
│       ├── Level0
│       ├── Level1
│       └── Level2
├── MentalRotation
│   ├── 2DRotation
│   │   ├── Level0
│   │   └── Level1
│   ├── 3DRotation
│   │   ├── Level0
│   │   └── Level1
│   └── 3ViewProjection
│       ├── Level0-Cubes3View
│       └── Level1-CAD3View
└── VisualPenetration
    ├── CrossSection
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    ├── CubeCounting
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    └── CubeAssembly
        ├── Level0
        └── Level1
```

### Data Format

The `image_path` for each sample can be constructed as follows:

```python
sample = SpatialViz_bench["test"][0]
print(sample)  # Print the first sample

category = sample["Category"]
task = sample["Task"]
level = sample["Level"]
image_id = sample["Image_id"]
question = sample["Question"]
choices = sample["Choices"]
answer = sample["Answer"]
explanation = sample["Explanation"]

image_path = f"./SpatialViz_Bench_images/{category}/{task}/{level}/{image_id}.png"
```
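The same path logic can be wrapped in a small helper, which is convenient when iterating over the whole split. This is an illustrative sketch, not part of the released evaluation code; the `image_path_for` name and the `sample` dict below are hypothetical, and the path only resolves once the images zip has been extracted next to your script.

```python
import os

def image_path_for(sample, root="./SpatialViz_Bench_images"):
    # Builds the on-disk location from the record fields, matching
    # the Category/Task/Level/Image_id folder hierarchy shown above
    return os.path.join(
        root, sample["Category"], sample["Task"],
        sample["Level"], f"{sample['Image_id']}.png",
    )

# Hypothetical sample mirroring the record shown in the Data Format section
sample = {
    "Category": "MentalAnimation",
    "Task": "ArrowMoving",
    "Level": "Level0",
    "Image_id": "0-3-3-2",
}
path = image_path_for(sample)
```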

The dataset is provided in Parquet format; each record contains the following attributes:

```json
{
    "Category": "MentalAnimation",
    "Task": "ArrowMoving",
    "Level": "Level0",
    "Image_id": "0-3-3-2",
    "Question": "In the diagram, the red arrow is the initial arrow, and the green arrow is the final arrow. The arrow can move in four directions (forward, backward, left, right), where 'forward' always refers to the current direction the arrow is pointing. After each movement, the arrow's direction is updated to the direction of movement. Which of the following paths can make the arrow move from the starting position to the ending position? Please answer from options A, B, C, or D.",
    "Choices": [
        "(Left, 2 units)--(Left, 1 unit)",
        "(Forward, 1 unit)--(Backward, 1 unit)",
        "(Forward, 1 unit)--(Backward, 2 units)",
        "(Forward, 1 unit)--(Left, 1 unit)"
    ],
    "Answer": "D",
    "Explanation": {
        "D": "Option D is correct because the initial arrow can be transformed into the final arrow.",
        "CAB": "Option CAB is incorrect because the initial arrow cannot be transformed into the final arrow."
    }
}
```
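Since `Choices` is an unlabeled list while `Answer` is a letter, a common preprocessing step is to label the choices A, B, C, ... when building a prompt, and to map the answer letter back to the option text. A minimal sketch, using a hypothetical `record` dict shaped like the example above (the prompt format itself is an assumption, not the official one):

```python
# Hypothetical record following the dataset schema shown above
record = {
    "Question": "Which of the following paths can make the arrow move from "
                "the starting position to the ending position? "
                "Please answer from options A, B, C, or D.",
    "Choices": [
        "(Left, 2 units)--(Left, 1 unit)",
        "(Forward, 1 unit)--(Backward, 1 unit)",
        "(Forward, 1 unit)--(Backward, 2 units)",
        "(Forward, 1 unit)--(Left, 1 unit)",
    ],
    "Answer": "D",
}

# Label the choices A, B, C, ... and assemble an MCQ prompt
letters = [chr(ord("A") + i) for i in range(len(record["Choices"]))]
choice_lines = [f"{l}. {c}" for l, c in zip(letters, record["Choices"])]
prompt = record["Question"] + "\n" + "\n".join(choice_lines)

# Recover the text of the correct option from the answer letter
correct_text = record["Choices"][letters.index(record["Answer"])]
```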

## Evaluation Metric

Since most answer options are depicted in the reference images, all tasks are formulated as multiple-choice questions, each with exactly one correct answer. Model performance is measured by response accuracy.
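The metric reduces to a straightforward accuracy computation. A minimal sketch, assuming the model's responses have already been parsed into single answer letters aligned with the ground-truth `Answer` field (the toy lists below are illustrative, not real results):

```python
def accuracy(predictions, answers):
    # Fraction of predicted answer letters that match the ground truth
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy example with hypothetical predictions
answers = ["D", "A", "C", "B"]
predictions = ["D", "B", "C", "B"]
print(accuracy(predictions, answers))  # 0.75
```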