---
dataset_info:
  - config_name: 2d_text_instruct
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 27003938
        num_examples: 333
    download_size: 25554010
    dataset_size: 27003938
  - config_name: 2d_text_instruct_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 19596835
        num_examples: 219
    download_size: 17674709
    dataset_size: 19596835
  - config_name: 2d_va
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 39415795
        num_examples: 306
    download_size: 33389773
    dataset_size: 39415795
  - config_name: 2d_va_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 29400425
        num_examples: 204
    download_size: 24382687
    dataset_size: 29400425
  - config_name: 3d_text_instruct
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 780204029
        num_examples: 306
    download_size: 735537828
    dataset_size: 780204029
  - config_name: 3d_text_instruct_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 622694539
        num_examples: 204
    download_size: 534048417
    dataset_size: 622694539
  - config_name: 3d_va
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 1403998897
        num_examples: 306
    download_size: 931037779
    dataset_size: 1403998897
  - config_name: 3d_va_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 1142163774
        num_examples: 204
    download_size: 916512970
    dataset_size: 1142163774
  - config_name: folding_nets
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 2178828
        num_examples: 193
    download_size: 1272742
    dataset_size: 2178828
  - config_name: folding_nets_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 5146471
        num_examples: 120
    download_size: 4186214
    dataset_size: 5146471
  - config_name: perspective
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 91751332
        num_examples: 250
    download_size: 91741651
    dataset_size: 91751332
  - config_name: tangram_puzzle
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 24414049
        num_examples: 532
    download_size: 21586775
    dataset_size: 24414049
  - config_name: tangram_puzzle_vsim
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 43015112
        num_examples: 289
    download_size: 40223367
    dataset_size: 43015112
  - config_name: temporal
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: images
        sequence: image
      - name: other_info
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 226601655
        num_examples: 471
    download_size: 226586994
    dataset_size: 226601655
configs:
  - config_name: 2d_text_instruct
    data_files:
      - split: test
        path: 2d_text_instruct/test-*
  - config_name: 2d_text_instruct_vsim
    data_files:
      - split: test
        path: 2d_text_instruct_vsim/test-*
  - config_name: 2d_va
    data_files:
      - split: test
        path: 2d_va/test-*
  - config_name: 2d_va_vsim
    data_files:
      - split: test
        path: 2d_va_vsim/test-*
  - config_name: 3d_text_instruct
    data_files:
      - split: test
        path: 3d_text_instruct/test-*
  - config_name: 3d_text_instruct_vsim
    data_files:
      - split: test
        path: 3d_text_instruct_vsim/test-*
  - config_name: 3d_va
    data_files:
      - split: test
        path: 3d_va/test-*
  - config_name: 3d_va_vsim
    data_files:
      - split: test
        path: 3d_va_vsim/test-*
  - config_name: folding_nets
    data_files:
      - split: test
        path: folding_nets/test-*
  - config_name: folding_nets_vsim
    data_files:
      - split: test
        path: folding_nets_vsim/test-*
  - config_name: perspective
    data_files:
      - split: test
        path: perspective/test-*
  - config_name: tangram_puzzle
    data_files:
      - split: test
        path: tangram_puzzle/test-*
  - config_name: tangram_puzzle_vsim
    data_files:
      - split: test
        path: tangram_puzzle_vsim/test-*
  - config_name: temporal
    data_files:
      - split: test
        path: temporal/test-*
task_categories:
  - image-text-to-text
---

# STARE

Evaluating Multimodal Models on Visual Simulations

*Figure: an overview of STARE.*

## 😳 STARE: Unfolding Spatial Cognition

STARE is structured to comprehensively cover spatial reasoning at multiple complexity levels, from basic geometric transformations (2D and 3D) to more integrated tasks (cube net folding and tangram puzzles) and real-world spatial reasoning scenarios (temporal frame and perspective reasoning). Each task is presented as a multiple-choice or yes/no question using carefully designed visual and textual prompts. In total, the dataset contains about 4K instances across different evaluation setups.


*Figure: visual simulation of a cube net folding task reveals the challenges of spatial reasoning.*

Models exhibit significant variation in spatial reasoning performance across STARE tasks. Accuracy is highest on simple 2D transformations (up to 87.7%) but drops substantially for 3D tasks and multi-step reasoning (e.g., cube nets, tangrams), often nearing chance. Visual simulations generally improve performance, though inconsistently across models. The reasoning-optimized o1 model performs best overall with VisSim, yet still lags behind humans. Human participants consistently outperform models, confirming the complexity of STARE tasks.

## 📖 Dataset Usage

You can load any subset with the following command (here, the `folding_nets` config is used as an example):

```python
from datasets import load_dataset

dataset = load_dataset("kuvvi/STARE", "folding_nets", split="test")
```
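The available config names correspond to the `config_name` entries in the metadata above. As a convenience, they can be collected in a small helper like the sketch below (the static list is copied from the metadata; `load_subset` is a hypothetical helper, and actually calling it downloads the data):

```python
# Config names, copied from the dataset metadata above.
CONFIGS = [
    "2d_text_instruct", "2d_text_instruct_vsim",
    "2d_va", "2d_va_vsim",
    "3d_text_instruct", "3d_text_instruct_vsim",
    "3d_va", "3d_va_vsim",
    "folding_nets", "folding_nets_vsim",
    "perspective",
    "tangram_puzzle", "tangram_puzzle_vsim",
    "temporal",
]

def load_subset(name: str):
    """Load one STARE subset; every config ships a single 'test' split."""
    from datasets import load_dataset  # pip install datasets
    if name not in CONFIGS:
        raise ValueError(f"unknown config: {name}")
    return load_dataset("kuvvi/STARE", name, split="test")
```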

### Data Format

The dataset is provided in JSONL format, and each record contains the following attributes:

```
{
    "qid": [string] Problem ID, e.g., "2d_va_vsim_001",
    "question": [string] The question text,
    "answer": [string] The correct answer to the problem,
    "images": [list] The images the problem requires,
    "other_info": [string] Additional information about the question,
    "category": [string] The category of the problem, e.g., "2D_text_instruction",
}
```
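As a concrete illustration, a record following this schema might look like the sketch below; the field values are invented for illustration and are not taken from the dataset:

```python
# A hypothetical record following the schema above (values are invented).
record = {
    "qid": "folding_nets_001",
    "question": "Which cube can be folded from the net shown?",
    "answer": "B",
    "images": [],            # PIL images when loaded via `datasets`
    "other_info": "{}",      # free-form string with extra metadata
    "category": "folding_nets",
}

# Every record carries exactly these six fields.
expected = {"qid", "question", "answer", "images", "other_info", "category"}
assert set(record) == expected
```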

## Requirements

```shell
git clone https://github.com/STARE-bench/STARE.git
cd STARE
pip install -e .
```

## 📈 Evaluation

### Response Generation

Our repository supports the evaluation of open-source models such as Qwen2-VL, InternVL, and LLaVA, as well as closed-source models such as GPT, Gemini, and Claude. You can generate responses from these models with the following commands:

Open-source Model:

```shell
python generate_response.py \
    --dataset_name 'kuvvi/STARE' \
    --split 'test' \
    --category '2D_text_instruct_VSim' \
    --strategy 'CoT' \
    --config_path 'configs/gpt.yaml' \
    --model_path 'path_to_your_local_model' \
    --output_path 'path_to_output_json_file' \
    --max_tokens 4096 \
    --temperature 0.7 \
    --save_every 20
```

Closed-source Model:

```shell
python generate_response.py \
    --dataset_name 'kuvvi/STARE' \
    --split 'test' \
    --category '2D_text_instruct_VSim' \
    --config_path 'configs/gpt.yaml' \
    --model 'remote-model-name' \
    --api_key '' \
    --output_path 'path_to_output_file_name.json' \
    --max_tokens 4096 \
    --temperature 0 \
    --save_every 20
```

### Score Calculation

Finally, run `python evaluation/calculate_acc.py` to calculate the final score from the generated responses. This step computes overall accuracy as well as accuracy for each subject, category, and task.
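The exact scoring logic lives in `evaluation/calculate_acc.py` in the repository; a minimal sketch of per-category accuracy over prediction records (the record layout and field names here are assumptions for illustration, not the script's actual format) could look like:

```python
from collections import defaultdict

def per_category_accuracy(results):
    """Compute (overall_acc, {category: acc}) from prediction records.

    `results` is a list of dicts with hypothetical 'category',
    'prediction', and 'answer' keys; matching is case-insensitive.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in results:
        total[r["category"]] += 1
        if r["prediction"].strip().lower() == r["answer"].strip().lower():
            correct[r["category"]] += 1
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    per_cat = {c: correct[c] / total[c] for c in total}
    return overall, per_cat

# Toy records: two 2d_va predictions (one correct) and one correct
# folding_nets prediction.
results = [
    {"category": "2d_va", "prediction": "A", "answer": "A"},
    {"category": "2d_va", "prediction": "B", "answer": "C"},
    {"category": "folding_nets", "prediction": "yes", "answer": "Yes"},
]
overall, per_cat = per_category_accuracy(results)
```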

## 📝 Citation

If you find our benchmark useful in your research, please consider citing the following BibTeX entry:

```bibtex
@misc{li2025unfoldingspatialcognitionevaluating,
      title={Unfolding Spatial Cognition: Evaluating Multimodal Models on Visual Simulations},
      author={Linjie Li and Mahtab Bigverdi and Jiawei Gu and Zixian Ma and Yinuo Yang and Ziang Li and Yejin Choi and Ranjay Krishna},
      year={2025},
      eprint={2506.04633},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.04633},
}
```