---
license: mit
task_categories:
  - image-text-to-text
language:
  - en
tags:
  - visual-reasoning
  - synthetic
  - multimodal
  - benchmark
  - vision-language
arxiv: 2511.20814
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: eval
        path: data/eval-*
dataset_info:
  features:
    - name: images
      list: image
    - name: problem
      dtype: string
    - name: answer
      dtype: string
    - name: task
      dtype: string
  splits:
    - name: train
      num_bytes: 1511015259
      num_examples: 32000
    - name: eval
      num_bytes: 135942602
      num_examples: 2500
  download_size: 1625026463
  dataset_size: 1646957861
---

# SPHINX: A Synthetic Environment for Visual Perception and Reasoning

SPHINX is a synthetic multimodal reasoning benchmark and data-generation environment built around verifiable visual reasoning tasks. It contains procedurally generated puzzles spanning symmetry, geometric transformations, spatial reasoning, chart interpretation, and sequence prediction, each paired with a ground-truth answer. The dataset is designed both for precise evaluation of vision-language models and for large-scale supervised or reinforcement-learning-style post-training.

## Dataset Summary

- Training examples: 32,000
- Evaluation examples: 2,500
- Task families: 25
- Modalities: image + text question → text answer

Each example contains:

- `images`: one or more images associated with the problem
- `problem`: the natural-language question or instruction
- `answer`: the verified ground-truth answer
- `task`: the name of the task family that generated the example

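Given that schema, basic dataset statistics reduce to grouping records by the `task` field. A minimal sketch (the dummy records below are illustrative placeholders, not real SPHINX examples, which carry actual images):

```python
from collections import Counter

def task_distribution(records):
    """Count how many examples each task family contributes.

    Each record is expected to follow the SPHINX schema:
    keys "images", "problem", "answer", and "task".
    """
    return Counter(r["task"] for r in records)

# Dummy records for illustration only.
records = [
    {"images": [], "problem": "Complete the pattern.", "answer": "B", "task": "symmetry"},
    {"images": [], "problem": "Count the tiles.", "answer": "7", "task": "counting"},
    {"images": [], "problem": "Mirror the shape.", "answer": "C", "task": "symmetry"},
]
print(task_distribution(records))  # Counter({'symmetry': 2, 'counting': 1})
```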
## What SPHINX Covers

SPHINX includes 25 procedurally generated tasks across several core reasoning categories:

- symmetry and pattern completion
- geometric and spatial reasoning
- chart and proportion understanding
- transformations and analogical matching
- arithmetic and visual sequence prediction
- tile-based composition, counting, and path reasoning

The emphasis is on controlled synthetic tasks with verifiable answers, so model performance can be evaluated more cleanly than on open-ended real-world datasets alone.
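Because every answer is verifiable, scoring can reduce to normalized exact match. A minimal scorer sketch, where the normalization rules (lowercasing, whitespace collapsing) are an assumption rather than an official SPHINX protocol:

```python
def normalize(ans: str) -> str:
    # Lowercase, trim, and collapse internal whitespace before comparing.
    return " ".join(ans.strip().lower().split())

def exact_match(prediction: str, answer: str) -> bool:
    """Return True when the prediction matches the ground truth after normalization."""
    return normalize(prediction) == normalize(answer)

print(exact_match("  B ", "b"))     # True
print(exact_match("7 tiles", "7"))  # False
```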

## Loading the Dataset

```python
from datasets import load_dataset

train_ds = load_dataset("maveryn/sphinx", split="train")
eval_ds = load_dataset("maveryn/sphinx", split="eval")

# Inspect the schema and a sample problem.
print(train_ds[0].keys())
print(train_ds[0]["task"])
print(train_ds[0]["problem"])
print(train_ds[0]["answer"])
```

## Typical Usage

SPHINX can be used for:

- benchmarking multimodal reasoning systems
- analyzing per-task failure modes in vision-language models
- building synthetic training data for multimodal post-training
- evaluating transfer from verifiable synthetic reasoning to external benchmarks
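For per-task failure analysis, aggregating exact-match accuracy by the `task` field is usually sufficient. A sketch under the assumption that predictions arrive as a list of answer strings parallel to the examples (the dummy data below is illustrative only):

```python
from collections import defaultdict

def per_task_accuracy(examples, predictions):
    """Aggregate exact-match accuracy per task family.

    `examples` follow the SPHINX schema; `predictions` is a parallel
    list of model answer strings.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex, pred in zip(examples, predictions):
        total[ex["task"]] += 1
        correct[ex["task"]] += int(pred.strip().lower() == ex["answer"].strip().lower())
    return {task: correct[task] / total[task] for task in total}

# Dummy data for illustration only.
examples = [
    {"problem": "?", "answer": "B", "task": "symmetry"},
    {"problem": "?", "answer": "7", "task": "counting"},
    {"problem": "?", "answer": "C", "task": "symmetry"},
]
print(per_task_accuracy(examples, ["B", "3", "C"]))
# {'symmetry': 1.0, 'counting': 0.0}
```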

## Citation

If you use SPHINX, please cite:

```bibtex
@inproceedings{alam2026sphinx,
  author    = {Md Tanvirul Alam and Saksham Aggarwal and Justin Yang Chae and Nidhi Rastogi},
  title     = {SPHINX: A Synthetic Environment for Visual Perception and Reasoning},
  booktitle = {2026 IEEE/CVF Conference on Computer Vision and Pattern Recognition - FINDINGS Track (CVPRF)},
  year      = {2026}
}
```