---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- OX-PIXL/STVQA-7K
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
# SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards

## Model Description
Multimodal large language models (MLLMs) have achieved remarkable progress in vision–language tasks, but they continue to struggle with spatial understanding. Existing spatial MLLMs often rely on explicit 3D inputs or architecture-specific modifications, and remain limited by the need for large-scale training data or by sparse supervision. To address these limitations, we introduce SpatialThinker, a 3D-aware MLLM trained with reinforcement learning (RL) to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by constructing a scene graph of task-relevant objects and spatial relations, and reasons towards an answer guided by dense spatial rewards.
SpatialThinker consists of two key contributions:
- A data synthesis pipeline that generates STVQA-7K, a high-quality spatial VQA dataset.
- Online RL with a multi-objective dense spatial reward enforcing spatial grounding (illustrated in the sketch below).
SpatialThinker-7B outperforms supervised fine-tuning and the sparse RL baseline on spatial understanding and real-world VQA benchmarks, nearly doubling the base-model gain compared to sparse RL, and surpassing GPT-4o. These results showcase the effectiveness of combining spatial supervision with reward-aligned reasoning in enabling robust 3D spatial understanding with limited data and advancing MLLMs towards human-level visual reasoning.
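The exact reward implementation lives in the training code rather than this card, but a minimal sketch can illustrate how a multi-objective dense spatial reward might combine format, answer-accuracy, and spatial-grounding terms. The function names, weights, and IoU-based grounding term below are illustrative assumptions, not the reward used in training.

```python
# Illustrative sketch of a multi-objective dense spatial reward.
# The term names, weights, and IoU-based grounding check are assumptions
# for exposition only, not the exact reward used to train SpatialThinker.
def box_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def dense_spatial_reward(pred_answer, gold_answer, pred_boxes, gold_boxes,
                         format_ok, w_format=0.2, w_answer=0.5, w_spatial=0.3):
    """Combine format, answer-accuracy, and spatial-grounding terms into one scalar."""
    r_format = 1.0 if format_ok else 0.0                   # response follows the expected template
    r_answer = 1.0 if pred_answer == gold_answer else 0.0  # exact-match answer accuracy
    # Spatial grounding: average best IoU of each predicted box against the reference boxes.
    if gold_boxes and pred_boxes:
        r_spatial = sum(max(box_iou(p, g) for g in gold_boxes) for p in pred_boxes) / len(pred_boxes)
    else:
        r_spatial = 0.0
    return w_format * r_format + w_answer * r_answer + w_spatial * r_spatial
```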
## Model Details
- Developed by: Hunar Batra, Haoqin Tu, Hardy Chen, Yuanze Lin, Cihang Xie, Ronald Clark
- Model type: 3D-aware Multimodal Large Language Model (MLLM)
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: Qwen/Qwen2.5-VL-7B-Instruct
### Model Sources
- Repository: https://github.com/hunarbatra/SpatialThinker
- Paper: https://huggingface.co/papers/2511.07403
- Project Page: https://hunarbatra.com/SpatialThinker/
## How to Get Started with the Model
This model can be loaded and used directly with the Hugging Face transformers library.
First, ensure you have the necessary dependencies installed:
```bash
pip install "transformers>=4.49.0"
pip install "flash-attn>=2.4.3" "vllm>=0.7.3"  # vllm 0.8.0 recommended
```
Then, you can use the following Python code snippet for inference:
```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image
import requests
from io import BytesIO

# Load model and processor
model_id = "OX-PIXL/SpatialThinker-7B"  # This is the model repository ID
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Use bfloat16 for better performance on compatible GPUs
    device_map="auto",
).eval()  # Set model to evaluation mode

# Example image (replace with your own image path or URL)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
response = requests.get(image_url)
image = Image.open(BytesIO(response.content)).convert("RGB")

# Define a spatial reasoning question
question = "What are the spatial relationships between the car, the road, and the trees?"

# Construct chat messages
messages = [
    {"role": "user", "content": [
        {"type": "image", "image": image},
        {"type": "text", "text": question}
    ]}
]

# Apply chat template and process inputs
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

# Generate response
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)  # Use suitable generation parameters

# Decode only the newly generated tokens (skip the echoed prompt)
generated_ids = output_ids[:, inputs.input_ids.shape[1]:]
response_text = processor.decode(generated_ids[0], skip_special_tokens=True)

print(f"Question: {question}\n")
print(f"Answer: {response_text}")
```
## Updates
- [2025/11/11] 🔥 Code base released.
- [2025/11/08] 🔥 Model Checkpoints and Dataset released.
## Requirements
- Python 3.9+
- transformers >= 4.49.0
- flash-attn >= 2.4.3
- vllm >= 0.7.3 (0.8.0 recommended)
## Installation

From the root of the cloned repository:

```bash
pip install -e .
```
## Training Details

### Training Procedure
SpatialThinker models are trained on STVQA-7K with dense spatial rewards and GRPO. Baseline models (vanilla GRPO with a sparse reward) are also trained on STVQA-7K for comparison.
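For context, GRPO computes advantages by normalizing each sampled response's reward within its group of rollouts for the same prompt, rather than using a learned value function. The snippet below is a minimal sketch of that group-relative normalization, not the project's training code.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each reward within its group of sampled responses."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: rewards for G = 4 responses sampled for the same (image, question) pair,
# e.g. produced by a dense spatial reward like the sketch above.
print(group_relative_advantages([0.9, 0.4, 0.4, 0.1]))
```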
#### Train SpatialThinker Models with STVQA-7K, Dense Spatial Rewards + GRPO

```bash
bash scripts/spatialthinker_3b_grpo.sh
bash scripts/spatialthinker_7b_grpo.sh
```
#### Train Baseline Models (Vanilla GRPO) with STVQA-7K

```bash
bash scripts/qwen_2_5_3b_stvqa_vanilla_grpo.sh
bash scripts/qwen_2_5_7b_stvqa_vanilla_grpo.sh
```
#### Merge Checkpoints to Hugging Face Format

```bash
python3 scripts/model_merger.py --local_dir path_to_your_last_actor_checkpoint
```
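After merging, the output directory should behave like a standard Hugging Face checkpoint. The snippet below is a minimal sketch assuming the merged weights are in Qwen2.5-VL format; the local path and Hub repository name are placeholders.

```python
# Sketch: load a merged checkpoint from a local directory (paths/names are placeholders).
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

merged_dir = "path_to_your_merged_checkpoint"  # directory produced by model_merger.py
processor = AutoProcessor.from_pretrained(merged_dir)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    merged_dir, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

# Optionally push the merged model to the Hugging Face Hub under your own namespace.
# model.push_to_hub("your-username/SpatialThinker-7B-finetune")
# processor.push_to_hub("your-username/SpatialThinker-7B-finetune")
```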
## Evaluation

To evaluate SpatialThinker or baseline models across spatial reasoning benchmarks, use the provided `evaluation/eval.py` script.
### Basic Command Structure

```bash
python3 evaluation/eval.py \
    --dataset <dataset_name> \
    --template <prompt_template> \         # e.g. `reasoning`, `no_reasoning`, `spatial_thinker`
    --model_path <model_or_checkpoint> \
    --cuda <gpu_id> \
    --batch_size <num_samples_per_step> \
    [--provider <inference_backend>] \
    [--processor_name <tokenizer_or_processor>] \
    [--custom_filename <output_name>]
```
### Example: Evaluate Across Multiple Benchmarks

```bash
python3 evaluation/eval.py \
    --dataset blink-spatial \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 4

python3 evaluation/eval.py \
    --dataset spatialbench \
    --template spatial_thinker \
    --model_path OX-PIXL/SpatialThinker-3B \
    --cuda 0 \
    --batch_size 2
```
### Example: Evaluate Using an API Provider (OpenAI / Anthropic)

```bash
python3 evaluation/eval.py \
    --dataset stvqa \
    --template reasoning \
    --model_path gpt-4o-2024-05-13 \
    --provider openai \
    --batch_size 1

python3 evaluation/eval.py \
    --dataset stvqa \
    --template reasoning \
    --model_path claude-3-5-sonnet \
    --provider anthropic \
    --batch_size 1
```
### Supported Evaluation Datasets

cv-bench, cv-bench-2D, cv-bench-3D, blink-spatial, blink-depth, blink-object, blink-counting, blink-multi-view, blink-jigsaw, realworld_qa, spatialbench, mmvp, 3dsrbench, lego, spatialreasoner, robospatial, robospatial_rgb, stvqa, hallusionbench
## Citation
If you find this repository useful in your project, please consider giving a ⭐ and citing:
```bibtex
@misc{batra2025spatialthinkerreinforcing3dreasoning,
      title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
      author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
      year={2025},
      eprint={2511.07403},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.07403},
}
```
## Acknowledgements
This project builds upon the following open-source frameworks and works:
- EasyR1 — An efficient, scalable, multi-modality RL training framework based on veRL
- LLaMA-Factory — Unified efficient fine-tuning of 100+ LLMs & VLMs
- Qwen2.5-VL — Multimodal LLM series from the Qwen family