MultiHopSpatial-Qwen3-VL-4B-Instruct
This model is Qwen3-VL-4B-Instruct post-trained on MultihopSpatial-Train using GRPO (Group Relative Policy Optimization) for multi-hop spatial reasoning.
Project Page | Paper | Dataset
| Property | Value |
|---|---|
| Base Model | Qwen3-VL-4B-Instruct |
| Architecture | Qwen3VLForConditionalGeneration |
| Training Method | GRPO (Group Relative Policy Optimization) |
| Training Data | MultihopSpatial-Train (6,791 samples) |
| Precision | bfloat16 |
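As a rough intuition for the training method: GRPO scores each sampled completion against the other completions in its group, replacing a learned value baseline with group statistics. The sketch below is a minimal, hedged illustration of that group-relative advantage computation; it is not the actual training code used for this model.

```python
def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each completion's reward against its sampling group.

    GRPO-style advantage: (reward - group mean) / group std.
    `eps` (an illustrative choice here) guards against zero variance.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

Completions that beat their group's average get positive advantages and are reinforced; below-average ones are pushed down.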
This model shares the same architecture and usage as Qwen3-VL-4B-Instruct. Please refer to the official Qwen3-VL documentation for detailed usage instructions.
```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the post-trained model and its processor
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "etri-vilab/MultiHopSpatial-Qwen3-VL-4B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("etri-vilab/MultiHopSpatial-Qwen3-VL-4B-Instruct")

# Build a multi-hop spatial reasoning query over an image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "your_image.jpg"},
            {"type": "text", "text": "From the perspective of the person wearing a red shirt, which object is on their left? (a) chair (b) table (c) lamp (d) bookshelf. And provide the bounding box coordinate of the region related to your answer."},
        ],
    }
]

# Prepare model inputs from the chat template and the referenced media
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=2048)
output_text = processor.batch_decode(
    generated_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(output_text[0])
```
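Since the prompt also asks for a bounding box, you may want to pull coordinates out of the decoded reply programmatically. The exact output format depends on the model's response style; the helper below is a hedged sketch that assumes boxes appear as bracketed `[x1, y1, x2, y2]` integer lists, which is a common convention for Qwen-family grounding outputs.

```python
import re

def extract_bboxes(text):
    """Extract (x1, y1, x2, y2) integer tuples from a model reply.

    Assumes boxes are written as bracketed lists like [120, 45, 300, 410];
    adjust the pattern if the model emits a different format.
    """
    pattern = r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]"
    return [tuple(int(v) for v in m) for m in re.findall(pattern, text)]
```

For example, `extract_bboxes("(a) chair [120, 45, 300, 410]")` returns `[(120, 45, 300, 410)]`.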
```bibtex
@article{lee2025multihopspatial,
  title={MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Models},
  author={Lee, Youngwan and Jang, Soojin and Cho, Yoorhim and Lee, Seunghwan and Lee, Yong-Ju and Hwang, Sung Ju},
  journal={arXiv preprint arXiv:2603.18892},
  year={2025}
}
```