MultiHopSpatial-Qwen3-VL-4B-Instruct

This model is Qwen3-VL-4B-Instruct post-trained on MultihopSpatial-Train using GRPO (Group Relative Policy Optimization) for multi-hop spatial reasoning.

Project Page | Paper | Dataset

Model Details

Base Model: Qwen3-VL-4B-Instruct
Architecture: Qwen3VLForConditionalGeneration
Training Method: GRPO (Group Relative Policy Optimization)
Training Data: MultihopSpatial-Train (6,791 samples)
Precision: bfloat16
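GRPO needs no learned value function: for each prompt, a group of responses is sampled and each response's advantage is computed relative to the group's mean reward. A minimal sketch of the group-relative advantage computation (the function name is illustrative, not from the actual training code):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each rollout's reward by the
    mean and standard deviation of its sampled group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mu) / sigma for r in rewards]
```

Rollouts above the group mean get positive advantages and are reinforced; those below get negative advantages, all without a separate critic network.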

Usage

This model has the same architecture and inference API as Qwen3-VL-4B-Instruct. Please refer to the official Qwen3-VL documentation for detailed usage instructions.

Quick Start

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "etri-vilab/MultiHopSpatial-Qwen3-VL-4B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("etri-vilab/MultiHopSpatial-Qwen3-VL-4B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "your_image.jpg"},
            {"type": "text", "text": "From the perspective of the person wearing a red shirt, which object is on their left? (a) chair (b) table (c) lamp (d) bookshelf. And provide the bounding box coordinate of the region related to your answer."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=2048)
output_text = processor.batch_decode(
    generated_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(output_text[0])
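Since the prompt asks for bounding box coordinates, you may want to pull them out of the decoded text programmatically. The exact answer format depends on the model's response; the helper below is a hedged sketch that assumes the coordinates appear as a bracketed list like `[x1, y1, x2, y2]` (adjust the pattern to whatever format the model actually emits):

```python
import re

def extract_bbox(text):
    """Extract the first [x1, y1, x2, y2] integer box from a response.

    Returns a 4-tuple of ints, or None if no such pattern is found.
    This parsing format is an assumption, not guaranteed by the model.
    """
    m = re.search(r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]", text)
    return tuple(int(v) for v in m.groups()) if m else None
```

For example, `extract_bbox("(a) chair [120, 45, 300, 410]")` returns `(120, 45, 300, 410)`.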

Citation

@article{lee2025multihopspatial,
  title={MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Models},
  author={Lee, Youngwan and Jang, Soojin and Cho, Yoorhim and Lee, Seunghwan and Lee, Yong-Ju and Hwang, Sung Ju},
  journal={arXiv preprint arXiv:2603.18892},
  year={2025}
}