---
license: apache-2.0
datasets:
- internlm/Spatial-SSRL-81k
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- spatial
- spatial understanding
- self-supervised learning
---

# Spatial-SSRL-3B

📖 <a href="https://arxiv.org/abs/2510.27606">Paper</a> | 🏠 <a href="https://github.com/InternLM/Spatial-SSRL">GitHub</a> | 🤗 <a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> |
🤗 <a href="https://huggingface.co/internlm/Spatial-SSRL-Qwen3VL-4B">Spatial-SSRL-Qwen3VL-4B Model</a> |
🤗 <a href="https://huggingface.co/datasets/internlm/Spatial-SSRL-81k">Spatial-SSRL-81k Dataset</a> | 📰 <a href="https://huggingface.co/papers/2510.27606">Daily Paper</a>

Spatial-SSRL-3B is a large vision-language model targeting spatial understanding, built on Qwen2.5-VL-3B-Instruct. It is optimized with Spatial-SSRL, a lightweight self-supervised reinforcement learning paradigm that scales RLVR efficiently. The model demonstrates strong spatial intelligence while preserving the general visual capabilities of the base model.

## 📢 News
- 🚀 [2025/11/24] We released the [🤗Spatial-SSRL-Qwen3VL-4B Model](https://huggingface.co/internlm/Spatial-SSRL-Qwen3VL-4B), initialized from Qwen3-VL-4B-Instruct.
- 🚀 [2025/11/03] You can now try out Spatial-SSRL-7B on the [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL).
- 🚀 [2025/11/03] We released the [🤗Spatial-SSRL-7B Model](https://huggingface.co/internlm/Spatial-SSRL-7B) and the [🤗Spatial-SSRL-81k Dataset](https://huggingface.co/datasets/internlm/Spatial-SSRL-81k).
- 🚀 [2025/11/02] We released the [🏠Spatial-SSRL Repository](https://github.com/InternLM/Spatial-SSRL).

## 🌈 Overview
We are thrilled to introduce <strong>Spatial-SSRL</strong>, a novel self-supervised RL paradigm for enhancing the spatial understanding of LVLMs.
By optimizing Qwen2.5-VL-7B with Spatial-SSRL, the model exhibits stronger spatial intelligence across seven spatial understanding benchmarks in both image and video settings.
<p style="text-align: center;">
<img src="assets/teaser_1029final.png" alt="Teaser" width="100%">
</p>
Spatial-SSRL is a <strong>lightweight</strong>, tool-free framework that is naturally compatible with the RLVR training paradigm and easy to extend to a multitude of pretext tasks.
Five tasks are currently formulated in the framework, requiring only ordinary RGB and RGB-D images. <strong>We welcome you to extend Spatial-SSRL with effective pretext tasks to further strengthen the capabilities of LVLMs!</strong>

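To illustrate what such a pretext task can look like (a hypothetical sketch; the five released tasks may be formulated differently), a depth-ordering question can be generated and verified purely from the depth channel of an RGB-D image, with no human annotation:

```python
import numpy as np

def depth_order_pretext(depth, pt_a, pt_b):
    """Build a self-supervised depth-ordering QA pair from a depth map.

    Illustrative sketch only, not the released Spatial-SSRL code.
    The label comes straight from the depth values, so it is
    verifiable for free, with no external annotator in the loop.
    """
    ya, xa = pt_a
    yb, xb = pt_b
    question = (f"Which point is closer to the camera? "
                f"A. point at ({xa}, {ya}) B. point at ({xb}, {yb})")
    answer = "A" if depth[ya, xa] < depth[yb, xb] else "B"
    return question, answer

depth = np.array([[1.0, 3.0], [2.0, 4.0]])  # toy depth map in meters
q, a = depth_order_pretext(depth, (0, 0), (1, 1))
print(a)  # → A (depth 1.0 m < 4.0 m)
```

Because the supervisory signal is derived from the sensor data itself, any number of such QA pairs can be mined from ordinary RGB-D frames.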
<p style="text-align: center;">
<img src="assets/pipeline_1029final.png" alt="Pipeline" width="100%">
</p>

## 💡 Highlights
- 🔥 **Highly Scalable:** Spatial-SSRL uses ordinary raw RGB and RGB-D images instead of richly annotated public datasets or manual labels for data curation, making it highly scalable.
- 🔥 **Cost-effective:** The entire pipeline avoids human labels and API calls to general LVLMs, making Spatial-SSRL cost-effective.
- 🔥 **Lightweight:** Prior approaches to spatial understanding rely heavily on annotations from external tools, which introduce errors into the training data and add cost. In contrast, Spatial-SSRL is completely tool-free and can easily be extended to more self-supervised tasks.
- 🔥 **Naturally Verifiable:** Intrinsic supervisory signals determined by the pretext objectives are naturally verifiable, aligning Spatial-SSRL well with the RLVR paradigm.
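To make the last point concrete, a verifiable reward here can be a simple exact-match check of the model's boxed answer against the self-supervised label. The sketch below is hypothetical (`format_and_accuracy_reward` is our illustrative name; the released training code may differ): it rewards a response only when it follows the `<think> </think>` plus `\boxed{}` format and the boxed choice matches the label.

```python
import re

def format_and_accuracy_reward(response: str, ground_truth: str) -> float:
    """Binary verifiable reward for a pretext task (illustrative sketch).

    The response must contain a <think>...</think> block and a \\boxed{}
    answer; the reward is 1.0 only if the boxed choice equals the
    self-supervised label, which is known by construction.
    """
    if not re.search(r"<think>.*</think>", response, flags=re.DOTALL):
        return 0.0  # format violation
    match = re.search(r"\\boxed\{([^{}]*)\}", response)
    if match is None:
        return 0.0  # no final answer to verify
    return 1.0 if match.group(1).strip() == ground_truth else 0.0

print(format_and_accuracy_reward("<think>closer</think> \\boxed{A}", "A"))  # 1.0
```

Because the label is intrinsic to the data, this check needs no human judge or external verifier model.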
<p style="text-align: center;">
<img src="assets/comparison_v2.png" alt="Comparison" width="100%">
</p>

## 📊 Results
We train Qwen2.5-VL-3B and Qwen2.5-VL-7B with our Spatial-SSRL paradigm; the experimental results across seven spatial understanding benchmarks are shown below.
<p style="text-align: center;">
<img src="assets/exp_result.png" alt="Results" width="100%">
</p>

## 🛠️ Usage
To experience <strong>Spatial-SSRL-7B</strong> directly, you can try it out on the [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL)!

Here is a code snippet for a simple trial of <strong>Spatial-SSRL-7B</strong> on your own device. You can download the model from the 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> page before your trial.

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "internlm/Spatial-SSRL-7B"  # Change to your own local path if already deployed
img_path = "examples/eg1.jpg"
question = "Consider the real-world 3D locations of the objects. Which object has a higher location? A. yellow bear kite B. building"
# We recommend appending the format prompt so inference is consistent with training
format_prompt = "\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": img_path,
            },
            {"type": "text", "text": question + format_prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=4096, do_sample=False)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Model Response:", output_text)
```

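Since the response follows the `<think> </think>` plus `\boxed{}` format, you may want to pull out just the final choice. A minimal helper sketch (`extract_boxed_answer` is our illustrative name, not part of the released code):

```python
import re

def extract_boxed_answer(response):
    """Return the content of the last \\boxed{...} in a response, or None.

    Illustrative helper; the released evaluation code may parse differently.
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1].strip() if matches else None

demo = "<think>The kite flies above the rooftops.</think> \\boxed{A}"
print(extract_boxed_answer(demo))  # → A
```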
## Cases
<p style="text-align: center;">
<img src="assets/case1.jpg" alt="Case 1" width="100%">
</p>
<p style="text-align: center;">
<img src="assets/case2.jpg" alt="Case 2" width="100%">
</p>

## ✒️ Citation
If you find our model useful, please kindly cite:
```bibtex
@article{liu2025spatial,
  title={Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning},
  author={Liu, Yuhong and Zhang, Beichen and Zang, Yuhang and Cao, Yuhang and Xing, Long and Dong, Xiaoyi and Duan, Haodong and Lin, Dahua and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2510.27606},
  year={2025}
}
```

## 📄 License
![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg) ![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)

**Usage and License Notices**: The data and code are intended and licensed for research use only.