yuhangzang committed
Commit 1cb39a3 · verified · 1 Parent(s): 1b174f3

Update README.md

Files changed (1): README.md (+5 −3)
README.md CHANGED
@@ -11,19 +11,20 @@ tags:
 - multimodal
 - spatial
 - sptial understanding
-- seld-supervised learning
+- self-supervised learning
 library_name: transformers
 ---
 
 # Spatial-SSRL-7B
 
 📖<a href="https://arxiv.org/abs/2510.27606">Paper</a> | 🏠<a href="https://github.com/InternLM/Spatial-SSRL">GitHub</a> | 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> |
-🤗<a href="https://huggingface.co/datasets/internlm/Spatial-SSRL-81k">Spatial-SSRL-81k Dataset</a>
+🤗<a href="https://huggingface.co/datasets/internlm/Spatial-SSRL-81k">Spatial-SSRL-81k Dataset</a> | 📰<a href="https://huggingface.co/papers/2510.27606">Daily Paper</a>
 
 Spatial-SSRL-7B is a large vision-language model targeting spatial understanding, built on Qwen2.5-VL-7B. It is optimized with Spatial-SSRL, a lightweight self-supervised reinforcement learning
 paradigm that scales RLVR efficiently. The model demonstrates strong spatial intelligence while preserving the general visual capabilities of the base model.
 
 ## 📢 News
+- 🚀 [2025/11/03] Now you can try out Spatial-SSRL-7B on the [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL).
 - 🚀 [2025/11/03] We have released the [🤗Spatial-SSRL-7B Model](https://huggingface.co/internlm/Spatial-SSRL-7B) and the [🤗Spatial-SSRL-81k Dataset](https://huggingface.co/datasets/internlm/Spatial-SSRL-81k).
 - 🚀 [2025/11/02] We have released the [🏠Spatial-SSRL Repository](https://github.com/InternLM/Spatial-SSRL).
 
@@ -57,7 +58,8 @@ We train Qwen2.5-VL-3B and Qwen2.5-VL-7B with our Spatial-SSRL paradigm and the
 </p>
 
 ## 🛠️ Usage
-<!--To directly experience <strong>Spatial-SSRL-7B</strong>, you can try it out on huggingface (link)! -->
+To directly experience <strong>Spatial-SSRL-7B</strong>, you can try it out on the [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL)!
+
 Here we provide a code snippet for you to start a simple trial of <strong>Spatial-SSRL-7B</strong> on your own device. You can download the model from 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> before your trial!
 </p>
 
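The hunk above ends before the Usage code snippet itself, so for reference here is a minimal sketch of what such a trial can look like. It assumes the standard Qwen2.5-VL inference path that Spatial-SSRL-7B inherits from its base model (`Qwen2_5_VLForConditionalGeneration` in transformers plus the `qwen-vl-utils` helper); the image path, question, and generation settings are placeholders, and the repository's official snippet may differ.

```python
# Minimal sketch, not the repository's official snippet: load Spatial-SSRL-7B
# through the standard Qwen2.5-VL path and ask a spatial question about an image.
import torch
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "internlm/Spatial-SSRL-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question: swap in your own local file or URL.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "path/to/your_image.jpg"},
        {"type": "text", "text": "Which object is closer to the camera, the chair or the table?"},
    ],
}]

# Render the chat template, extract the vision inputs, and batch everything.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

The trimming step mirrors the usual Qwen2.5-VL examples: `generate` returns the prompt plus the completion, so slicing off the prompt tokens leaves only the model's answer.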