yuhangzang committed
Commit 61944c2 · verified · 1 Parent(s): 41c24a4

Update README.md

Files changed (1): README.md (+2 −3)

README.md CHANGED
@@ -62,16 +62,15 @@ We train Qwen2.5-VL-3B and Qwen2.5-VL-7B with our Spatial-SSRL paradigm and the
 </p>
 
 ## 🛠️ Usage
-To directly experience <strong>Spatial-SSRL-7B</strong>, you can try it out on [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL)!
 
-Here we provide a code snippet for you to start a simple trial of <strong>Spatial-SSRL-7B</strong> on your own device. You can download the model from 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> before your trial!
+Here we provide a code snippet for you to start a simple trial of <strong>Spatial-SSRL-3B</strong> on your own device. You can download the model from 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-3B">Spatial-SSRL-3B Model</a> before your trial!
 </p>
 
 ```python
 from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
 from qwen_vl_utils import process_vision_info
 
-model_path = "internlm/Spatial-SSRL-7B" #You can change it to your own local path if deployed already
+model_path = "internlm/Spatial-SSRL-3B" #You can change it to your own local path if deployed already
 img_path = "examples/eg1.jpg"
 question = "Consider the real-world 3D locations of the objects. Which object has a higher location? A. yellow bear kite B. building"
 #We recommend using the format prompt to make the inference consistent with training
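The diff hunk ends before the snippet does, so for context here is a minimal sketch of how such a trial typically continues, following the standard Qwen2.5-VL inference pattern from the Transformers documentation. The `build_messages` and `run_trial` helper names, the `bfloat16` dtype, and the `max_new_tokens` setting are assumptions for illustration, not taken from this README.

```python
def build_messages(img_path: str, question: str) -> list:
    """Build the chat-format message list that Qwen2.5-VL processors expect:
    one user turn carrying the image followed by the text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": img_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_trial(model_path: str = "internlm/Spatial-SSRL-3B") -> str:
    """Run one inference pass. Heavy imports are kept local so build_messages
    stays importable without transformers / qwen_vl_utils installed."""
    import torch
    from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
    from qwen_vl_utils import process_vision_info

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_path, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_path)

    messages = build_messages(
        "examples/eg1.jpg",
        "Consider the real-world 3D locations of the objects. "
        "Which object has a higher location? A. yellow bear kite B. building",
    )
    # Render the chat template, extract vision inputs, and tokenize together.
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    ).to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Strip the prompt tokens before decoding so only the answer remains.
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]
```

Calling `run_trial()` downloads the checkpoint from the Hub on first use; pass a local path instead if the model is already deployed.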