Tags: Image-to-Text · Transformers · Safetensors · qwen2_5_vl · custom_code · text-generation-inference
Commit 8a6f122 · verified · committed by array · 1 parent: 9355d3f

Update README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED
````diff
@@ -10,7 +10,7 @@ base_model:
 ---
 
 - **Repository:** [https://github.com/arijitray1993/mull-tokens]
-- **Paper [optional]:** [https://arxiv.org/abs/2512.10941]
+- **Paper:** [https://arxiv.org/abs/2512.10941]
 
 
 ## How to Get Started with the Model
@@ -31,8 +31,13 @@ We use a custom Qwen2.5 VL model. There is no change to the architecture, just s
 ```
 % pip install qwen-vl-utils[decord]==0.0.8
 
+import importlib
 from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
 
+Qwen2_5_VLForConditionalGeneration = importlib.import_module(
+'models.mmlatentdiscrete_qwen_vl'
+).Qwen2_5_VLForConditionalGeneration
+
 model = Qwen2_5_VLForConditionalGeneration.from_pretrained("array/Qwen2.5-VL-MullGRPO")
 processor = AutoProcessor.from_pretrained(
 "array/Qwen2.5-VL-MullGRPO",
````