---
base_model:
- Qwen/Qwen3-VL-2B-Instruct
datasets:
- inclusionAI/ZoomBench
- inclusionAI/ZwZ-RL-VQA
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# ZwZ-2B

<div align="center">

📃 [Paper](https://arxiv.org/pdf/2602.11858) | 🏠 [Project](https://github.com/inclusionAI/Zooming-without-Zooming) | 🤗 [Collection](https://huggingface.co/collections/inclusionAI/zooming-without-zooming)

</div>

## Model Summary

**ZwZ-2B** is a fine-grained multimodal perception model built on [Qwen3-VL-2B](https://huggingface.co/Qwen/Qwen3-VL-2B). It is trained with **Region-to-Image Distillation (R2I)** combined with reinforcement learning, enabling fine-grained visual understanding in a single forward pass, with no inference-time zooming or tool calling required. ZwZ-2B achieves state-of-the-art results on fine-grained perception benchmarks among open-source models of comparable size.

| Model | MMStar | HRBench-4K | HRBench-8K | V* | CVBench-2D | CVBench-3D | CountQA | ColorBench | BabyVision | MME-RealWorld-EN | MME-RealWorld-CN | ZoomBench |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| **[Qwen3-VL-2B](https://huggingface.co/Qwen/Qwen3-VL-2B)** | 60.40 | 71.75 | 70.12 | 72.77 | 75.45 | 82.42 | 22.19 | 76.86 | 12.11 | 59.52 | 60.77 | 41.30 |
| **ZwZ-2B** | 63.40 | 77.00 | 75.38 | 82.72 | 80.88 | 85.83 | 21.60 | 79.37 | 17.78 | 65.61 | 65.39 | 53.49 |

<div align="center">
<img src="gp_avg_comparison.png" width="90%" alt="avg_comparison"/>
</div>

## Key Features

- **⚡ Single-Pass Efficiency**: Achieves fine-grained perception in one forward pass, eliminating inference-time tool-calling overhead
- **🎯 Superior Accuracy**: State-of-the-art on fine-grained perception benchmarks among open-source models of comparable size
- **📈 Broad Improvements**: Improves not only perception benchmarks but also out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection tasks

## How It Works

Traditional "Thinking-with-Images" methods zoom into regions of interest during inference, incurring high latency from repeated tool calls and visual re-encoding. **ZwZ** transforms zooming from an inference-time tool into a training-time primitive:

1. **Zoom in** to micro-cropped regions and let strong teacher models (Qwen3-VL-235B, GLM-4.5V) generate high-quality VQA data
2. **Distill** this region-grounded supervision back to the full image with explicit bounding-box overlays
3. **Reinforce** via RL training to enable single-glance fine-grained perception without tool use
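The distillation step above can be sketched in plain Python. The sample schema, helper name, and the `[0, 1000)` box-normalization convention below are illustrative assumptions, not the released training code: the point is only that a QA pair generated by the teacher on a zoomed-in crop is re-anchored to the full image with an explicit bounding box, so the student learns to answer from a single full-image glance.

```python
# Hypothetical sketch of Region-to-Image (R2I) data construction.
# A teacher answers a question about a zoomed-in crop; we re-anchor that
# QA pair to the full image by attaching the crop's bounding box.

def to_r2i_sample(full_w, full_h, crop_box, question, teacher_answer):
    """Re-anchor a crop-level QA pair to the full image.

    crop_box: (x0, y0, x1, y1) in full-image pixel coordinates.
    The box is normalized to [0, 1000), a common VLM grounding convention
    (an assumption here, not necessarily what ZwZ uses).
    """
    x0, y0, x1, y1 = crop_box
    norm = [
        round(1000 * x0 / full_w),
        round(1000 * y0 / full_h),
        round(1000 * x1 / full_w),
        round(1000 * y1 / full_h),
    ]
    # The question now points at the region explicitly, while the paired
    # image at training time is the *full* image, not the crop.
    grounded_q = f"Look at the region <box>{norm}</box> of the image. {question}"
    return {
        "messages": [
            {"role": "user", "content": grounded_q},
            {"role": "assistant", "content": teacher_answer},
        ],
        "bbox_norm": norm,
    }

sample = to_r2i_sample(
    1920, 1080, (960, 540, 1152, 702),
    "What is written on the sign?", "STOP",
)
print(sample["bbox_norm"])  # [500, 500, 600, 650]
```

The RL stage (step 3) then optimizes the student on such samples with answer-correctness rewards, so the box scaffolding can eventually be dropped at inference time.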

## Quickstart

### Installation

```bash
pip install transformers accelerate torch
```

### Inference

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

# Load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "inclusionAI/ZwZ-2B", dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-2B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Prepare inputs for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

## Training Data

ZwZ-2B is trained on [inclusionAI/ZwZ-RL-VQA](https://huggingface.co/datasets/inclusionAI/ZwZ-RL-VQA), a 74K-sample Region-to-Image distilled VQA dataset synthesized from diverse image pools (SA-1B, LAION, MetaCLIP, Visual Genome, CC12M, STPLS3D).

## Citation

```bibtex
@article{wei2026zooming,
  title={Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception},
  author={Wei, Lai and He, Liangbo and Lan, Jun and Dong, Lingzhong and Cai, Yutong and Li, Siyuan and Zhu, Huijia and Wang, Weiqiang and Kong, Linghe and Wang, Yue and Zhang, Zhuosheng and Huang, Weiran},
  journal={arXiv preprint arXiv:2602.11858},
  year={2026}
}
```

## License

This model follows the license of [Qwen3-VL-2B](https://huggingface.co/Qwen/Qwen3-VL-2B); refer to the base model's license for usage terms.