---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- inclusionAI/ZoomBench
- inclusionAI/ZwZ-RL-VQA
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
# ZwZ-7B
<div align="center">
📃 [Paper](https://huggingface.co/papers/2602.11858) | 🏠 [Project](https://github.com/inclusionAI/Zooming-without-Zooming) | 🤗 [Collection](https://huggingface.co/collections/inclusionAI/zooming-without-zooming)
</div>
## Model Summary
**ZwZ-7B** is a fine-grained multimodal perception model built upon [Qwen2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It is trained using **Region-to-Image Distillation (R2I)** combined with reinforcement learning, enabling superior fine-grained visual understanding in a single forward pass — no inference-time zooming or tool calling required. ZwZ-7B achieves leading performance on fine-grained perception benchmarks among open-source Qwen2.5-VL-7B-based models.
<div align="center">
<img src="gp_avg_comparison.png" width="90%" alt="avg_comparison"/>
</div>
## How It Works
Traditional "Thinking-with-Images" methods zoom into regions of interest during inference, incurring high latency from repeated tool calls and visual re-encoding. **ZwZ** transforms zooming from an inference-time tool into a training-time primitive:
1. **Zoom in** to micro-cropped regions and let strong teacher models (Qwen3-VL-235B, GLM-4.5V) generate high-quality VQA data
2. **Distill** this region-grounded supervision back to the full image with explicit bounding-box overlays
3. **Reinforce** via RL training to enable single-glance fine-grained perception without tool use
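
Step 2 hinges on simple coordinate bookkeeping: a box the teacher localizes inside a zoomed-in crop must be translated back into full-image coordinates before it can be overlaid on the original image. The helper below is an illustrative sketch of that remapping, not the released training code; the function name and tuple layout are our own.

```python
# Illustrative sketch of the Region-to-Image bookkeeping: a teacher model
# answers questions about a zoomed-in crop, and the crop-local bounding box
# is mapped back to full-image coordinates so the student can be supervised
# on the full image with an explicit box overlay.
def region_to_image_bbox(crop_origin, bbox_in_crop):
    """Translate an (x1, y1, x2, y2) box from crop-local to full-image coords."""
    ox, oy = crop_origin
    x1, y1, x2, y2 = bbox_in_crop
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

# e.g. a crop taken at offset (400, 300) in the full image; the teacher
# localizes an object at (10, 20, 90, 80) inside that crop:
print(region_to_image_bbox((400, 300), (10, 20, 90, 80)))  # (410, 320, 490, 380)
```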
## Quickstart
### Installation
```bash
pip install transformers accelerate torch qwen-vl-utils
```
### Inference
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "inclusionAI/ZwZ-7B", torch_dtype="auto", device_map="auto"
)

# Default processor
processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-7B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Prepare inputs for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Training Data
ZwZ-7B is trained on [inclusionAI/ZwZ-RL-VQA](https://huggingface.co/datasets/inclusionAI/ZwZ-RL-VQA), a 74K-sample Region-to-Image distilled VQA dataset synthesized from diverse image pools (SA-1B, LAION, MetaCLIP, Visual Genome, CC12M, STPLS3D).
## Evaluation
### ZoomBench
We introduce [ZoomBench](https://huggingface.co/datasets/inclusionAI/ZoomBench), a challenging benchmark with 845 samples across 6 fine-grained dimensions: counting, OCR, color attributes, structural attributes, material attributes, and object identification.
### Benchmark Results
ZwZ-7B achieves state-of-the-art performance among open-source models on fine-grained perception benchmarks. Please refer to the [paper](https://huggingface.co/papers/2602.11858) for detailed results.
## Limitations
- Performance is optimized for fine-grained perception tasks; for general-purpose multimodal understanding, the base model (Qwen2.5-VL-7B) may be more suitable in some cases.
- Very high-resolution images may require appropriate preprocessing for optimal results.
- The model inherits the base model's context length limitations.
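
The high-resolution caveat above can be mitigated by bounding the pixel budget before encoding. The sketch below is a simplified illustration of the smart-resize behavior used by Qwen2.5-VL-style processors (both sides snapped to the 28-pixel patch grid, total pixels kept within a `[min_pixels, max_pixels]` budget); it is not the official implementation, and the default budget values here are assumptions.

```python
import math

# Simplified sketch of Qwen2.5-VL-style image resizing: snap both sides to
# the 28-pixel patch grid while keeping total pixels within a budget.
def smart_resize(height, width, factor=28,
                 min_pixels=256 * 28 * 28, max_pixels=1280 * 28 * 28):
    # Round each side to the nearest multiple of the patch factor.
    h = round(height / factor) * factor
    w = round(width / factor) * factor
    if h * w > max_pixels:
        # Too large: scale down uniformly, then floor to the patch grid.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        # Too small: scale up uniformly, then ceil to the patch grid.
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

# A 12-megapixel photo is shrunk to fit the pixel budget:
print(smart_resize(3000, 4000))
```

In practice the same effect can usually be achieved by passing a `max_pixels` limit to the processor, or by downscaling very large images before inference.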
## Citation
```bibtex
@article{wei2026zooming,
title={Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception},
author={Wei, Lai and He, Liangbo and Lan, Jun and Dong, Lingzhong and Cai, Yutong and Li, Siyuan and Zhu, Huijia and Wang, Weiqiang and Kong, Linghe and Wang, Yue and Zhang, Zhuosheng and Huang, Weiran},
journal={arXiv preprint arXiv:2602.11858},
year={2026}
}
```
## License
This model is released under the Apache License 2.0.