# Q-Zoom-Qwen3VL-4B
Q-Zoom is a query-aware adaptive high-resolution perception framework for Multimodal Large Language Models that operates in an efficient coarse-to-fine manner. Instead of indiscriminately flooding the quadratic self-attention with redundant high-resolution tokens, Q-Zoom adds two lightweight modules on top of a pretrained MLLM:
- A Dynamic Gating Network (TWIG) that safely bypasses high-resolution processing whenever the coarse global features already suffice.
- A Self-Distilled Region Proposal Network (SD-RPN) that, when high-resolution perception is needed, precisely localizes the task-relevant Region-of-Interest (RoI) directly from the MLLM's own intermediate feature space — no extra annotation, no external detector.
This checkpoint is the Stage-3 Q-Zoom finetune of Qwen3-VL-4B-Instruct.
## Configuration
| Backbone | TWIG-K | TWIG threshold | Base model |
|---|---|---|---|
| Qwen3-VL-4B-Instruct | 24 | 3 | Qwen/Qwen3-VL-4B-Instruct |
- TWIG-K (here 24) is the LLM layer index at which the gating head reads hidden states to decide whether the high-res RoI re-decode should fire.
- The TWIG threshold (here 3) is the gate-score percentile used during training; a lower value means more aggressive RoI use at evaluation.
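The gating decision these two settings control can be sketched as follows. This is a minimal illustration, assuming a mean-pooled linear gate head; `twig_gate_fires` and all shapes are hypothetical placeholders, not the repo's real API:

```python
import numpy as np

def twig_gate_fires(hidden_states: np.ndarray, w_gate: np.ndarray,
                    threshold: float) -> bool:
    """Hypothetical TWIG gate: pool the layer-K hidden states, score them
    with a linear gate head, and fire the high-res RoI re-decode only when
    the score clears the calibrated threshold."""
    pooled = hidden_states.mean(axis=0)  # (d,) mean-pool over the sequence
    score = float(pooled @ w_gate)       # scalar gate score
    return score > threshold             # True -> run the RoI re-decode

rng = np.random.default_rng(0)
h_k = rng.standard_normal((128, 16))     # toy layer-K hidden states (seq, d)
w = rng.standard_normal(16)
fire = twig_gate_fires(h_k, w, threshold=0.0)
```

In the released checkpoint, K = 24 selects which layer's hidden states feed the gate, and the threshold is calibrated from the training-time score distribution rather than fixed at 0.0 as in this toy.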
## Highlights
- Q-Zoom speeds up inference at matched accuracy on both Doc/OCR and high-resolution vision benchmarks; configured for maximum perceptual fidelity, it surpasses the parent backbone's peak accuracy. See the project page for the full per-backbone Pareto curves and the paper for the headline numbers (e.g. 2.52× Doc/OCR and 4.39× HR speedups, and +1.1% / +8.1% over peak accuracy on the Qwen2.5-VL-7B backbone).
- The same recipe transfers across Qwen2.5-VL (3B / 7B), Qwen3-VL, LLaVA-1.5 (7B / 13B) and emerging RL-based thinking-with-image models.
All evaluation results in the paper are reported under a per-image budget of at most 576 (Doc/OCR) or 4,096 (HR/vision) visual tokens.
## Quick start

### 1. Install the matching environment

Q-Zoom touches model-private internals of the backbones, so the required transformers version differs per family:
| Backbone family | transformers pin | Conda env |
|---|---|---|
| Qwen2.5-VL | transformers==4.51.3 | qzoom-q25 |
| Qwen3-VL | transformers==4.57.1 | qzoom-q3 |
The repo's install.sh handles both pins automatically:
```bash
git clone https://github.com/YuHengsss/Q-Zoom.git
cd Q-Zoom
bash install.sh qwen3vl
conda activate qzoom-q3
```
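After activation you can compare the installed transformers version (e.g. the output of `python -c "import transformers; print(transformers.__version__)"`) against the pin in the table above. A toy helper for that comparison, assuming an exact `==` pin; `matches_pin` is illustrative, and real environments should rely on install.sh and pip's resolver instead:

```python
def matches_pin(installed: str, pin: str) -> bool:
    """Toy exact-match check for a 'package==version' pin string."""
    _pkg, _, wanted = pin.partition("==")
    return installed == wanted

ok = matches_pin("4.57.1", "transformers==4.57.1")
```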
### 2. Download the checkpoint

```bash
huggingface-cli download YuhengSSS/Q-Zoom-Qwen3VL-4B \
  --local-dir ./checkpoints/Q-Zoom-Qwen3VL-4B \
  --local-dir-use-symlinks False
```
### 3. Run the standard Q-Zoom evaluation suite

```bash
CHECKPOINT_PATH=./checkpoints/Q-Zoom-Qwen3VL-4B \
NUM_GPUS=4 \
bash examples/eval_only/eval_qwen3vl_stage3.sh
```
This runs the standard Q-Zoom benchmark suite (TextVQA, InfoVQA, ChartQA,
OCRBench, DocVQA, V*Bench, MME-RealWorld-Lite, HRBench) with the
gating-aware decoding loop. Set TWO_STAGE_ROI=False to disable Q-Zoom and
fall back to vanilla decoding.
At inference time, Q-Zoom always produces a direct response from the low-resolution pass; the high-res gating head decides per sample whether to also produce a RoI-based response by re-decoding the cropped region predicted by the SD-RPN attention map.
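The coarse-to-fine loop described above can be sketched as plain control flow. `decode_low_res`, `predict_roi`, `crop`, and `decode_roi` are stand-in stubs for illustration, not the repository's real API:

```python
def decode_low_res(image, query):
    # Coarse pass: returns (answer, layer-K hidden states, attention map).
    return "low-res answer", "hidden_k", "attn_map"

def predict_roi(attn_map):
    # SD-RPN: box predicted from the model's own attention map.
    return (0, 0, 64, 64)

def crop(image, box):
    return ("crop", box)

def decode_roi(region, query):
    # High-res re-decode over the cropped region.
    return "RoI answer"

def qzoom_answer(image, query, gate, two_stage_roi=True):
    answer, hidden_k, attn = decode_low_res(image, query)
    if not two_stage_roi or not gate(hidden_k):
        return answer                    # coarse global features suffice
    box = predict_roi(attn)              # localize the task-relevant RoI
    return decode_roi(crop(image, box), query)

result = qzoom_answer("img", "what does the sign say?", gate=lambda h: True)
```

Setting `two_stage_roi=False` here mirrors the `TWO_STAGE_ROI=False` switch above: the gate and RoI branch are skipped entirely and only the low-resolution answer is returned.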
## Training data
This checkpoint was finetuned with the data hosted at YuhengSSS/Q-Zoom-Training:
- Stage-1 SD-RPN pseudo-labels (per-token attention maps)
- Stage-2 judged Post-SFT JSONLs (consistency-aware sample generation)
- Stage-3 ROI re-decode pickles (per-image RoI boxes + answer pairs)
See DATASETS.md in the GitHub repo for the per-stage filenames.
## Citation

```bibtex
@article{qzoom,
  title   = {Q-Zoom: Query-Aware Adaptive Perception for Efficient
             Multimodal Large Language Models},
  author  = {Shi, Yuheng and Pei, Xiaohuan and Wen, Linfeng and
             Dong, Minjing and Xu, Chang},
  journal = {arXiv preprint arXiv:2604.06912},
  year    = {2026}
}
```
You may also be interested in our earlier work that introduced the self-distilled RoI predictor used by Q-Zoom's SD-RPN branch:
```bibtex
@article{shi2025catching,
  title   = {Catching the Details: Self-Distilled RoI Predictors for
             Fine-Grained MLLM Perception},
  author  = {Shi, Yuheng and Pei, Xiaohuan and Dong, Minjing and Xu, Chang},
  journal = {arXiv preprint arXiv:2509.16944},
  year    = {2025}
}
```
## License

Apache 2.0. The checkpoint inherits the license of the base model Qwen/Qwen3-VL-4B-Instruct; please respect both.
## Links
- 📄 Paper: https://arxiv.org/abs/2604.06912
- 🌐 Project page: https://yuhengsss.github.io/Q-Zoom/
- 💻 Code: https://github.com/YuHengsss/Q-Zoom
- 📦 Training data: https://huggingface.co/datasets/YuhengSSS/Q-Zoom-Training
- 🤗 Collection: https://huggingface.co/collections/YuhengSSS/q-zoom