---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen3-VL-8B-Instruct
tags:
  - vision
  - deep-research
  - vdr-bench
  - mllm
---

# Vision-DeepResearch-8B (SFT-only)

Vision-DeepResearch-8B is a multimodal large language model (MLLM) optimized for deep-research tasks that involve complex visual and textual search. It is trained with Supervised Fine-Tuning (SFT) on top of Qwen3-VL-8B-Instruct.

The model was developed to address limitations in existing VQA benchmarks by focusing on realistic visual retrieval and fact-finding scenarios.
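A minimal inference sketch is below. The repo id, the use of transformers' generic `AutoModelForImageTextToText`/`AutoProcessor` classes, and the chat-message layout are assumptions based on common Qwen-VL usage, not confirmed by this card; consult the upstream Qwen3-VL documentation for the exact API.

```python
def build_messages(image_url: str, question: str) -> list[dict]:
    """Build a chat-template payload in the multi-part format Qwen-VL models expect."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_inference(model_id: str, image_url: str, question: str) -> str:
    """Load the model and generate an answer (requires the weights and a GPU).

    Class names are an assumption -- check the base model's docs if loading fails.
    """
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Tokenize the chat messages with the model's own template.
    inputs = processor.apply_chat_template(
        build_messages(image_url, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```

`run_inference("<repo-id>", "<image-url>", "Who designed this building?")` would then return the model's answer as a string.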

## Project Resources

## Performance

Vision-DeepResearch-8B demonstrates significant improvements over agentic and RAG-based workflows across several benchmarks:

| Model | VDR | MMSearch | LiveVQA | Avg. |
|---|---|---|---|---|
| Qwen3-VL-8B-Instruct (Agentic) | 17.0 | 52.0 | 63.0 | 40.1 |
| Vision-DeepResearch-8B (Ours) | 29.2 | 69.6 | 76.7 | 50.5 |

## Citation

If you find this model or benchmark useful for your research, please cite:

```bibtex
@article{zeng2026visiondeepresearch,
  title={Vision-DeepResearch: Incentivizing DeepResearch Capability in Multimodal Large Language Models},
  author={Yu Zeng and Wenxuan Huang and Zhen Fang and Shuang Chen and Yufan Shen and Yishuo Cai and Xiaoman Wang and Zhenfei Yin and Lin Chen and Zehui Chen and Shiting Huang and Yiming Zhao and Yao Hu and Philip Torr and Wanli Ouyang and Shaosheng Cao},
  journal={arXiv preprint arXiv:2601.22060},
  year={2026}
}

@article{zeng2026vdrbench,
  title={Vision-DeepResearch Benchmark: Rethinking Visual and Textual Search for Multimodal Large Language Models},
  author={Yu Zeng and Wenxuan Huang and Zhen Fang and Shuang Chen and Yufan Shen and Yishuo Cai and Xiaoman Wang and Zhenfei Yin and Lin Chen and Zehui Chen and Shiting Huang and Yiming Zhao and Yao Hu and Philip Torr and Wanli Ouyang and Shaosheng Cao},
  journal={arXiv preprint arXiv:2602.02185},
  year={2026}
}
```