Add metadata, paper link, and dataset description

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +38 -3
README.md CHANGED
@@ -1,3 +1,38 @@
- ---
- license: cc-by-nc-sa-4.0
- ---
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - multimodal
+ - vqa
+ - deep-research
+ ---
+
+ # VDR-Bench: Vision-DeepResearch Benchmark
+
+ [**Project Page**](https://osilly.github.io/Vision-DeepResearch/) | [**Paper**](https://huggingface.co/papers/2602.02185) | [**GitHub**](https://github.com/Osilly/Vision-DeepResearch)
+
+ The Vision-DeepResearch Benchmark (VDR-Bench) is a comprehensive dataset of **2,000 VQA instances** designed to assess the behavior of Vision-DeepResearch systems under realistic retrieval conditions.
+
+ The benchmark focuses on evaluating the visual and textual search capabilities of Multimodal Large Language Models (MLLMs), specifically addressing limitations in existing benchmarks such as textual cue leakage and overly idealized retrieval scenarios.
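+
+ A minimal loading sketch with the 🤗 `datasets` library; the repo id `Osilly/VDR-Bench`, the `test` split, and the field names are assumptions, so check the dataset card for the actual values:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id and split; the actual values may differ.
+ ds = load_dataset("Osilly/VDR-Bench", split="test")
+
+ # Inspect one instance to discover the real schema before hard-coding
+ # any field names (they are not documented in this card).
+ print(ds[0].keys())
+ ```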
+
+ ## Dataset Summary
+ - **Scale**: 2,000 expert-curated VQA instances.
+ - **Goal**: To evaluate MLLMs on complex visual-textual fact-finding tasks using search engines (a minimal scoring sketch follows this list).
+ - **Curation**: Created via a multi-stage curation pipeline and rigorous expert review to ensure answers require genuine visual search and cannot be inferred from prior knowledge.
+ - **Key Features**: Focuses on "visual search-centric" tasks where information must be retrieved from images rather than textual metadata or cross-textual cues.
+
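+ For scoring model outputs against the gold answers, a simple exact-match baseline might look like the sketch below. The `id` and `answer` field names are assumptions, and exact match is a generic metric, not necessarily the benchmark's official evaluation protocol:
+
+ ```python
+ def exact_match(prediction: str, answer: str) -> bool:
+     """Case- and whitespace-insensitive string comparison."""
+     return prediction.strip().lower() == answer.strip().lower()
+
+ def accuracy(predictions: dict, dataset) -> float:
+     """Fraction of instances whose prediction matches the gold answer.
+
+     Assumes each instance carries hypothetical 'id' and 'answer' fields.
+     """
+     correct = sum(
+         exact_match(predictions.get(ex["id"], ""), ex["answer"])
+         for ex in dataset
+     )
+     return correct / len(dataset)
+ ```
+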
+ ## Citation
+
+ If you find this benchmark useful for your research, please cite the following paper:
+
+ ```bibtex
+ @article{zeng2026vision,
+   title={Vision-DeepResearch Benchmark: Rethinking Visual and Textual Search for Multimodal Large Language Models},
+   author={Zeng, Yu and Huang, Wenxuan and Fang, Zhen and Chen, Shuang and Shen, Yufan and Cai, Yishuo and Wang, Xiaoman and Yin, Zhenfei and Chen, Lin and Chen, Zehui and Huang, Shiting and Zhao, Yiming and Hu, Yao and Torr, Philip and Ouyang, Wanli and Cao, Shaosheng},
+   journal={arXiv preprint arXiv:2602.02185},
+   year={2026}
+ }
+ ```