---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- vision
- question-answering
- multimodal
size_categories:
- n<1K
---

# RealXBench

RealXBench is a comprehensive visual question answering benchmark dataset. The full dataset contains 300 high-quality image-question-answer triplets. Due to internal regulations, only a subset of 194 samples is released in this open-source version.

## Dataset Structure

Each example contains:

- **query**: the question about the image (in English)
- **answer**: the ground-truth answer(s); multiple acceptable answers are separated by "or"
- **perception**: binary flag for the perception capability (1 if required, 0 otherwise)
- **search**: binary flag for the search capability (1 if required, 0 otherwise)
- **reason**: binary flag for the reasoning capability (1 if required, 0 otherwise)
- **image**: the corresponding image file

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("glowol/RealXBench")
```
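Since the card does not specify an official scoring procedure, here is a minimal sketch of how the fields described above might be consumed: splitting the "or"-separated `answer` string into a list of acceptable answers and checking a prediction against it. The sample values and the `is_correct` helper are illustrative assumptions, not part of the dataset or its official evaluation.

```python
# Hypothetical example mirroring the RealXBench schema described above.
# Field names come from the dataset card; the values are made up for illustration.
sample = {
    "query": "What brand is the car in the photo?",
    "answer": "Toyota or Lexus",   # multiple acceptable answers separated by "or"
    "perception": 1,               # perception capability required
    "search": 1,                   # search capability required
    "reason": 0,                   # reasoning capability not required
}

# Split the answer field into the list of acceptable ground-truth answers.
gold_answers = [a.strip() for a in sample["answer"].split(" or ")]

def is_correct(prediction: str, answers: list) -> bool:
    """Case-insensitive exact match against any acceptable answer
    (an assumed metric, not the benchmark's official one)."""
    return prediction.strip().lower() in [a.lower() for a in answers]
```

The same flag fields (`perception`, `search`, `reason`) can be used to filter the benchmark into capability-specific subsets, e.g. keeping only examples where `sample["search"] == 1`.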

## Citation

If you use this dataset, please cite:

```bibtex
@article{deepEyesV2,
  title={DeepEyesV2: Toward Agentic Multimodal Model},
  author={Jack Hong and Chenxiao Zhao and ChengLin Zhu and Weiheng Lu and Guohai Xu and Xing Yu},
  journal={arXiv preprint arXiv:2511.05271},
  year={2025},
  url={https://arxiv.org/abs/2511.05271}
}
```