---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
- zh
size_categories:
- 1K<n<10K
---
## Details
**WorldVQA** is a meticulously curated benchmark designed to evaluate atomic vision-centric world knowledge in Multimodal Large Language Models (MLLMs). The dataset comprises **3,000 VQA pairs** across **8 categories**, with careful attention to linguistic and cultural diversity.
> **Note:** Due to copyright concerns, the "People" category has been removed from this release. The original benchmark contains 3,500 VQA pairs across 9 categories.

## Leaderboard
Our evaluation reveals significant gaps in visual encyclopedic knowledge, with no model surpassing the 50% accuracy threshold.
We present a mini-leaderboard here; please see our paper or project homepage for full results.
### Overall Performance
The leaderboard below shows the overall performance on WorldVQA (first 8 categories, excluding "People" due to systematic refusal behaviors in closed-source models):
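Accuracy on VQA pairs like these is often computed with normalized exact-match scoring, aggregated per category. Below is a minimal sketch of such scoring; the field names (`category`, `answer`, `prediction`) and the example categories are illustrative assumptions, not the dataset's actual schema or official evaluation protocol:

```python
# Sketch of per-category exact-match accuracy over WorldVQA-style records.
# Field names and categories are assumptions for illustration only.
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase and strip whitespace so trivially different strings still match."""
    return text.strip().lower()

def per_category_accuracy(records):
    """Return {category: accuracy} for dicts with category/answer/prediction keys."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        if normalize(r["prediction"]) == normalize(r["answer"]):
            correct[r["category"]] += 1
    return {c: correct[c] / total[c] for c in total}

# Toy records standing in for model outputs on two hypothetical categories.
records = [
    {"category": "Landmarks", "answer": "Eiffel Tower", "prediction": "eiffel tower"},
    {"category": "Landmarks", "answer": "Big Ben", "prediction": "Tower Bridge"},
    {"category": "Food", "answer": "Peking duck", "prediction": "Peking Duck"},
]
print(per_category_accuracy(records))  # {'Landmarks': 0.5, 'Food': 1.0}
```

Real evaluations may use an LLM judge or answer aliases rather than strict string matching; see the paper for the protocol actually used.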

## Citation
If you find WorldVQA useful for your research, please cite our work:
```bibtex
@misc{zhou2026worldvqameasuringatomicworld,
title={WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models},
author={Runjie Zhou and Youbo Shao and Haoyu Lu and Bowei Xing and Tongtong Bai and Yujie Chen and Jie Zhao and Lin Sui and Haotian Yao and Zijia Zhao and Hao Yang and Haoning Wu and Zaida Zhou and Jinguo Zhu and Zhiqi Huang and Yiping Bao and Yangyang Liu and Y. Charles and Xinyu Zhou},
year={2026},
eprint={2602.02537},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.02537},
}
```