---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
- zh
size_categories:
- 1K<n<10K
---

## Details
**WorldVQA** is a meticulously curated benchmark designed to evaluate atomic vision-centric world knowledge in Multimodal Large Language Models (MLLMs). The dataset comprises **3,000 VQA pairs** across **8 categories**, with careful attention to linguistic and cultural diversity.
> **Note:** Due to copyright concerns, the "People" category has been removed from this release. The original benchmark contains 3,500 VQA pairs across 9 categories.
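
For reference, a minimal loading sketch with 🤗 Datasets is shown below; the repo id, split name, and field names are assumptions and may differ from the actual release.

```python
# Minimal loading sketch. The repo id "MoonshotAI/WorldVQA", the "test" split,
# and the field names below are assumptions; adjust them to the actual release.
from datasets import load_dataset

ds = load_dataset("MoonshotAI/WorldVQA", split="test")  # hypothetical repo id / split

# Each example is expected to hold an image, a question, an answer, and a category label.
sample = ds[0]
print(sample.keys())
print(sample.get("category"), sample.get("question"))
```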

## Leaderboard
Our evaluation reveals significant gaps in visual encyclopedic knowledge, with no model surpassing the 50% accuracy threshold.
We show a mini-leaderboard here; please see our paper or homepage for more details.
### Overall Performance
The leaderboard below shows the overall performance on WorldVQA (first 8 categories, excluding "People" due to systematic refusal behaviors in closed-source models):
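
As a rough illustration of how the overall score can be aggregated, the sketch below computes exact-match accuracy per category and overall. The field names and the exact-match criterion are assumptions; the official evaluation may use a different matching rule.

```python
# Hedged scoring sketch: exact-match accuracy per category and overall.
# Assumes each record carries "category", "answer", and a model "prediction".
from collections import defaultdict

def accuracy_by_category(records):
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        if r["prediction"].strip().lower() == r["answer"].strip().lower():
            correct[r["category"]] += 1
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_category, overall
```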

## Citation
If you find WorldVQA useful for your research, please cite our work:
```bibtex
@misc{worldvqa2025,
  title={WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models},
  author={MoonshotAI},
  year={2025},
  howpublished={\url{https://github.com/MoonshotAI/WorldVQA}},
}
```