Dataset preview (first rows; the `image` column holds a base64-encoded PNG, elided below as `[base64 PNG]`):

Columns: `index` (int64, 0–3k), `category` (string, 8 classes), `question` (string, 583 unique values), `answer` (string, 1–102 chars), `image` (base64 string, 3k–6.77M chars), `language` (string, 2 classes), `difficulty` (string, 3 classes).

| index | category | question | answer | image | language | difficulty |
|---|---|---|---|---|---|---|
| 0 | Nature & Environment | What breed of dog is in the picture? | Greek Hound | [base64 PNG] | non-zh | medium |
| 1 | Nature & Environment | What breed of dog is in the picture? | European Russian Laika | [base64 PNG] | non-zh | medium |
| 2 | Nature & Environment | What breed of dog is in the picture? | Maremma Sheepdog | [base64 PNG] | non-zh | medium |
| 3 | Nature & Environment | What breed of cat is in the picture? | Munchkin cat | [base64 PNG] | non-zh | hard |
| 4 | Nature & Environment | What breed of monkey is in the picture? | Night monkey | [base64 PNG] | non-zh | easy |
| 5 | Nature & Environment | What breed of dog is in the picture? | Poodle | [base64 PNG] | non-zh | easy |
| 6 | Nature & Environment | What breed of dog is in the picture? | Newfoundland | [base64 PNG] | non-zh | easy |
| 7 | Nature & Environment | What kind of rabbit is in the picture? | Cat-rabbit | [base64 PNG] | non-zh | medium |
| 8 | Nature & Environment | What bird is in the picture? | Striated Grassbird | [base64 PNG] | non-zh | medium |
| 9 | Nature & Environment | What bird is in the picture? | Grey-sided Laughingthrush | [base64 PNG] | non-zh | medium |
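Since the `image` column stores each picture as a base64 string (the recurring `iVBORw0KGgo...` prefix in the raw data is simply the base64 encoding of the 8-byte PNG signature), a cell can be decoded back to image bytes with the standard library alone. A minimal sketch, where `decode_image_cell` is a hypothetical helper and `sample_cell` is a synthetic stand-in rather than a real row:

```python
import base64

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # standard 8-byte PNG file header

def decode_image_cell(cell: str) -> bytes:
    """Decode a base64 `image` cell and sanity-check the PNG header."""
    raw = base64.b64decode(cell)
    if not raw.startswith(PNG_SIGNATURE):
        raise ValueError("cell does not decode to a PNG")
    return raw

# Synthetic stand-in cell: base64 of the PNG signature plus filler bytes.
sample_cell = base64.b64encode(PNG_SIGNATURE + b"\x00" * 8).decode("ascii")
png_bytes = decode_image_cell(sample_cell)
print(len(png_bytes))  # 16
```

The decoded bytes can then be handed to any image library (e.g. `PIL.Image.open(io.BytesIO(png_bytes))`) for display or model input.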
# WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models

HomePage | Dataset | Paper | Code

## Abstract
We introduce WorldVQA, a benchmark designed to evaluate the atomic, vision-centric world knowledge of Multimodal Large Language Models (MLLMs). Current evaluations often conflate visual knowledge retrieval with reasoning; WorldVQA decouples these capabilities to measure strictly "what the model memorizes." The benchmark assesses the atomic capability of grounding and naming visual entities across a stratified taxonomy, spanning from common head-class objects to long-tail rarities. We expect WorldVQA to serve as a rigorous test of visual factuality, thereby establishing a standard for assessing the encyclopedic breadth and hallucination rates of current and next-generation frontier models.

## Details
WorldVQA is a meticulously curated benchmark designed to evaluate atomic vision-centric world knowledge in Multimodal Large Language Models (MLLMs). The dataset comprises 3,000 VQA pairs across 8 categories, with careful attention to linguistic and cultural diversity.
Note: Due to copyright concerns, the "People" category has been removed from this release. The original benchmark contains 3,500 VQA pairs across 9 categories.
## Leaderboard

Our evaluation reveals significant gaps in visual encyclopedic knowledge: no evaluated model surpasses the 50% accuracy threshold. We show a mini-leaderboard here; please see our paper or homepage for more information.

### Overall Performance

The leaderboard below shows overall performance on WorldVQA (first 8 categories, excluding "People" due to systematic refusal behaviors in closed-source models):
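Per-category accuracy on a benchmark like this reduces to comparing a model's free-form answer against the gold `answer` string. A minimal scoring sketch; the normalization rule here (lowercasing, stripping punctuation, collapsing whitespace) is an assumption for illustration, not the paper's official protocol:

```python
import string
from collections import defaultdict

def normalize(ans: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace (assumed rule)."""
    table = str.maketrans("", "", string.punctuation)
    return " ".join(ans.lower().translate(table).split())

def category_accuracy(rows):
    """rows: iterable of dicts with 'category', 'answer', 'prediction' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in rows:
        total[r["category"]] += 1
        if normalize(r["prediction"]) == normalize(r["answer"]):
            correct[r["category"]] += 1
    return {c: correct[c] / total[c] for c in total}

rows = [
    {"category": "Nature & Environment", "answer": "Greek Hound",
     "prediction": "greek hound."},
    {"category": "Nature & Environment", "answer": "Poodle",
     "prediction": "Labrador"},
]
print(category_accuracy(rows))  # {'Nature & Environment': 0.5}
```

Exact-match scoring like this is deliberately strict: it rewards recall of the precise entity name, which matches the benchmark's focus on memorized atomic knowledge rather than reasoning.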
## Citation

If you find WorldVQA useful for your research, please cite our work:

```bibtex
@misc{worldvqa2025,
  title={WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models},
  author={MoonshotAI},
  year={2025},
  howpublished={\url{https://github.com/MoonshotAI/WorldVQA}},
}
```