---
license: apache-2.0
task_categories:
  - visual-question-answering
language:
  - en
  - zh
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: train
        path: WorldVQA.tsv
    sep: "\t"
---

# WorldVQA

WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models

HomePage | Dataset | Paper | Code


## Abstract

We introduce WorldVQA, a benchmark designed to evaluate the atomic vision-centric world knowledge of Multimodal Large Language Models (MLLMs). Current evaluations often conflate visual knowledge retrieval with reasoning. In contrast, WorldVQA decouples these capabilities to strictly measure "what the model memorizes." The benchmark assesses the atomic capability of grounding and naming visual entities across a stratified taxonomy, spanning from common head-class objects to long-tail rarities. We expect WorldVQA to serve as a rigorous test of visual factuality, establishing a standard for assessing the encyclopedic breadth and hallucination rates of current and next-generation frontier models.

## Details

WorldVQA is a meticulously curated benchmark designed to evaluate atomic vision-centric world knowledge in Multimodal Large Language Models (MLLMs). The dataset comprises 3,000 VQA pairs across 8 categories, with careful attention to linguistic and cultural diversity.

**Note:** Due to copyright concerns, the "People" category has been removed from this release. The original benchmark contains 3,500 VQA pairs across 9 categories.
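For a quick look at the data, below is a minimal loading sketch. It assumes the Hugging Face `datasets` library and the default config declared in this card's metadata (a single `train` split backed by the tab-separated `WorldVQA.tsv`); the hub repo id `moonshotai/WorldVQA` is an assumption, and the column names are not documented here, so the snippet just inspects the schema.

```python
# Minimal loading sketch (not an official snippet). Assumes the Hugging Face
# `datasets` library and that the dataset is hosted at moonshotai/WorldVQA with
# the default config from this card (train split, tab-separated WorldVQA.tsv).
from datasets import load_dataset

ds = load_dataset("moonshotai/WorldVQA", split="train")
print(ds.column_names)  # inspect the schema; column names are not documented here
print(ds[0])            # first VQA pair

# Equivalent local read of the raw TSV, if the file is checked out:
import pandas as pd

df = pd.read_csv("WorldVQA.tsv", sep="\t")
print(df.shape)  # expected: 3,000 rows for this release
```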


## Leaderboard

Our evaluation reveals significant gaps in visual encyclopedic knowledge, with no model surpassing the 50% accuracy threshold.
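As a rough illustration of how such an accuracy figure is tallied, the sketch below scores predictions by normalized exact match against reference entity names. The normalization rule and the example field values are illustrative assumptions on our part, not the paper's judging protocol.

```python
# Illustrative accuracy computation. The normalized-exact-match rule here is an
# assumption for the sketch; see the paper for the actual judging protocol.
def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't count."""
    return " ".join(text.lower().strip().split())

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference after normalization."""
    assert len(predictions) == len(references)
    correct = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical model outputs vs. ground-truth entity names:
preds = ["golden gate bridge", "a small dog"]
refs = ["Golden Gate Bridge", "Shiba Inu"]
print(f"accuracy = {accuracy(preds, refs):.0%}")  # 50%
```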

We show a mini-leaderboard here; please see our paper or homepage for more information.

### Overall Performance

The leaderboard below shows the overall performance on WorldVQA (first 8 categories, excluding "People" due to systematic refusal behaviors in closed-source models):

[Leaderboard figure: overall model performance on WorldVQA]

## Citation

If you find WorldVQA useful for your research, please cite our work:

```bibtex
@misc{worldvqa2025,
  title={WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models},
  author={MoonshotAI},
  year={2025},
  howpublished={\url{https://github.com/MoonshotAI/WorldVQA}},
}
```