NarsAI Nealeon committed
Commit 52fc2fd · 0 Parent(s)

Duplicate from moonshotai/WorldVQA


Co-authored-by: Haoyu Lu <Nealeon@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,60 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
+ WorldVQA.tsv filter=lfs diff=lfs merge=lfs -text
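
Each line above maps a glob pattern to the Git LFS filter, so matching files are stored as pointers rather than blobs. A rough sketch of how such patterns select files, using Python's `fnmatch` (Git's real matching has extra rules for `**` and path anchoring, so this is only an approximation, and only a few of the patterns are reproduced):

```python
from fnmatch import fnmatch

# A few of the glob patterns declared in .gitattributes above.
lfs_patterns = ["*.parquet", "*.png", "*tfevents*", "WorldVQA.tsv"]

def tracked_by_lfs(filename: str) -> bool:
    """Approximate check: does any of the LFS globs match this name?"""
    return any(fnmatch(filename, pattern) for pattern in lfs_patterns)

print(tracked_by_lfs("WorldVQA.tsv"))  # -> True
print(tracked_by_lfs("README.md"))     # -> False
```

This is why cloning the repo without `git lfs pull` yields small pointer files in place of the actual data.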
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - visual-question-answering
+ language:
+ - en
+ - zh
+ size_categories:
+ - 1K<n<10K
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: "WorldVQA.tsv"
+   sep: "\t"
+ ---
+
+ # WorldVQA
+ ## WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models
+
+ <p align="center">
+ <a href="https://worldvqa2026.github.io/WorldVQA/"> HomePage</a> |
+ <a href="https://huggingface.co/datasets/moonshotai/WorldVQA"> Dataset</a> |
+ <a href="https://github.com/MoonshotAI/WorldVQA/blob/master/paper/worldvqa.pdf"> Paper</a> |
+ <a href="https://github.com/MoonshotAI/WorldVQA/"> Code</a>
+ </p>
+
+ ![Benchmark results bar chart](images/barchart.png)
+
+ ## Abstract
+ We introduce WorldVQA, a benchmark designed to evaluate the atomic vision-centric world knowledge of Multimodal Large Language Models (MLLMs). Current evaluations often conflate visual knowledge retrieval with reasoning. In contrast, WorldVQA decouples these capabilities to strictly measure "what the model memorizes." The benchmark assesses the atomic capability of grounding and naming visual entities across a stratified taxonomy, spanning from common head-class objects to long-tail rarities. We expect WorldVQA to serve as a rigorous test of visual factuality, establishing a standard for assessing the encyclopedic breadth and hallucination rates of current and next-generation frontier models.
+ <img src="images/main_figure.jpg">
+
+ ## Details
+
+ **WorldVQA** is a meticulously curated benchmark designed to evaluate atomic vision-centric world knowledge in Multimodal Large Language Models (MLLMs). The dataset comprises **3,000 VQA pairs** across **8 categories**, with careful attention to linguistic and cultural diversity.
+
+ > **Note:** Due to copyright concerns, the "People" category has been removed from this release. The original benchmark contains 3,500 VQA pairs across 9 categories.
+
+ ![Dataset statistics](images/statistics.png)
+
+ ## Leaderboard
+
+ Our evaluation reveals significant gaps in visual encyclopedic knowledge, with no model surpassing the 50% accuracy threshold.
+
+ We show a mini-leaderboard here; please see our paper or homepage for more information.
+
+ ### Overall Performance
+
+ The leaderboard below shows the overall performance on WorldVQA (first 8 categories, excluding "People" due to systematic refusal behaviors in closed-source models):
+
+ ![Leaderboard](images/leaderboard.png)
+
+ ## Citation
+
+ If you find WorldVQA useful for your research, please cite our work:
+
+ ```bibtex
+ @misc{worldvqa2025,
+   title={WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models},
+   author={MoonshotAI},
+   year={2025},
+   howpublished={\url{https://github.com/MoonshotAI/WorldVQA}},
+ }
+ ```
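
The dataset card above declares `WorldVQA.tsv` as a single train split with `sep: "\t"`. A minimal sketch of parsing such a tab-separated file with the standard library (the column names here are hypothetical stand-ins; the real header lives in `WorldVQA.tsv`, which is an LFS object and must be pulled before reading):

```python
import csv
import io

# Toy stand-in for WorldVQA.tsv; the real file is a ~3.3 GB LFS object
# and its actual column names may differ from these hypothetical ones.
sample = "index\tquestion\tanswer\n0\tWhat bird is this?\tkingfisher\n"

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print(rows[0]["answer"])  # -> kingfisher
```

With the real file pulled, the same `csv.DictReader(open("WorldVQA.tsv"), delimiter="\t")` pattern applies, or the split can be loaded through the declared dataset config instead.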
WorldVQA.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f04b60300e877a94a41d8b5cdeaee35f22995edd7d3db9406c00201e10369a5
+ size 3343211650
images/arxiv_small.svg ADDED
images/barchart.png ADDED

Git LFS Details

  • SHA256: c7181377e3ef21fca44946d4e66a52b695bf1f9abb875cb96d80a5e9331f3911
  • Pointer size: 131 Bytes
  • Size of remote file: 369 kB
images/github_small.svg ADDED
images/kimi_small.png ADDED

Git LFS Details

  • SHA256: 61bc910bcb3db0995e4dbf499f5b5a883f3473c3df441963d49ecc7eaba9160d
  • Pointer size: 131 Bytes
  • Size of remote file: 235 kB
images/leaderboard.png ADDED

Git LFS Details

  • SHA256: 57979ce1c7735477a9f3e933dde0b9384124b780b1a79b1c7e8c1f1ab271ad8c
  • Pointer size: 131 Bytes
  • Size of remote file: 495 kB
images/main_figure.jpg ADDED

Git LFS Details

  • SHA256: 91ee05aa4beb481df2a2037e063f23e1e635ed4cd73197e29a01cb5ca584a72d
  • Pointer size: 131 Bytes
  • Size of remote file: 357 kB
images/statistics.png ADDED

Git LFS Details

  • SHA256: 9e8c2978b2bf3fd3be129f77185e4abac309a7ee0c23aa326568ad24b062a563
  • Pointer size: 131 Bytes
  • Size of remote file: 199 kB