---
license: cc-by-nc-4.0
language:
- en
pretty_name: VKnowU
configs:
- config_name: VKnowU_v1
data_files:
- split: test
path: VKnowU.json
---
# VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs <a href="https://arxiv.org/abs/2511.20272"> 📖ArXiv</a>

While Multimodal Large Language Models (MLLMs) have become adept at recognizing objects, they often lack an intuitive, human-like understanding of the world's underlying physical and social principles. This high-level, vision-grounded semantics, which we term *visual knowledge*, forms a bridge between perception and reasoning, yet remains underexplored in current MLLMs.
To systematically evaluate this capability, we present [📊VKnowU](https://huggingface.co/datasets/OpenGVLab/VKnowU), a comprehensive benchmark featuring 1,680 questions across 1,249 videos, covering 8 core types of visual knowledge spanning both world-centric (e.g., intuitive physics) and human-centric (e.g., subjective intentions) understanding.

# Example
```json
{
    "qid": "OA@1",
    "options": [
        "A. The object that appears in the first clip",
        "B. The object that appears in the second clip"
    ],
    "solution": "<answer>B</answer>",
    "problem_type": "multiple choice",
    "problem": "Which object could be more easily reshaped by a child?",
    "data_type": "video"
}
```
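Since the ground-truth option letter is wrapped in an `<answer>...</answer>` tag, evaluation scripts typically need to extract it before comparing against a model's prediction. Below is a minimal sketch of such a parser; the `extract_answer` helper is our own illustration, not part of an official VKnowU toolkit.

```python
import re

# One record in the format shown above (from VKnowU.json, split "test").
record = {
    "qid": "OA@1",
    "options": [
        "A. The object that appears in the first clip",
        "B. The object that appears in the second clip",
    ],
    "solution": "<answer>B</answer>",
    "problem_type": "multiple choice",
    "problem": "Which object could be more easily reshaped by a child?",
    "data_type": "video",
}

def extract_answer(solution: str) -> str:
    """Pull the option letter out of the <answer>...</answer> tag."""
    m = re.search(r"<answer>\s*([A-Z])\s*</answer>", solution)
    return m.group(1) if m else ""

print(extract_answer(record["solution"]))  # B
```

The same helper can be applied to a model's raw output if the model is prompted to answer in the same `<answer>X</answer>` format, making accuracy a simple string comparison of the extracted letters.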
# Citation
If you find this work useful for your research, please consider citing VKnowU. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```bibtex
@article{jiang2025vknowu,
    title={VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs},
    author={Jiang, Tianxiang and Xia, Sheng and Xu, Yicheng and Wu, Linquan and Zeng, Xiangyu and Wang, Limin and Qiao, Yu and Wang, Yi},
    journal={arXiv preprint arXiv:2511.20272},
    year={2025}
}
```