---
license: cc-by-nc-4.0
language:
  - en
pretty_name: VKnowU
configs:
  - config_name: VKnowU_v1
    data_files:
      - split: test
        path: VKnowU.json
---

# VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs

[📖 ArXiv](https://arxiv.org/abs/2511.20272)

While Multimodal Large Language Models (MLLMs) have become adept at recognizing objects, they often lack an intuitive, human-like understanding of the world's underlying physical and social principles. These high-level, vision-grounded semantics, which we term **visual knowledge**, form a bridge between perception and reasoning, yet remain underexplored in current MLLMs.

To systematically evaluate this capability, we present 📊 **VKnowU**, a comprehensive benchmark featuring 1,680 questions across 1,249 videos, covering 8 core types of visual knowledge that span both world-centric (e.g., intuitive physics) and human-centric (e.g., subjective intentions) understanding.

*Overview of VKnowU*

## Example

```json
{
    "qid": "OA@1",
    "options": [
        "A. The object that appears in the first clip",
        "B. The object that appears in the second clip"
    ],
    "solution": "<answer>B</answer>",
    "problem_type": "multiple choice",
    "problem": "Which object could be more easily reshaped by a child?",
    "data_type": "video"
}
```
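For reference, here is a minimal Python sketch of how a record in this schema can be consumed. It assumes only the fields shown in the example above; the `parse_answer` helper is illustrative, not part of the dataset, and extracts the gold letter from the `<answer>...</answer>` tag in `solution`:

```python
import re

def parse_answer(solution: str) -> str:
    """Extract the answer letter from a '<answer>B</answer>' string."""
    match = re.search(r"<answer>\s*([A-Z])\s*</answer>", solution)
    if match is None:
        raise ValueError(f"no <answer> tag found in: {solution!r}")
    return match.group(1)

# A record mirroring the example above; in practice you would load the
# full list of records from VKnowU.json with json.load().
record = {
    "qid": "OA@1",
    "options": [
        "A. The object that appears in the first clip",
        "B. The object that appears in the second clip",
    ],
    "solution": "<answer>B</answer>",
    "problem_type": "multiple choice",
    "problem": "Which object could be more easily reshaped by a child?",
    "data_type": "video",
}

print(parse_answer(record["solution"]))  # -> B
```

The same helper can be used to score model outputs, provided the model is prompted to wrap its choice in the same `<answer>` tags as the ground truth.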

## Citation

If you find this work useful for your research, please consider citing VKnowU. Your acknowledgement helps us continue contributing resources to the research community.

```bibtex
@article{jiang2025vknowu,
  title={VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs},
  author={Jiang, Tianxiang and Xia, Sheng and Xu, Yicheng and Wu, Linquan and Zeng, Xiangyu and Wang, Limin and Qiao, Yu and Wang, Yi},
  journal={arXiv preprint arXiv:2511.20272},
  year={2025}
}
```