---
license: cc-by-nc-sa-4.0
---
# WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs

<font size=7><div align='center' > [[🏠 Project Page](https://jaaackhongggg.github.io/WorldSense/)] [[📖 arXiv Paper](https://arxiv.org/pdf/2502.04326)] [[🤗 Dataset](https://huggingface.co/datasets/honglyhly/WorldSense)] [[🏆 Leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard)] </div></font>

WorldSense is the **first** benchmark to assess real-world omni-modal understanding with _visual, audio, and text_ inputs. 🌟

---

## 🔥 News
* **`2025.02`** 🌟 We release WorldSense, the first benchmark for real-world omnimodal understanding of MLLMs.

## 👀 WorldSense Overview

We introduce **WorldSense**, the **first** benchmark to assess multi-modal video understanding that simultaneously encompasses _visual, audio, and text_ inputs. In contrast to existing benchmarks, **WorldSense** has several distinctive features:

* **Collaboration of omni-modality**. We design the evaluation tasks to feature a strong coupling of audio and video, requiring models to effectively utilize the **synergistic perception of omni-modality**;
* **Diversity of videos and tasks**. WorldSense encompasses a diverse collection of **1,662** audio-visual synchronised videos, systematically categorized into **8** primary domains and **67** fine-grained subcategories to cover a broad range of scenarios, together with **3,172** multi-choice QA pairs across **26** distinct tasks to enable comprehensive evaluation;
* **High-quality annotations**. All QA pairs are manually labeled by 80 expert annotators, with multiple rounds of correction to ensure quality.

Based on **WorldSense**, we extensively evaluate various state-of-the-art models. The experimental results indicate that existing models face significant challenges in understanding real-world scenarios (the best model reaches only 48% accuracy). We hope **WorldSense** can provide a platform for evaluating the ability to construct and understand coherent contexts from omni-modality.
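
For context, the benchmark statistics quoted above work out to roughly 1.9 QA pairs per video and exactly 122 per task:

```python
# Quick sanity arithmetic on the benchmark statistics stated above
# (1,662 videos, 3,172 QA pairs, 26 tasks).
videos, qa_pairs, tasks = 1662, 3172, 26

qa_per_video = qa_pairs / videos  # ~1.91
qa_per_task = qa_pairs / tasks    # 122.0

print(f"QA pairs per video: {qa_per_video:.2f}")
print(f"QA pairs per task:  {qa_per_task:.1f}")
```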
25
+
26
+
27
+
28
+ <p align="center">
29
+ <img src="./asset/distribution.png" width="100%" height="100%">
30
+ </p>
31
+
32
+ ## 📐 Dataset Examples
33
+
34
+ <p align="center">
35
+ <img src="./asset/sample.png" width="100%" height="100%">
36
+ </p>
37
+
38
+
39
+
40
+
## 🔍 Dataset
Please download our WorldSense from [here](https://huggingface.co/datasets/honglyhly/WorldSense).

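If you use the 🤗 `datasets` library, the benchmark can typically be loaded straight from the Hub. A minimal sketch — the split name here is an assumption, so check the dataset card for the actual configuration:

```python
def load_worldsense(split: str = "test"):
    """Load the WorldSense benchmark from the Hugging Face Hub.

    NOTE: the default split name "test" is an assumption -- consult the
    dataset card for the real split/configuration names.
    Requires `pip install datasets`.
    """
    from datasets import load_dataset  # imported lazily so the helper is importable without the package
    return load_dataset("honglyhly/WorldSense", split=split)
```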
## 🔮 Evaluation Pipeline
📍 **Evaluation**:
Thanks to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), the evaluation of current MLLMs on WorldSense can be performed easily. Please refer to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for details.

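Concretely, a VLMEvalKit run usually follows its standard install-then-run flow. A sketch — the exact dataset identifier and model name passed to `run.py` are assumptions, so check VLMEvalKit's supported-benchmark and model lists:

```shell
# Hypothetical invocation sketch: dataset/model identifiers must match
# names registered in VLMEvalKit; see its README for the supported lists.
git clone https://github.com/open-compass/VLMEvalKit
cd VLMEvalKit
pip install -e .
python run.py --data WorldSense --model Qwen2-VL-7B-Instruct
```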
📍 **Leaderboard**:

If you want to add your model to our [leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard), please contact **jaaackhong@gmail.com**.

## 📈 Experimental Results
- **Evaluation results of state-of-the-art MLLMs.**

<p align="center">
<img src="./asset/overall_performance.png" width="96%" height="50%">
</p>

- **Fine-grained results by task category.**

<p align="center">
<img src="./asset/fine_task.png" width="96%" height="50%">
</p>

- **Fine-grained results by audio type.**

<p align="center">
<img src="./asset/fine_audio.png" width="96%" height="50%">
</p>

- **In-depth analysis of real-world omnimodal understanding.**

<center>Impact of vision information.</center>
<p align="center">
<img src="./asset/ablation_vision.png" width="96%" height="96%">
</p>

<center>Impact of audio information.</center>
<p align="center">
<img src="./asset/ablation_audio.png" width="96%" height="96%">
</p>

<center>Impact of audio information for Video MLLMs.</center>
<p align="center">
<img src="./asset/ablation_audio_v.png" width="96%" height="96%">
</p>

<center>Impact of video frames.</center>
<p align="center">
<img src="./asset/video_frame_curve.png" width="96%" height="96%">
</p>

## 📖 Citation

If you find WorldSense helpful for your research, please consider citing our work. Thanks!

```bibtex
@article{hong2025worldsenseevaluatingrealworldomnimodal,
      title={WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs},
      author={Jack Hong and Shilin Yan and Jiayin Cai and Xiaolong Jiang and Yao Hu and Weidi Xie},
      year={2025},
      eprint={2502.04326},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.04326},
}
```