honglyhly committed
Commit d3a21e2 · verified · 1 Parent(s): 01ca332

Update README.md

Files changed (1): README.md (+15 -5)
README.md CHANGED
@@ -1,15 +1,25 @@
- # WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs
- <font size=3><div align='left' > [[🏠 Project Page](https://jaaackhongggg.github.io/WorldSense/)] [[📖 arXiv Paper](https://arxiv.org/pdf/2502.04326)] [[🔍 Eval Code](https://github.com/open-compass/VLMEvalKit/tree/main)] [[🤗 Dataset](https://huggingface.co/datasets/honglyhly/WorldSense)] [[🏆 Leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard)] </div></font>
- WorldSense is the **first** benchmark to assess the real-world omni-modal understanding with _visual, audio, and text_ input. 🌟
+ <div align="center">
+ <br>
+ <h1>WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs</h1>
+ Jack Hong<sup>1</sup>, [Shilin Yan](https://scholar.google.com/citations?user=2VhjOykAAAAJ&hl=zh-CN&oi=ao)<sup>1†</sup>, Jiayin Cai<sup>1</sup>, [Xiaolong Jiang](https://scholar.google.com/citations?user=G0Ow8j8AAAAJ&hl=zh-CN&oi=ao)<sup>1</sup>, [Yao Hu](https://scholar.google.com/citations?user=LIu7k7wAAAAJ&hl=en)<sup>1</sup>, [Weidi Xie](https://scholar.google.com/citations?user=Vtrqj4gAAAAJ&hl=en)<sup>2‡</sup>
+ <div class="is-size-6 publication-authors">
+ <p class="footnote">
+ <span class="footnote-symbol"><sup>†</sup></span>Project Leader
+ <span class="footnote-symbol"><sup>‡</sup></span>Corresponding Author
+ </p>
+ </div>
+ <sup>1</sup>Xiaohongshu Inc. <sup>2</sup>Shanghai Jiao Tong University
+ <font size=3><div align='center' > [[🏠 Project Page](https://jaaackhongggg.github.io/WorldSense/)] [[📖 arXiv Paper](https://arxiv.org/pdf/2502.04326)] [[🤗 Dataset](https://huggingface.co/datasets/honglyhly/WorldSense)] [[🏆 Leaderboard](https://jaaackhongggg.github.io/WorldSense/#leaderboard)] </div></font>
+ </div>

---

## 🔥 News
- * **`2024.06.03`** 🌟 We release Video-MME, the first benchmark for real-world omnimodal understanding of MLLMs.
+ * **`2024.06.03`** 🌟 We release WorldSense, the first benchmark for real-world omnimodal understanding of MLLMs.

@@ -45,7 +55,7 @@ Please download our WorldSense from [here](https://huggingface.co/datasets/honglyhly/WorldSense).

## 🔮 Evaluation Pipeline
📍 **Evaluation**:
- Thanks for [VLMEvalkit](https://github.com/open-compass/VLMEvalKit), we can perform the evaluation of current MLLMs on WorldSense easily. Please refer to [VLMEvalkit](https://github.com/open-compass/VLMEvalKit) for details.
+ Thanks to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for reproducing our evaluation. Please refer to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) for details.

📍 **Leaderboard**:
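
For the download step referenced in the second hunk, a minimal sketch of fetching the dataset from the Hugging Face Hub. It assumes the `huggingface_hub` package is installed (`pip install huggingface_hub`); the repo id `honglyhly/WorldSense` is taken from the README's dataset link.

```python
# Minimal sketch: fetch the WorldSense dataset from the Hugging Face Hub.
from huggingface_hub import snapshot_download

# Downloads the full dataset snapshot and returns the local path.
local_dir = snapshot_download(
    repo_id="honglyhly/WorldSense",
    repo_type="dataset",  # dataset repo, not a model repo
)
print(f"WorldSense files available at: {local_dir}")
```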
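For the evaluation step, a minimal sketch of launching a run through VLMEvalKit's `run.py` entry point, assuming the repo is cloned into `./VLMEvalKit` and installed per its own README. The dataset identifier `WorldSense` and the model identifier `Qwen2-VL-7B-Instruct` are assumptions; check VLMEvalKit's supported dataset and model lists for the exact names.

```python
# Minimal sketch: launch a VLMEvalKit evaluation of one MLLM on WorldSense.
import subprocess

subprocess.run(
    [
        "python", "run.py",
        "--data", "WorldSense",             # assumed dataset identifier in VLMEvalKit
        "--model", "Qwen2-VL-7B-Instruct",  # assumed model identifier; any supported MLLM works
    ],
    cwd="VLMEvalKit",  # hypothetical path to the cloned VLMEvalKit repo
    check=True,        # raise if the evaluation run fails
)
```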