EliYuan00 committed
Commit e4e8715 · 1 Parent(s): 0cf2320

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -44,6 +44,10 @@ for Comprehensive Video Understanding". It mainly consists of three parts: `vide
 
 In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a standard evaluation set for frontier models like Gemini and GPT. However, as model capabilities rapidly evolve, scores on existing benchmarks are saturating, yet a clear gap remains between **leaderboard performance and actual user experience**. This indicates that current evaluation paradigms fail to capture true video understanding abilities. To address this, we spent a year redesigning the evaluation system from first principles and now introduce **Video-MME v2**—a progressive and robust benchmark designed to drive the next generation of video understanding models.
 
+<p align="center">
+    <img src="assets/teaser.png" width="100%" height="100%">
+</p>
+
 - **Dataset Size**
 
 The dataset consists of 800 videos and 3,200 QA pairs, with each video associated with four MCQ-based questions.
@@ -71,10 +75,6 @@ In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a st
 
 A non-linear scoring mechanism is applied to all question groups, and a first error truncation mechanism is used for reasoning coherence groups.
 
-<p align="center">
-    <img src="assets/teaser.png" width="100%" height="100%">
-</p>
-
 ---
 
 # 🍺 Example
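
The diff above mentions a non-linear scoring mechanism for all question groups and a first-error truncation mechanism for reasoning coherence groups, but the README excerpt does not give the actual formula. The following is a minimal hypothetical sketch of how such a scheme could work; the function name `group_score`, the boolean-list input, and the quadratic mapping are all illustrative assumptions, not the benchmark's real implementation.

```python
def group_score(correct, coherence=False):
    """Hypothetical group scoring sketch (not Video-MME v2's actual formula).

    correct:   list of booleans, one per question in the group, in order.
    coherence: if True, apply first-error truncation -- answers after
               the first mistake earn no credit.
    """
    if coherence:
        # First-error truncation: count correct answers only up to
        # the first mistake in the reasoning chain.
        n = 0
        for c in correct:
            if not c:
                break
            n += 1
    else:
        n = sum(correct)
    # Illustrative non-linear mapping: quadratic in the fraction correct,
    # so partial credit grows super-linearly toward full credit.
    return (n / len(correct)) ** 2
```

Under this sketch, a group with one early mistake is penalized far more heavily when coherence truncation applies, which matches the stated intent of rewarding coherent reasoning chains over isolated correct answers.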