EliYuan00 committed ff85b25 (1 parent: cb8b505): Update README.md 2026/4/6
Files changed (1): README.md (+100 −1)
pretty_name: Video-MME-v2
size_categories:
- 1K<n<10K
---

# Video-MME-v2

<p align="center">
    <img src="assets/logo.png" width="100%" height="100%">
</p>

<font size=7><div align='center' > [[🍎 Project Page](https://video-mme-v2.netlify.app/)] [[📖 Paper](https://github.com/MME-Benchmarks/Video-MME-v2)] [[🤗 Dataset](https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2)] [[🏆 Leaderboard](https://video-mme-v2.netlify.app/#leaderboard)] </div></font>

---

### 🤗 About This Repo

This repository contains the annotation data for "Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding". It consists of three parts: `videos/`, `test.parquet`, and `subtitle.zip`.

- `videos/` contains **800 1080p MP4 files**, organized sequentially into 40 zip archives of 20 videos each. For example, `001.mp4` through `020.mp4` are stored in `001.zip`.

- `test.parquet` contains **3,200 QA instances**, four questions per video. Each instance includes the **question**, **options**, **answer**, and auxiliary metadata such as the **video id** and **QA category**.

- `subtitle.zip` contains **800 JSONL files**, named `{001..800}.jsonl`, one per video id. Each file holds word-level subtitle entries with timestamps.
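The layout above can be navigated with a short Python sketch. The archive numbering (`001.zip`–`040.zip`, 20 videos each) follows the example given for `001.zip`; the `DATA_DIR` path is a placeholder for a local download, and the exact parquet column names should be taken from the file's schema:

```python
import json
import zipfile
from pathlib import Path

DATA_DIR = Path("Video-MME-v2")  # hypothetical path to a local download

def zip_for_video(video_id: int) -> str:
    """Map a video id (1..800) to its archive, assuming archives are
    numbered 001.zip..040.zip with 20 videos each (001-020 -> 001.zip)."""
    return f"{(video_id - 1) // 20 + 1:03d}.zip"

def load_qa():
    """Load the 3,200 QA instances; see the parquet schema for columns."""
    import pandas as pd  # requires pyarrow or fastparquet
    return pd.read_parquet(DATA_DIR / "test.parquet")

def load_subtitles(video_id: int) -> list[dict]:
    """Read the word-level subtitle entries for one video from subtitle.zip."""
    with zipfile.ZipFile(DATA_DIR / "subtitle.zip") as zf:
        with zf.open(f"{video_id:03d}.jsonl") as f:
            return [json.loads(line) for line in f if line.strip()]
```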
37
+
38
+ <p align="center">
39
+ <img src="assets/teaser.pdf" width="100%" height="100%">
40
+ </p>
41
+
42
+ ---
43
+
44
+ ### 👀 Dataset Intro
45
+
46
+ In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a standard evaluation set for frontier models like Gemini and GPT. However, as model capabilities rapidly evolve, scores on existing benchmarks are saturating, yet a clear gap remains between **leaderboard performance and actual user experience**. This indicates that current evaluation paradigms fail to capture true video understanding abilities. To address this, we spent a year redesigning the evaluation system from first principles and now introduce **Video-MME v2**—a progressive and robust benchmark designed to drive the next generation of video understanding models.

- **Dataset Size**

  The dataset consists of 800 videos and 3,200 QA pairs, with four multiple-choice questions per video.

- **Multi-level Evaluation Hierarchy**

  - 🔍 **Level 1:** Retrieval & Aggregation
  - ⏱️ **Level 2:** Level 1 + Temporal Understanding
  - 🧠 **Level 3:** Level 2 + Complex Reasoning

- **Group-based Evaluation Strategy**

  - **Capability consistency groups** examine the breadth of a specific fundamental perception skill.
  - **Reasoning coherence groups** assess the depth of a model's reasoning ability.

- **Video Sources**

  All videos are collected from YouTube. Over 80% were published in 2025 or later, with nearly 40% published after October 2025.

- **Video Categories**

  The dataset spans four top-level domains, further divided into 31 fine-grained subcategories.

- **Metrics**

  A non-linear scoring mechanism is applied to all question groups, and a first-error truncation mechanism is applied to reasoning coherence groups.
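
The metrics can be illustrated with a hypothetical sketch. First-error truncation follows from its name: in a reasoning coherence group, answers after the first mistake earn no credit, since later steps depend on the failed one. The quadratic group score below is only a stand-in for the actual non-linear formula, which is defined in the paper:

```python
def truncate_at_first_error(correct_flags: list[bool]) -> list[bool]:
    """First-error truncation for a reasoning coherence group: every answer
    after (and including) the first wrong one is scored as incorrect."""
    truncated, failed = [], False
    for ok in correct_flags:
        failed = failed or not ok
        truncated.append(ok and not failed)
    return truncated

def group_score(correct_flags: list[bool], exponent: float = 2.0) -> float:
    """Hypothetical non-linear group score: the fraction of correct answers
    raised to a power, rewarding consistently answered groups. The real
    scoring function is specified in the Video-MME-v2 paper."""
    frac = sum(correct_flags) / len(correct_flags)
    return frac ** exponent
```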

---

### 🙌 Example

<video controls width="100%" src="assets/demo.mp4"></video>

**Q1:** Does the ball exist underneath any of the shells?
A. No.
B. Yes.
C. Cannot be determined.

**Q2:** Underneath which shell is the ball located at the end?
A. There is no ball under any shell.
B. The third shell.
C. The sixth shell.
D. The second shell.
E. The seventh shell.
F. The fifth shell.
G. The fourth shell.
H. The first shell.

**Q3:** The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located after the first swap?
A. There is no ball under any shell.
B. The seventh shell.
C. The fourth shell.
D. The fifth shell.
E. The sixth shell.
F. The second shell.
G. The third shell.
H. The first shell.

**Q4:** The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located initially?
A. The seventh shell.
B. The fourth shell.
C. The fifth shell.
D. The third shell.
E. The second shell.
F. There is no ball under any shell.
G. The first shell.
H. The sixth shell.