---
license: mit
task_categories:
- video-text-to-text
language:
- en
tags:
- benchmark
- video
- multimodal
- MCQ
pretty_name: Video-MME-v2
size_categories:
- 1K<n<10K
---
<p align="center">
<img src="assets/logo.png" width="100%" height="100%" alt="Video-MME-v2 logo">
</p>
<div align="center">
[Project Page](https://video-mme-v2-tmp.netlify.app)
[Paper](https://arxiv.org/abs/2604.05015)
[GitHub](https://github.com/MME-Benchmarks/Video-MME-v2)
[Leaderboard](https://video-mme-v2-tmp.netlify.app/#leaderboard)
</div>
<!-- <p align="center">
<a href="https://video-mme-v2.netlify.app/">Project Page</a> |
<a href="https://arxiv.org/abs/2604.05015">Paper</a> |
<a href="https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2">Dataset</a> |
<a href="https://video-mme-v2.netlify.app/#leaderboard">Leaderboard</a>
</p> -->
---
# About This Repo
This repository contains annotation data for "[Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding](https://arxiv.org/abs/2604.05015)". It mainly consists of three parts: `videos/`, `test.parquet`, and `subtitle.zip`.
- `videos/` contains **800 1080p MP4 files**, organized sequentially into 40 zip archives. For example, `001.mp4` to `020.mp4` are stored in `001.zip`.
- `test.parquet` contains **3,200 QA instances**, with each video paired with **4 questions**. Each instance includes the **question**, **options**, **answer**, and auxiliary metadata such as the **video id** and **task type**.
- `subtitle.zip` contains **800 JSONL files**, each corresponding to a unique **video id**, with word-level entries and timestamps.
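The layout above can be sketched in code. This is a minimal illustration with in-memory stand-ins; the field names used here (`video_id`, `task_type`, `question`, `options`, `answer` for the QA table, and `word`, `start`, `end` for subtitle entries) are assumptions, not the confirmed schema — inspect `test.parquet` and `subtitle.zip` for the exact column and field names.

```python
import json
from collections import Counter

# Tiny in-memory stand-in for four rows of test.parquet (the real file
# has 3,200 rows); in practice you would load it with e.g.
# pandas.read_parquet("test.parquet"). Field names are assumptions.
rows = [
    {"video_id": "001", "task_type": "retrieval",
     "question": f"Q{i}", "options": ["A. ...", "B. ..."], "answer": "A"}
    for i in range(1, 5)
]

# Each video id should appear exactly four times (4 questions per video).
per_video = Counter(r["video_id"] for r in rows)
assert all(n == 4 for n in per_video.values())

# One word-level subtitle entry, parsed from a JSONL line
# (field names assumed):
entry = json.loads('{"word": "hello", "start": 1.25, "end": 1.60}')
```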
---
# About This Benchmark
In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a standard evaluation set for frontier models such as Gemini and GPT. As model capabilities have rapidly evolved, however, scores on existing benchmarks are saturating, yet a clear gap remains between **leaderboard performance and actual user experience**. This indicates that current evaluation paradigms fail to capture true video understanding ability. To address this, we spent a year redesigning the evaluation system from first principles and now introduce **Video-MME v2**, a progressive and robust benchmark designed to drive the next generation of video understanding models.
<p align="center">
<img src="assets/teaser.png" width="100%" height="100%" alt="Teaser">
</p>
- **Dataset Size**
The dataset consists of 800 videos and 3,200 QA pairs, with each video associated with four MCQ-based questions.
- **Multi-level Evaluation Hierarchy**
  - **Level 1:** Retrieval & Aggregation
  - **Level 2:** Level 1 + Temporal Understanding
  - **Level 3:** Level 2 + Complex Reasoning
- **Group-based Evaluation Strategy**
- **Capability consistency groups** examine the breadth of a specific fundamental perception skill.
  - **Reasoning coherence groups** assess the depth of a model's reasoning ability.
- **Video Sources**
All videos are collected from YouTube. Over 80% were published in 2025 or later, with nearly 40% published after October 2025.
- **Video Categories**
The dataset includes four top-level domains, further divided into 31 fine-grained subcategories.
- **Metrics**
A non-linear scoring mechanism is applied to all question groups, and a first-error truncation mechanism is used for reasoning coherence groups.
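One plausible reading of the first-error truncation rule for reasoning coherence groups can be sketched as follows. The function name and the scoring details are illustrative assumptions, not the official metric; see the paper for the exact definition.

```python
from typing import Sequence

def truncated_score(results: Sequence[bool]) -> int:
    """Credit answers in group order until the first error, then stop.

    Hypothetical reading of 'first-error truncation': once a question in
    the reasoning chain is answered wrongly, later correct answers earn
    no credit, since the chain of reasoning is already broken.
    """
    score = 0
    for is_correct in results:
        if not is_correct:
            break
        score += 1
    return score

# Q1 and Q2 right, Q3 wrong: Q4 earns nothing even if guessed correctly.
print(truncated_score([True, True, False, True]))  # -> 2
```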
---
# About a Concrete Case
> **Why this example matters**
> This video QA group demonstrates our **Reasoning Coherence** evaluation strategy and **Multi-level Hierarchy**. To answer the final question about the ball's initial position correctly, a model must track the ball backwards through the temporal swaps. If a model guesses the initial position correctly but fails the intermediate swaps, our **first-error truncation mechanism** penalizes it for the flawed reasoning.
<p align="left">
<a href="https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2/resolve/main/assets/demo.mp4">
<img src="assets/demo_cover.png" width="45%" alt="Demo video cover"/>
</a>
</p>
<p align="left">
<strong>Click the cover image to view the demo video.</strong>
</p>
<p>
<strong>Q1:</strong> Did the ball exist underneath any of the shells?<br>
A. No.<br>
B. Yes. ✅<br>
C. Cannot be determined.
</p>
<p>
<strong>Q2:</strong> Underneath which shell was the ball located at the end?<br>
A. There is no ball under any shell.<br>
B. The third shell.<br>
C. The sixth shell.<br>
D. The second shell.<br>
E. The seventh shell.<br>
F. The fifth shell.<br>
G. The fourth shell. ✅<br>
H. The first shell.
</p>
<p>
<strong>Q3:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located after the first swap?<br>
A. There is no ball under any shell.<br>
B. The seventh shell.<br>
C. The fourth shell. ✅<br>
D. The fifth shell.<br>
E. The sixth shell.<br>
F. The second shell.<br>
G. The third shell.<br>
H. The first shell.
</p>
<p>
<strong>Q4:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located initially?<br>
A. The seventh shell.<br>
B. The fourth shell.<br>
C. The fifth shell.<br>
D. The third shell. ✅<br>
E. The second shell.<br>
F. There is no ball under any shell.<br>
G. The first shell.<br>
H. The sixth shell.
</p>
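The backward tracking this group requires can be illustrated with a toy simulation. The two permutations below are invented so that the toy run reproduces the answer pattern of the group (third shell initially, fourth shell after the first swap and at the end); they are not extracted from the actual video.

```python
def apply_swaps(pos: int, swaps) -> int:
    """Follow the ball forward through a sequence of swaps, where each
    swap is a dict mapping a shell's old position to its new position."""
    for swap in swaps:
        pos = swap.get(pos, pos)
    return pos

def invert(swap: dict) -> dict:
    """Reverse a swap, so the ball can be tracked backwards in time."""
    return {new: old for old, new in swap.items()}

# Invented permutations matching the answer pattern above (hypothetical):
swap1 = {3: 4, 4: 3}  # first swap moves the ball from shell 3 to shell 4
swap2 = {1: 2, 2: 1}  # second swap does not touch shell 4

# Forward: start at shell 3, end at shell 4 (Q2, Q3).
final = apply_swaps(3, [swap1, swap2])                        # -> 4
# Backward: from the final position, undo the swaps in reverse
# order to recover the initial shell (Q4).
initial = apply_swaps(final, [invert(swap2), invert(swap1)])  # -> 3
```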