---
license: mit
task_categories:
- video-text-to-text
language:
- en
tags:
- benchmark
- video
- multimodal
- MCQ
pretty_name: Video-MME-v2
size_categories:
- 1K<n<10K
---
<p align="center">
<img src="assets/logo.png" width="100%" height="100%" alt="Video-MME-v2 logo">
</p>
<div align="center">
[![Project](https://img.shields.io/badge/Project-Video--MME--v2-EA86BB)](https://video-mme-v2-tmp.netlify.app)
[![Paper](https://img.shields.io/badge/cs.CV-arXiv%3A2604.05015-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2604.05015)
[![GitHub](https://img.shields.io/badge/Github-Video--MME--v2-1D4ED8?logo=github&logoColor=white)](https://github.com/MME-Benchmarks/Video-MME-v2)
[![Leaderboard](https://img.shields.io/badge/🏆_Leaderboard-Rank-ffb703)](https://video-mme-v2-tmp.netlify.app/#leaderboard)
</div>
---
# 🤗 About This Repo
This repository contains annotation data for "[Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding](https://arxiv.org/abs/2604.05015)". It mainly consists of three parts: `videos/`, `test.parquet`, and `subtitle.zip`.
- `videos/` contains **800 1080p MP4 files**, organized sequentially into 40 zip archives. For example, `001.mp4` to `020.mp4` are stored in `001.zip`.
- `test.parquet` contains **3,200 QA instances**, with each video paired with **4 questions**. Each instance includes the **question**, **options**, **answer**, and auxiliary metadata such as the **video id** and **task type**.
- `subtitle.zip` contains **800 JSONL files**, each corresponding to a unique **video id**, with word-level entries and timestamps.
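A minimal sketch of how the QA annotations might be consumed and grouped per video. The column names used here (`videoID`, `question`, `answer`, ...) are assumptions based on the fields listed above; inspect `df.columns` on the real file before relying on them.

```python
import pandas as pd

# NOTE: the column names below (videoID, question, answer, ...) are
# assumptions based on the fields described above; check df.columns on the
# real file before relying on them.
# df = pd.read_parquet("test.parquet")  # needs pyarrow or fastparquet

def group_questions(df: pd.DataFrame) -> dict:
    """Map each video id to its list of QA rows (4 per video in this dataset)."""
    return {vid: grp.to_dict("records") for vid, grp in df.groupby("videoID")}

# Tiny synthetic stand-in with the assumed schema:
toy = pd.DataFrame({
    "videoID": ["001", "001", "002"],
    "question": ["q1", "q2", "q3"],
    "answer": ["B", "G", "A"],
})
groups = group_questions(toy)
print(len(groups["001"]))  # 2 rows for video 001 in this toy frame
```

The subtitle JSONL files can be read the same way with `zipfile.ZipFile("subtitle.zip")` and `json.loads` per line, one file per video id.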
---
# 🩷 About This Benchmark
In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a standard evaluation set for frontier models like Gemini and GPT. However, as model capabilities rapidly evolve, scores on existing benchmarks are saturating, yet a clear gap remains between **leaderboard performance and actual user experience**. This indicates that current evaluation paradigms fail to capture true video understanding abilities. To address this, we spent a year redesigning the evaluation system from first principles and now introduce **Video-MME v2**, a progressive and robust benchmark designed to drive the next generation of video understanding models.
<p align="center">
<img src="assets/teaser.png" width="100%" height="100%" alt="Teaser">
</p>
- **Dataset Size**
The dataset consists of 800 videos and 3,200 QA pairs, with each video associated with four MCQ-based questions.
- **Multi-level Evaluation Hierarchy**
  - 🔍 **Level 1:** Retrieval & Aggregation
  - ⏱️ **Level 2:** Level 1 + Temporal Understanding
  - 🧠 **Level 3:** Level 2 + Complex Reasoning
- **Group-based Evaluation Strategy**
  - **Capability consistency groups** examine the breadth of a specific fundamental perception skill.
  - **Reasoning coherence groups** assess the depth of a model's reasoning ability.
- **Video Sources**
All videos are collected from YouTube. Over 80% were published in 2025 or later, with nearly 40% published after October 2025.
- **Video Categories**
The dataset includes four top-level domains, further divided into 31 fine-grained subcategories.
- **Metrics**
A non-linear scoring mechanism is applied to all question groups, and a first-error truncation mechanism is applied to reasoning coherence groups.
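The two mechanisms can be sketched as follows. This is an illustrative toy version, not the benchmark's actual formula (which is defined in the paper): the quadratic weighting stands in for "non-linear scoring", and the truncation credits answers only up to a group's first mistake.

```python
def truncated_credit(correct_flags):
    """First-error truncation: count correct answers only up to the first mistake."""
    credited = 0
    for ok in correct_flags:
        if not ok:
            break
        credited += 1
    return credited

def group_score(correct_flags, truncate=False):
    """Toy non-linear group score: quadratic in the credited fraction, so
    partial credit is discounted and fully correct groups are rewarded.
    The benchmark's actual weighting is specified in the paper."""
    total = len(correct_flags)
    credited = truncated_credit(correct_flags) if truncate else sum(correct_flags)
    return (credited / total) ** 2

# A reasoning coherence group where the third answer is wrong:
print(group_score([True, True, False, True], truncate=True))  # 0.25
```

Note how truncation makes the lucky fourth answer worthless: the credited count is 2 of 4, so the group scores (2/4)² = 0.25 rather than (3/4)² = 0.5625.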
---
# ๐Ÿบ About a Concrete Case
> **💡 Why this example matters**
> This video QA group demonstrates our **Reasoning Coherence** evaluation strategy and **Multi-level Hierarchy**. To answer the whole group correctly, a model must track the ball through the sequence of swaps, from the final state back to the initial one. If a model guesses the initial state correctly but fails the intermediate swaps, our **first-error truncation mechanism** accurately penalizes it for flawed reasoning.
<p align="left">
<a href="https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2/resolve/main/assets/demo.mp4">
<img src="assets/demo_cover.png" width="45%" alt="Demo video cover"/>
</a>
</p>
<p align="left">
<strong>👆 Click the cover image to view the demo video.</strong>
</p>
<p>
<strong>Q1:</strong> Did the ball exist underneath any of the shells?<br>
A. No.<br>
B. Yes. โœ…<br>
C. Cannot be determined.
</p>
<p>
<strong>Q2:</strong> Underneath which shell was the ball located at the end?<br>
A. There is no ball under any shell.<br>
B. The third shell.<br>
C. The sixth shell.<br>
D. The second shell.<br>
E. The seventh shell.<br>
F. The fifth shell.<br>
G. The fourth shell. โœ…<br>
H. The first shell.
</p>
<p>
<strong>Q3:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located after the first swap?<br>
A. There is no ball under any shell.<br>
B. The seventh shell.<br>
C. The fourth shell. โœ…<br>
D. The fifth shell.<br>
E. The sixth shell.<br>
F. The second shell.<br>
G. The third shell.<br>
H. The first shell.
</p>
<p>
<strong>Q4:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located initially?<br>
A. The seventh shell.<br>
B. The fourth shell.<br>
C. The fifth shell.<br>
D. The third shell. โœ…<br>
E. The second shell.<br>
F. There is no ball under any shell.<br>
G. The first shell.<br>
H. The sixth shell.
</p>
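To make the truncation concrete, here is a hypothetical scoring pass over this Q1–Q4 group. The gold letters come from the checkmarks above; the predictions and the scoring loop are an illustrative sketch, not the benchmark's implementation. The model below answers Q1 and Q2 correctly, loses the ball at Q3, then guesses the initial state (Q4) correctly by luck.

```python
# Gold answers from the shell-game group above (Q1-Q4).
gold = {"Q1": "B", "Q2": "G", "Q3": "C", "Q4": "D"}
# Hypothetical model predictions: wrong at Q3, lucky guess at Q4.
pred = {"Q1": "B", "Q2": "G", "Q3": "F", "Q4": "D"}

# First-error truncation: once the reasoning chain breaks, later answers
# earn no credit, so the lucky Q4 guess does not count.
credited = 0
for q in ["Q1", "Q2", "Q3", "Q4"]:
    if pred[q] != gold[q]:
        break
    credited += 1
print(credited)  # 2: only Q1 and Q2 are credited
```

Under plain per-question accuracy this model would score 3/4 despite flawed tracking; truncation caps it at 2/4, which is exactly the failure mode the group is designed to expose.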