---
license: mit
task_categories:
- video-text-to-text
language:
- en
tags:
- benchmark
- video
- multimodal
- MCQ
pretty_name: Video-MME-v2
size_categories:
- 1K<n<10K
---

<p align="center">
  <img src="assets/logo.png" width="100%" height="100%" alt="Video-MME-v2 logo">
</p>

<div align="center">

[Project Page](https://video-mme-v2-tmp.netlify.app) | [Paper](https://arxiv.org/abs/2604.05015) | [Code](https://github.com/MME-Benchmarks/Video-MME-v2) | [Leaderboard](https://video-mme-v2-tmp.netlify.app/#leaderboard)

</div>

<!-- <p align="center">
<a href="https://video-mme-v2.netlify.app/">🌐 Project Page</a> |
<a href="https://arxiv.org/abs/2604.05015">📄 Paper</a> |
<a href="https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2">🤗 Dataset</a> |
<a href="https://video-mme-v2.netlify.app/#leaderboard">🏆 Leaderboard</a>
</p> -->

---

# 🤗 About This Repo

This repository contains annotation data for "[Video-MME-v2: Towards the Next Stage in Benchmarks for Comprehensive Video Understanding](https://arxiv.org/abs/2604.05015)". It mainly consists of three parts: `videos/`, `test.parquet`, and `subtitle.zip`.

- `videos/` contains **800 1080p MP4 files**, organized sequentially into 40 zip archives. For example, `001.mp4` to `020.mp4` are stored in `001.zip`.

- `test.parquet` contains **3,200 QA instances**, with each video paired with **4 questions**. Each instance includes the **question**, **options**, **answer**, and auxiliary metadata such as the **video id** and **task type**.

- `subtitle.zip` contains **800 JSONL files**, each corresponding to a unique **video id**, with word-level entries and timestamps.
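
As a quick orientation, below is a minimal sketch of consuming these files. The column and field names used here (`video_id`, `question`, `answer`, `word`, `start`, `end`) are illustrative assumptions, not the guaranteed schema — inspect `test.parquet` and the JSONL files for the actual names.

```python
import json
from collections import defaultdict

# Toy stand-in for rows read out of test.parquet (e.g. via pandas or pyarrow);
# field names are assumptions for illustration only.
rows = [
    {"video_id": "001", "question": "Q1", "answer": "B"},
    {"video_id": "001", "question": "Q2", "answer": "G"},
    {"video_id": "001", "question": "Q3", "answer": "C"},
    {"video_id": "001", "question": "Q4", "answer": "D"},
]

# Group the four questions that share a video id.
by_video = defaultdict(list)
for row in rows:
    by_video[row["video_id"]].append(row)

# Toy stand-in for one word-level subtitle JSONL file from subtitle.zip:
# one JSON object per word, with timestamps.
jsonl = '{"word": "hello", "start": 0.0, "end": 0.4}\n{"word": "world", "start": 0.4, "end": 0.8}'
words = [json.loads(line) for line in jsonl.splitlines()]
transcript = " ".join(w["word"] for w in words)
print(transcript)  # hello world
```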

---

# 🩷 About This Benchmark

In 2024, our [**Video-MME**](https://video-mme.github.io/) benchmark became a standard evaluation set for frontier models like Gemini and GPT. However, as model capabilities rapidly evolve, scores on existing benchmarks are saturating, yet a clear gap remains between **leaderboard performance and actual user experience**. This indicates that current evaluation paradigms fail to capture true video understanding abilities. To address this, we spent a year redesigning the evaluation system from first principles and now introduce **Video-MME v2**, a progressive and robust benchmark designed to drive the next generation of video understanding models.

<p align="center">
  <img src="assets/teaser.png" width="100%" height="100%" alt="Teaser">
</p>

- **Dataset Size**

  The dataset consists of 800 videos and 3,200 QA pairs, with each video associated with four MCQ-based questions.

- **Multi-level Evaluation Hierarchy**

  - 🔍 **Level 1:** Retrieval & Aggregation
  - ⏱️ **Level 2:** Level 1 + Temporal Understanding
  - 🧠 **Level 3:** Level 2 + Complex Reasoning

- **Group-based Evaluation Strategy**

  - **Capability consistency groups** examine the breadth of a specific fundamental perception skill.
  - **Reasoning coherence groups** assess the depth of a model's reasoning ability.

- **Video Sources**

  All videos are collected from YouTube. Over 80% were published in 2025 or later, with nearly 40% published after October 2025.

- **Video Categories**

  The dataset includes four top-level domains, further divided into 31 fine-grained subcategories.

- **Metrics**

  A non-linear scoring mechanism is applied to all question groups, and a first error truncation mechanism is used for reasoning coherence groups.
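
  The exact scoring functions are specified in the paper, not here. Purely as an illustration of the two ideas — non-linear aggregation over a question group, and first-error truncation — a sketch might look like the following (the quadratic aggregation and the `group_score` helper are arbitrary choices for this example, not the benchmark's actual formula):

```python
def group_score(correct, truncate_on_first_error=False):
    """Score one question group.

    Illustrative only: the real Video-MME-v2 scoring is defined in the paper.
    correct: list of booleans, one per question, in reasoning order.
    """
    n = len(correct)
    if truncate_on_first_error and False in correct:
        # First-error truncation: only the prefix before the first mistake
        # earns credit, so a broken reasoning chain scores low even if later
        # guesses happen to be right.
        num_right = correct.index(False)
    else:
        num_right = sum(correct)
    # Non-linear aggregation (quadratic, chosen arbitrarily here): a fully
    # correct group is rewarded far more than scattered correct answers.
    return (num_right / n) ** 2

print(group_score([True, True, True, True]))                                 # 1.0
print(group_score([True, False, True, True], truncate_on_first_error=True))  # 0.0625
```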

---

# 📺 About a Concrete Case

> **💡 Why does this example matter?**
> This video QA group demonstrates our **Reasoning Coherence** evaluation strategy and **Multi-level Hierarchy**. To answer the final question correctly, a model must successfully track the ball backwards through the temporal swaps. If a model guesses the initial state correctly but fails the intermediate swaps, our **first error truncation mechanism** will accurately penalize it for flawed reasoning.

<p align="left">
  <a href="https://huggingface.co/datasets/MME-Benchmarks/Video-MME-v2/resolve/main/assets/demo.mp4">
    <img src="assets/demo_cover.png" width="45%" alt="Demo video cover"/>
  </a>
</p>

<p align="left">
  <strong>👆 Click the cover image to view the demo video.</strong>
</p>

<p>
<strong>Q1:</strong> Did the ball exist underneath any of the shells?<br>
A. No.<br>
B. Yes. ✅<br>
C. Cannot be determined.
</p>

<p>
<strong>Q2:</strong> Underneath which shell was the ball located at the end?<br>
A. There is no ball under any shell.<br>
B. The third shell.<br>
C. The sixth shell.<br>
D. The second shell.<br>
E. The seventh shell.<br>
F. The fifth shell.<br>
G. The fourth shell. ✅<br>
H. The first shell.
</p>

<p>
<strong>Q3:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located after the first swap?<br>
A. There is no ball under any shell.<br>
B. The seventh shell.<br>
C. The fourth shell. ✅<br>
D. The fifth shell.<br>
E. The sixth shell.<br>
F. The second shell.<br>
G. The third shell.<br>
H. The first shell.
</p>

<p>
<strong>Q4:</strong> The host performed a total of two shell swaps (defining a single swap as an instance where all shells return to an approximately straight line). Underneath which shell was the ball located initially?<br>
A. The seventh shell.<br>
B. The fourth shell.<br>
C. The fifth shell.<br>
D. The third shell. ✅<br>
E. The second shell.<br>
F. There is no ball under any shell.<br>
G. The first shell.<br>
H. The sixth shell.
</p>
|