This is the code repository of the video reasoning benchmark MMR-V.
## 👀 MMR-V Overview

MMR-V consists of **317** videos and **1,257** tasks.

## 🎬 MMR-V Task Examples

<p align="center">
<img src="./figs/data_example_intro_v4_5_16.png" width="100%" height="100%">
</p>

## 📚 Evaluation

1. Load the MMR-V Videos:

```shell
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```
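If the multi-part concatenation above is interrupted, `videos.tar` can end up truncated and `tar -xvf` will fail partway through extraction. A quick integrity check before extracting can catch this; the snippet below is a generic sketch, not part of the MMR-V repository:

```python
import tarfile

def archive_is_readable(path: str) -> bool:
    """Walk every member of a tar archive; returns False if the
    archive is truncated or otherwise unreadable."""
    try:
        with tarfile.open(path) as tar:
            for member in tar:
                pass  # iterating forces each member header to be parsed
        return True
    except tarfile.TarError:
        return False
```

Run it on `videos.tar` before extracting; a `False` result usually means one of the `videos.tar.part.*` pieces is missing or was downloaded incompletely.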

3. Please place the extracted video files under `MMR-V/videos`.

4. Other model inference details and implementation can be found in `utils/video_utils.py`.

5. Evaluation with script:

```shell
python evaluation/server_evaluation_on_MMR.py \
    --model_name gemini-2.5-flash-preview-04-17 \
    --api_url https://XXX/v1/chat/completions \
    --api_key sk-XXX \
    --with_cot \
    --frame_count 32
```

Please provide valid API information in the `--api_url` and `--api_key` fields. For open-source models running on a local `vllm` server, set `--api_url` to the local server address and leave `--api_key` empty. If the `--with_cot` flag is specified, the evaluation will use *Chain-of-Thought (CoT) prompting*; otherwise, the model will default to *directly* outputting the final answer.
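The script above talks to an OpenAI-compatible `/v1/chat/completions` endpoint. As a hedged sketch of what such a request body plausibly looks like (the field names follow the OpenAI chat format; the exact prompts and helpers used by `server_evaluation_on_MMR.py` may differ):

```python
# Illustrative CoT instruction -- not the repository's actual prompt.
COT_INSTRUCTION = (
    "Think step by step about the video frames before answering, "
    "then give your final answer."
)

def build_chat_payload(model_name: str, question: str,
                       frame_urls: list[str], with_cot: bool) -> dict:
    """Assemble an OpenAI-style chat request: the question text plus
    one image entry per sampled video frame."""
    content = [{"type": "text", "text": question}]
    content += [{"type": "image_url", "image_url": {"url": u}}
                for u in frame_urls]
    messages = []
    if with_cot:
        messages.append({"role": "system", "content": COT_INSTRUCTION})
    messages.append({"role": "user", "content": content})
    return {"model": model_name, "messages": messages}
```

The resulting dict would be POSTed to `--api_url` with `--api_key` as a bearer token; with `--with_cot`, the extra system message nudges the model to reason before committing to an answer.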
## 🎯 Experiment Results
### Main Results

<p align="center">
<img src="./figs/main.png" width="80%" height="80%">
</p>
### Performance across Different Tasks

<p align="center">
<img src="./figs/task_analysis_final.png" width="50%" height="50%">
</p>
## 🧠 Model Response Examples
The figure below presents example responses with Multimodal Chain-of-Thought (MCoT) from two reasoning models to a sample task from MMR-V. (Gemini's response omits part of the option analysis.) In the visualization, *yellow tokens represent reasoning and analysis based on textual information (e.g., the question and answer options), while green tokens indicate the model’s analysis of visual content from the video (including the question frame and evidence frames)*. It can be observed that **o4-mini** engages in deeper reasoning and analysis of the **video content**, ultimately arriving at the correct answer. In contrast, Gemini exhibits a more text-dominated reasoning strategy. This example highlights how MMR-V places greater emphasis on a model’s ability to incorporate visual information into the reasoning process and to mine multimodal cues effectively.

<p align="center">
<img src="./figs/o4-compare_00.png" width="80%" height="80%">
</p>
The full video corresponding to this example can be found here: https://www.youtube.com/watch?v=g1NuAfkQ-Hw.