  - split: test
    path: data/test-*
---

# <img src="./figs/LOGO_v3.png" alt="MMR-V: What's Left Unsaid? A Benchmark for Multimodal Deep Reasoning in Videos" width="5%"> MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos

<p align="center">
  <a href="https://huggingface.co/datasets/JokerJan/MMR-VBench">🤗 Benchmark</a> |
  <a href="https://github.com/GaryStack/MMR-V">🏠 Homepage (Coming Soon!)</a>
</p>

This is the code repository of the video reasoning benchmark MMR-V.

## 👀 MMR-V Overview

The sequential structure of videos challenges the ability of multimodal large language models (MLLMs) to 🕵️ locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match the frames mentioned in the question (hereafter referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**, which is characterized by the following features:

* *Long-range, multi-frame reasoning*: Models are required to infer and analyze evidence frames that may be far from the question frame.

* *Beyond perception*: Questions cannot be answered through direct perception alone; they require reasoning over hidden information.

* *Reliability*: All tasks are manually annotated, drawing on extensive real-world user interpretations to align with common perceptions.

* *Confusability*: Carefully designed distractor annotation strategies reduce model shortcuts.

MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to perform 🕵️ evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️ **mine evidence across long-range, multiple frames**.

## 🎬 MMR-V Task Examples

<!-- <p align="center">
<img src="./figs/data_example_intro_v4_5_16.png" width="100%" height="100%">
</p> -->

## 📚 Evaluation

1. Load the MMR-V benchmark:

```shell
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
```

2. Extract videos from the `.tar` files:

```shell
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```

3. Evaluation settings:

Please place the extracted videos under `MMR-V/videos`.

Other model inference details and implementation can be found in `utils/video_utils.py`.
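
The placement step above can be sketched as follows. This is a minimal sketch, assuming `tar -xvf videos.tar` produces a local `videos/` directory; adjust the source path to wherever the archive actually extracts on your machine.

```shell
# Put the extracted videos where the evaluation script expects them.
# Assumes the tar archive extracted to a local `videos/` directory;
# adjust the source path if your archive extracts elsewhere.
mkdir -p MMR-V/videos
if [ -d videos ]; then
  mv videos/* MMR-V/videos/
fi
```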

4. Evaluation with the script:

```shell
python evaluation/server_evaluation_on_MMR.py \
  --model_name gemini-2.5-flash-preview-04-17 \
  --api_url https://XXX/v1/chat/completions \
  --api_key sk-XXX \
  --with_cot \
  --frame_count 32
```

Please provide valid API information via the `--api_url` and `--api_key` fields. For open-source models running on a local `vllm` server, set `--api_url` to the local server address and leave `--api_key` empty. If the `--with_cot` flag is specified, the evaluation uses *Chain-of-Thought (CoT) prompting*; otherwise, the model defaults to outputting the final answer *directly*.
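
For the local-`vllm` case described above, an invocation might look like the following sketch. The model name and port are illustrative assumptions, not fixed values; match them to your own `vllm` server configuration.

```shell
# Hypothetical local evaluation against a vLLM OpenAI-compatible server.
# Model name and port are examples; match them to your `vllm` setup.
# `--api_key` is omitted, per the note above about local servers.
python evaluation/server_evaluation_on_MMR.py \
  --model_name Qwen2.5-VL-7B-Instruct \
  --api_url http://localhost:8000/v1/chat/completions \
  --with_cot \
  --frame_count 32
```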

## 🎯 Experiment Results

### Main Results

<!-- <p align="center">
<img src="./figs/main.png" width="80%" height="80%">
</p> -->

### Performance across Different Tasks

<!-- <p align="center">
<img src="./figs/task_analysis_final.png" width="50%" height="50%">
</p> -->

## 🧠 Model Response Examples

The figure below presents example responses with Multimodal Chain-of-Thought (MCoT) from two reasoning models on a sample task from MMR-V. (Gemini's response omits part of the option analysis.) In the visualization, *yellow tokens represent reasoning and analysis based on textual information (e.g., the question and answer options), while green tokens indicate the model's analysis of visual content from the video (including the question frame and evidence frames)*. It can be observed that **o4-mini** engages in deeper reasoning and analysis of the **video content**, ultimately arriving at the correct answer. In contrast, Gemini exhibits a more text-dominated reasoning strategy. This example highlights how MMR-V places greater emphasis on a model's ability to incorporate visual information into the reasoning process and to mine multimodal cues effectively.

<!-- <p align="center">
<img src="./figs/o4-compare_00.png" width="80%" height="80%">
</p> -->

The full video corresponding to this example can be found here: https://www.youtube.com/watch?v=g1NuAfkQ-Hw.