JokerJan committed
Commit c521ddc · verified · 1 parent: 4b4b16e

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
```diff
@@ -31,7 +31,7 @@ configs:
 ---
 
 
-# <img src="./figs/LOGO_v3.png" alt="MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos" width="5%"> MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos
+# <img src="./figs/LOGO_v3.png" alt="MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos" width="5%">MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos
 
 
 <p align="center">
@@ -58,7 +58,7 @@ MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini
 ## 🎬 MMR-V Task Examples
 
 <p align="center">
-<img src="./figs/data_example_intro_v4_5_16.png" width="100%" height="100%">
+<img src="./figs/data_example_intro_v4_5_16.png" width="80%" height="80%">
 </p>
 
 ## 📚 Evaluation
@@ -90,7 +90,7 @@ tar -xvf videos.tar
 ### Performance across Different Tasks
 
 <p align="center">
-<img src="./figs/task_analysis_final.png" width="50%" height="50%">
+<img src="./figs/task_analysis_final.png" width="30%" height="30%">
 </p>
 
 
@@ -99,6 +99,6 @@ tar -xvf videos.tar
 
 The figure below presents example responses with Multimodal Chain-of-Thought (MCoT) from two reasoning models to a sample task from MMR-V. (Gemini's response omits part of the option analysis.) In the visualization, *yellow tokens represent reasoning and analysis based on textual information (e.g., the question and answer options), while green tokens indicate the model’s analysis of visual content from the video (including the question frame and evidence frames)*. It can be observed that **o4-mini** engages in deeper reasoning and analysis of the **video content**, ultimately arriving at the correct answer. In contrast, Gemini exhibits a more text-dominated reasoning strategy. This example highlights how MMR-V places greater emphasis on a model’s ability to incorporate visual information into the reasoning process and to mine multimodal cues effectively.
 <p align="center">
-<img src="./figs/o4-compare_00.png" width="80%" height="80%">
+<img src="./figs/o4-compare_00.png" width="50%" height="50%">
 </p>
 The full video corresponding to this example can be found here: https://www.youtube.com/watch?v=g1NuAfkQ-Hw.
```
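
For reference, a minimal sketch of how the same change can be inspected locally with standard git commands, assuming the dataset repository has been cloned (the clone URL and directory below are placeholders; substitute the repository's actual address):

```sh
# Clone the dataset repository (placeholder URL; replace with the real one)
git clone <repo-url>
cd <repo-dir>

# Show what commit c521ddc changed in README.md
git show c521ddc -- README.md

# Equivalently, diff README.md against the parent commit 4b4b16e listed above
git diff 4b4b16e c521ddc -- README.md
```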