JokerJan committed · verified
Commit f64fa8f · Parent(s): df2cec0

Update README.md

Files changed (1):
  README.md +3 -3

README.md CHANGED
@@ -36,7 +36,7 @@ task_categories:
 <img src="./figs/LOGO_v3.png" width="30%" height="30%">
 </p>
 
-# MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos
+# MMR-V: *What's Left Unsaid?* A Benchmark for Multimodal Deep Reasoning in Videos ("Think with Videos")
 
 
 <p align="center">
@@ -48,8 +48,8 @@ task_categories:
 
 
 
-## 👀 MMR-V Data Card
-The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to 🕵️locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match frames mentioned in the question (referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️**mine evidence across long-range, multi-frame contexts**.
+## 👀 MMR-V Data Card ("Think with Video")
+The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to 🕵️locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match frames mentioned in the question (referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️**think with video and mine evidence across long-range, multi-frame contexts**.
 
 ## 🎬 MMR-V Task Examples