Update README.md
README.md CHANGED
@@ -43,17 +43,7 @@ configs:
 
 
 ## 👀 MMR-V Overview
-The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to 🕵️locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match frames mentioned in the question (hereafter referred to as ``question frame'') and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**,
-
-* *Long-range, multi-frame reasoning*: Models are required to infer and analyze evidence frames that may be far from the question frame.
-
-* *Beyond perception*: Questions cannot be answered through direct perception alone but require reasoning over hidden information.
-
-* *Reliability*: All tasks are manually annotated, referencing extensive real-world user understanding to align with common perceptions.
-
-* *Confusability*: Carefully designed distractor annotation strategies to reduce model shortcuts.
-
-MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️**mine evidence across long-range multi-frame**.
+The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to 🕵️locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match frames mentioned in the question (hereafter referred to as ``question frame'') and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️**mine evidence across long-range multi-frame**.
 
 ## 🎬 MMR-V Task Examples
 
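
Since the hunk context shows this card defines `configs:` in its front matter, the benchmark should be loadable directly with the Hugging Face `datasets` library. Below is a minimal loading sketch; the repo id `ORG/MMR-V` and the `test` split are placeholder assumptions, not values confirmed by this diff:

```python
# Minimal sketch: load the MMR-V benchmark with the Hugging Face `datasets` library.
# NOTE: "ORG/MMR-V" and split="test" are hypothetical placeholders for illustration;
# use the repo id and splits declared in this dataset card's `configs:` front matter.
from datasets import load_dataset

mmr_v = load_dataset("ORG/MMR-V", split="test")  # hypothetical repo id and split name

print(mmr_v)     # number of tasks and column names
print(mmr_v[0])  # inspect a single task (e.g. question, options, answer, video reference)
```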