Modalities: Text, Video · Formats: json · Libraries: Datasets, pandas
zl2048 committed (verified) · Commit d780efb · 1 parent: a650350

Update README.md

Files changed (1): README.md (+4 −9)

README.md CHANGED
@@ -10,6 +10,7 @@ configs:
   data_files: train.json
 ---
 
+<div align="center">
 # FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding
 
 
@@ -21,8 +22,7 @@ configs:
 <img src="./docs/image1.png" width="96%" height="50%">
 </p>
 
-
-MotionBench aims to guide and motivate the development of more capable video understanding models, emphasizing the importance of fine-grained motion comprehension.
+</div>
 
 ---
 
@@ -34,12 +34,7 @@ MotionBench aims to guide and motivate the development of more capable video und
 
 Multimodal Large Language Models (MLLMs) have shown remarkable capabilities in video content understanding but still struggle with fine-grained motion comprehension. To comprehensively assess the motion understanding ability of existing MLLMs, we introduce FAVOR-Bench, comprising 1,776 videos with structured manual annotations of various motions. Our benchmark includes both close-ended and open-ended tasks. For close-ended evaluation, we carefully design 8,184 multiple-choice question-answer pairs spanning six distinct sub-tasks. For open-ended evaluation, we develop both a novel cost-efficient LLM-free and a GPT-assisted caption assessment method, where the former can enhance benchmarking interpretability and reproducibility. Comprehensive experiments with 21 state-of-the-art MLLMs reveal significant limitations in their ability to comprehend and describe detailed temporal dynamics in video motions. To alleviate this limitation, we further build FAVOR-Train, a dataset consisting of 17,279 videos with fine-grained motion annotations. The results of finetuning Qwen2.5-VL on FAVOR-Train yield consistent improvements on motion-related tasks of TVBench, MotionBench and our FAVOR-Bench. Comprehensive assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train provide valuable tools to the community for developing more powerful video understanding models.
 
-### Features
-
-1. **Core Capabilities**: Six core capabilities for fine-grained motion understanding, enabling the evaluation of motion-level perception.
-2. **Diverse Data**: MotionBench collects diverse video from the web, public datasets, and self-synthetic videos generated via Unity3, capturing a broad distribution of real-world
-application.
-3. **High-Quality Annotations**: Reliable benchmark with meticulous human annotation and multi-stage quality control processes.
+### Evaluation Tasks
 
 <p align="center">
 <img src="./docs/image2.png" width="50%" height="20%">
@@ -53,7 +48,7 @@ Our dataset is under the CC-BY-NC-SA-4.0 license.
 
 LVBench is only used for academic research. Commercial use in any form is prohibited. We do not own the copyright of any raw video files.
 
-If there is any infringement in MotionBench, please contact shiyu.huang@aminer.cn or directly raise an issue, and we will remove it immediately.
+If there is any infringement in FAVOR-Bench, please contact zhangl22@m.fudan.edu.cn or directly raise an issue, and we will remove it immediately.
 
 ### Download
 
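The front matter above points `data_files` at `train.json`, and the page lists pandas among the supported libraries. As a minimal sketch of inspecting such a file with pandas — the actual annotation schema is not shown in this diff, so the record fields below (`video`, `question`, `options`, `answer`) are hypothetical stand-ins for the benchmark's multiple-choice QA structure:

```python
import json
import pandas as pd

# Hypothetical records imitating a multiple-choice QA layout;
# the real field names in train.json may differ.
records = [
    {"video": "0001.mp4",
     "question": "Which motion occurs first?",
     "options": ["walk", "jump", "sit", "wave"],
     "answer": "walk"},
    {"video": "0002.mp4",
     "question": "What does the subject do after turning?",
     "options": ["run", "stop", "kneel", "point"],
     "answer": "stop"},
]

# Write a tiny stand-in file so the sketch is self-contained.
with open("train.json", "w") as f:
    json.dump(records, f)

# pandas reads a top-level JSON array into one row per record.
df = pd.read_json("train.json")
print(df.shape)  # (2, 4)
```

The same `data_files: train.json` config would let the Hugging Face `datasets` library load the file directly via its `json` builder.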