Commit f6c0899 by RTVBench · verified · 1 Parent(s): 250cc32

Update README.md

Files changed (1):
  1. README.md (+5 −5)
README.md CHANGED
@@ -7,14 +7,14 @@ language:
  size_categories:
  - 1K<n<10K
  ---
- <div align="center">
+ <!-- <div align="center">
  <h1>RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video</h1>
- </div>
+ </div> -->

- [![hf_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-9C276A.svg)](https://huggingface.co/datasets/xunsh/RTV-Bench)
+ <!-- [![hf_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-9C276A.svg)](https://huggingface.co/datasets/xunsh/RTV-Bench) -->

  <!-- [![ms_checkpoint](https://img.shields.io/badge/🤖-RTV--Bench-8A2BE2.svg)](https://www.modelscope.cn/datasets/Jungang/RTV-Bench) -->
- ## 🔥 News
+ <!-- ## 🔥 News
  * **`2025.05.03`** 🌟 We are happy to release RTV-Bench.

  ## TODO
@@ -27,7 +27,7 @@ size_categories:
  We introduce RTV-Bench, a fine-grained benchmark for MLLM real-time video analysis, which contains **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show that open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that a larger model size or a higher frame sampling rate does not significantly boost $\mathcal{RTV}\text{-}Bench$ performance, and sometimes causes slight decreases. This underscores the need for better model architectures optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. $\mathcal{RTV}\text{-}Bench$ is built around three key principles:
  * **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;
  * **Hierarchical Question Structure**, combining basic and advanced queries; and
- * **Multi-dimensional Evaluation**, assessing the ability of continuous perception, understanding, and reasoning.
+ * **Multi-dimensional Evaluation**, assessing the ability of continuous perception, understanding, and reasoning. -->


  <!-- ## 🌟 Star History
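For reference, a minimal sketch of how one might load the dataset referenced by the badge above. Only the repo id (`xunsh/RTV-Bench`) comes from this diff; whether the files auto-load with the 🤗 `datasets` library, and the split and column names, are assumptions, since the README shown here documents none of them:

```python
# Hedged sketch: load RTV-Bench from the Hugging Face Hub.
# Assumes the repo's data files are in a format `datasets` can auto-load.
from datasets import load_dataset

ds = load_dataset("xunsh/RTV-Bench")  # repo id from the badge above
print(ds)                             # DatasetDict: reveals the actual split names

first_split = next(iter(ds))
print(ds[first_split][0])             # inspect one record to learn the real QA schema
```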