xunshuhang committed
Commit eb7f035 (verified)
1 parent: 559f804

Update README.md

Files changed (1): README.md (+3 −3)

README.md CHANGED
@@ -15,14 +15,14 @@ size_categories:
 
 
 ## 🔥 News
-* **`2025.05.03`** 🌟 We are happy to release the RTV-Bench. You can find the RTV-Bench from [![hf_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-9C276A.svg)](https://huggingface.co/datasets/xunsh/RTV-Bench) or [![ms_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-8A2BE2.svg)](https://www.modelscope.cn/datasets/Jungang/RTV-Bench).
-
+* **`2025.05.03`** 🌟 We are happy to release the RTV-Bench.
+
 ## TODO
 - [ ] Release the final label json.
 - [ ] Release the evaluation code.
 - [ ] Construct a more comprehensive benchmark for real-time video analysis.
 - [ ] ···
-## 👀 $\mathcal{RTV}\text{-}Bench$ Overview
+## 👀 RTV-Bench Overview
 
 We introduce RTV-Bench, a fine-grained benchmark for MLLM real-time video analysis, which contains **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experiment results show open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model size or higher frame sampling rates do not significantly boost $\mathcal{RTV}\text{-}Bench$ performance, sometimes causing slight decreases. This underscores the need for better model architectures optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. $\mathcal{RTV}\text{-}Bench$ includes three key principles:
 * **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;