---
license: mit
task_categories:
- video-text-to-text
We introduce RTV-Bench, a fine-grained benchmark for real-time video analysis with MLLMs, containing **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show that open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model sizes or higher frame sampling rates do not significantly boost $\mathcal{RTV}\text{-}Bench$ performance, and sometimes cause slight decreases. This underscores the need for model architectures better optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. $\mathcal{RTV}\text{-}Bench$ is built on three key principles:
* **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;
* **Hierarchical Question Structure**, combining basic and advanced queries; and
* **Multi-dimensional Evaluation**, assessing continuous perception, understanding, and reasoning abilities.
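The multi-timestamp idea above can be sketched as a small data structure: one question carries several time-stamped ground-truth answers, and the correct answer is whichever one is most recent at the query time. This is an illustrative sketch only; the field names (`video_id`, `level`, `dimension`, etc.) are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TimestampedAnswer:
    timestamp_s: float  # point in the video stream (seconds)
    answer: str         # ground-truth answer valid from this timestamp onward

@dataclass
class MTQARecord:
    # Hypothetical record layout for a Multi-Timestamp QA item.
    video_id: str
    question: str
    level: str       # hierarchical structure: e.g. "basic" or "advanced"
    dimension: str   # e.g. "perception", "understanding", or "reasoning"
    answers: list[TimestampedAnswer] = field(default_factory=list)

    def answer_at(self, t: float) -> str:
        """Answer valid at stream time t: answers evolve with scene changes."""
        valid = [a for a in self.answers if a.timestamp_s <= t]
        if not valid:
            raise ValueError("no answer defined at or before t")
        return max(valid, key=lambda a: a.timestamp_s).answer

record = MTQARecord(
    video_id="demo",
    question="How many people are in the room?",
    level="basic",
    dimension="perception",
    answers=[TimestampedAnswer(10.0, "two"), TimestampedAnswer(95.0, "three")],
)
```

With this layout, `record.answer_at(30.0)` yields `"two"` while `record.answer_at(120.0)` yields `"three"`, so an evaluator can score the same question differently depending on when in the stream it is asked.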
<!-- ## 🌟 Star History