Update README.md
README.md CHANGED
@@ -7,14 +7,14 @@ language:
 size_categories:
 - 1K<n<10K
 ---
-<div align="center">
+<!-- <div align="center">
 <h1>RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video</h1>
-</div>
+</div> -->
 
-[](https://huggingface.co/datasets/xunsh/RTV-Bench)
+<!-- [](https://huggingface.co/datasets/xunsh/RTV-Bench) -->
 
 <!-- [](https://www.modelscope.cn/datasets/Jungang/RTV-Bench) -->
 
-## 🔥 News
+<!-- ## 🔥 News
 * **`2025.05.03`** 🎉 We are happy to release RTV-Bench.
 
 ## TODO
@@ -27,7 +27,7 @@ size_categories:
 We introduce RTV-Bench, a fine-grained benchmark for MLLM real-time video analysis, which contains **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model size or higher frame sampling rates do not significantly boost $\mathcal{RTV}\text{-}Bench$ performance, sometimes causing slight decreases. This underscores the need for better model architectures optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. $\mathcal{RTV}\text{-}Bench$ includes three key principles:
 * **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;
 * **Hierarchical Question Structure**, combining basic and advanced queries; and
-* **Multi-dimensional Evaluation**, assessing the ability of continuous perception, understanding, and reasoning.
+* **Multi-dimensional Evaluation**, assessing the ability of continuous perception, understanding, and reasoning. -->
 
 
 <!-- ## 🌟 Star History