Datasets: xunshuhang committed "Update README.md"
## 🔥 News

* **`2025.05.03`** 🌟 We are happy to release RTV-Bench.

## TODO

- [ ] Release the final label JSON.
- [ ] Release the evaluation code.
- [ ] Construct a more comprehensive benchmark for real-time video analysis.
- [ ] ···

## 👀 RTV-Bench Overview

We introduce RTV-Bench, a fine-grained benchmark for real-time video analysis with MLLMs, containing **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show that open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model sizes and higher frame sampling rates do not significantly boost RTV-Bench performance and can even cause slight decreases. This underscores the need for model architectures better optimized for video-stream processing and long sequences to advance real-time video analysis with MLLMs. RTV-Bench is built on three key principles:

* **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;