---
license: mit
task_categories:
- video-text-to-text
language:
- en
size_categories:
- 1K<n<10K
---
<!-- <div align="center">
  <h1>RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video</h1> 
</div> -->

<!-- [![hf_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-9C276A.svg)](https://huggingface.co/datasets/xunsh/RTV-Bench)  -->

<!-- [![ms_checkpoint](https://img.shields.io/badge/🤗-RTV--Bench-8A2BE2.svg)](https://www.modelscope.cn/datasets/Jungang/RTV-Bench)   -->
<!-- ## 🔥 News
* **`2025.05.03`** 🌟 We are happy to release RTV-Bench.
  
## TODO
- [ ] Release the final label JSON.
- [ ] Release the evaluation code.
- [ ] Construct a more comprehensive benchmark for real-time video analysis.
- [ ] ···
## 👀 RTV-Bench Overview

We introduce RTV-Bench, a fine-grained benchmark for MLLM real-time video analysis, which contains **552** videos (167.2 hours) and **4,631** high-quality QA pairs. We evaluated leading MLLMs, including proprietary models (GPT-4o, Gemini 2.0), open-source offline models (Qwen2.5-VL, VideoLLaMA3), and open-source real-time models (VITA-1.5, InternLM-XComposer2.5-OmniLive). Experimental results show that open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model sizes or higher frame sampling rates do not significantly boost RTV-Bench performance and sometimes cause slight decreases, underscoring the need for model architectures better optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs. RTV-Bench is built on three key principles:
* **Multi-Timestamp Question Answering (MTQA)**, where answers evolve with scene changes;
* **Hierarchical Question Structure**, combining basic and advanced queries; and
* **Multi-dimensional Evaluation**, assessing continuous perception, understanding, and reasoning.  -->
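
As a quick illustration of how the benchmark might be consumed once the annotations are published, the sketch below loads the QA data with the Hugging Face `datasets` library. The split name and the field names (`video_id`, `timestamp`, `question`, `answer`) are illustrative assumptions, not the released schema; the final label JSON is still listed as TODO above.

```python
# Minimal loading sketch for RTV-Bench, assuming the QA annotations are exposed
# as a standard Hugging Face dataset split. All field names below are
# illustrative placeholders, not the released schema.
from datasets import load_dataset

bench = load_dataset("xunsh/RTV-Bench", split="test")  # assumed split name

for sample in bench.select(range(3)):
    # In the MTQA setting the same question is asked at several timestamps,
    # so each record is expected to carry the timestamp it applies to.
    print(sample["video_id"], sample["timestamp"])
    print("Q:", sample["question"])
    print("A:", sample["answer"])
```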


<!-- ## 🌟 Star History

[![Star History Chart](https://api.star-history.com/svg?repos=LJungang/RTV-Bench&type=Date)](https://star-history.com/#LJungang/RTV-Bench&Date)

If you find our work helpful for your research, please consider citing it.

-->