---
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---

*StreamingBench Banner*
🏠 Project Page | πŸ“„ arXiv Paper | πŸ“¦ Dataset | πŸ…Leaderboard
**StreamingBench** evaluates **Multimodal Large Language Models (MLLMs)** on real-time, streaming video understanding tasks. 🌟

------

[**NEW!** 2025.05.15] 🔥: [Seed1.5-VL](https://github.com/ByteDance-Seed/Seed1.5-VL) achieved all-model SOTA with a score of 82.80 on Proactive Output.

[**NEW!** 2025.03.17] ⭐: [ViSpeeker](https://arxiv.org/abs/2503.12769) achieved open-source SOTA with a score of 61.60 on Omni-Source Understanding.

[**NEW!** 2025.01.14] 🚀: [MiniCPM-o 2.6](https://github.com/OpenBMB/MiniCPM-o) achieved streaming SOTA with a score of 66.01 on the overall benchmark.

[**NEW!** 2025.01.06] 🏆: [Dispider](https://github.com/Mark12Ding/Dispider) achieved streaming SOTA with a score of 53.12 on the overall benchmark.

[**NEW!** 2024.12.09] 🎉: [InternLM-XComposer2.5-OmniLive](https://github.com/InternLM/InternLM-XComposer) achieved 73.79 on Real-Time Visual Understanding.

------

## 🎞️ Overview

As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before queries are made. This falls far short of the human ability to process and respond to video streams in real time, capturing the dynamic nature of multimedia content. To bridge this gap, **StreamingBench** introduces the first comprehensive benchmark for streaming video understanding in MLLMs.

### Key Evaluation Aspects

- 🎯 **Real-time Visual Understanding**: Can the model process and respond to visual changes in real time?
- 🔊 **Omni-source Understanding**: Does the model integrate visual and audio inputs synchronously in real-time video streams?
- 🎬 **Contextual Understanding**: Can the model comprehend the broader context within video streams?

### Dataset Statistics

- 📊 **900** diverse videos
- 📝 **4,500** human-annotated QA pairs
- ⏱️ Five questions per video, posed at different timestamps

#### 🎬 Video Categories
#### πŸ” Task Taxonomy
## πŸ”¬ Experimental Results ### Performance of Various MLLMs on StreamingBench - All Context
- 60 seconds of context preceding the query time
- Comparison of the Main Experiment vs. 60 Seconds of Video Context
### Performance of Different MLLMs on the Proactive Output Task

*"≤ x s" means that an answer is considered correct if the model's actual output time is within x seconds of the ground truth.*
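The "≤ x s" rule above can be sketched as a simple thresholded accuracy. This is one plausible reading of the stated criterion (absolute time difference within x seconds); the function names and sample values below are illustrative, not taken from the benchmark's code.

```python
# Illustrative sketch of the "≤ x s" Proactive Output scoring rule described
# above: an output counts as correct if its timestamp falls within x seconds
# of the ground-truth timestamp.
def is_correct(output_time: float, ground_truth_time: float, x: float) -> bool:
    """True if the output lands within x seconds of the ground truth."""
    return abs(output_time - ground_truth_time) <= x

def accuracy_at(threshold: float,
                predictions: list[tuple[float, float]]) -> float:
    """Fraction of (output_time, ground_truth_time) pairs correct at `threshold`."""
    hits = sum(is_correct(out, gt, threshold) for out, gt in predictions)
    return hits / len(predictions)

# Hypothetical predictions: (model output time, ground-truth time) in seconds.
preds = [(12.0, 11.5), (30.2, 28.0), (45.0, 45.9)]
print(accuracy_at(1.0, preds))  # sweep thresholds such as 1 s, 2 s, 3 s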
## πŸ“ Citation ```bibtex @article{lin2024streaming, title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding}, author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun}, journal={arXiv preprint arXiv:2411.03628}, year={2024} } ``` https://arxiv.org/abs/2411.03628